URL
stringlengths 15
1.68k
| text_list
listlengths 1
199
| image_list
listlengths 1
199
| metadata
stringlengths 1.19k
3.08k
|
---|---|---|---|
https://fr.maplesoft.com/support/help/Maple/view.aspx?path=OrderBasis | [
"",
null,
"OrderBasis - Maple Help\n\nOrderBasis\n\ncompute an order basis",
null,
"Calling Sequence OrderBasis([f1, f2, ..., fn], x, N, [d1, d2, ..., dn])",
null,
"Parameters\n\n f1, ..., fn - expressions; represent the functions to be approximated x - variable appearing in the fis N - (optional) non-negative integer; specify the order of approximation. You must specify at least one of N or d1, ..., dn. d1, ..., dn - (optional) non-negative integers; specify the degree bounds. You must specify at least one of N or d1, ..., dn.",
null,
"Description\n\n • The OrderBasis([f1, ..., fn], x, N, [d1, ..., dn]) command computes an order basis for the functions f1, ..., fn with respect to the variable x, the degrees d1, ..., dn, and the order N. It finds all polynomial coefficients that provide an identity of the form\n\n$\\mathrm{f1}\\left(x\\right)\\mathrm{v1}\\left(x\\right)+\\dots +\\mathrm{fn}\\left(x\\right)\\mathrm{vn}\\left(x\\right)=0$\n\n up to a certain number of terms and with the degree of each vi bounded. This is similar to what the IntegerRelations[PSLQ] algorithm does for finding integer relations for floating-point numbers.\n • More precisely, given functions fi assumed to have a series expansion about 0, the OrderBasis function returns a matrix whose columns provide a basis for the (mathematical) module defined by\n\n$L=\\left\\{\\left[\\mathrm{v1}\\left(x\\right),...,\\mathrm{vn}\\left(x\\right)\\right]|\\mathrm{f1}\\left(x\\right)\\mathrm{v1}\\left(x\\right)+...+\\mathrm{fn}\\left(x\\right)\\mathrm{vn}\\left(x\\right)=\\mathrm{O}\\left({x}^{N}\\right),\\mathrm{degree}\\left(\\mathrm{v1}\\right)\\le \\mathrm{d1},...,\\mathrm{degree}\\left(\\mathrm{vn}\\right)\\le \\mathrm{dn}\\right\\}$\n\n • That is, for every vector v of polynomials in L there exist n polynomials $\\mathrm{a1},\\mathrm{a2},\\mathrm{#mo\\left(…\\right)},\\mathrm{an}$\n\n$v=\\mathrm{a1}\\left(x\\right)\\mathrm{column}\\left(M,1\\right)+\\mathrm{a2}\\left(x\\right)\\mathrm{column}\\left(M,2\\right)+\\dots +\\mathrm{an}\\left(x\\right)\\mathrm{column}\\left(M,n\\right).$\n\n Here the degree of $\\mathrm{ai}$ is bounded by $\\mathrm{di}-\\mathrm{degree}\\left({M}_{i,i}\\right)$.\n • If there are three arguments with the third argument as a positive integer N, that is, OrderBasis([f1, ..., fn], x, N), then the degree bounds d1, ..., dn are assumed to be N, ..., N.\n • If there are three arguments with the third argument as a list, that is, OrderBasis([f1, ..., fn], x, [d1, ..., dn]), then the order is determined by ${\\mathrm{dn}}_{1}+\\dots +{\\mathrm{dn}}_{n}+n-1$.",
null,
"Examples\n\n > $F≔\\left[1+{x}^{2}-{x}^{7}+{x}^{12},\\mathrm{sin}\\left(x\\right),\\mathrm{exp}\\left(x\\right)\\right]:$\n > $M≔\\mathrm{OrderBasis}\\left(F,x,8,\\left[3,1,2\\right]\\right)$\n ${M}{≔}\\left[\\begin{array}{ccc}{{x}}^{{4}}{+}\\frac{{68238360}}{{4251353}}{}{{x}}^{{3}}{+}\\frac{{511942344}}{{4251353}}{}{{x}}^{{2}}{+}\\frac{{971729136}}{{4251353}}{}{x}{-}\\frac{{2669976}}{{4251353}}& \\frac{{2011699}}{{12754059}}{}{{x}}^{{3}}{-}\\frac{{23128}}{{4251353}}{}{{x}}^{{2}}{-}\\frac{{4252654}}{{4251353}}{}{x}{+}\\frac{{34692}}{{4251353}}& \\frac{{5533799}}{{12754059}}{}{{x}}^{{3}}{+}\\frac{{17049452}}{{4251353}}{}{{x}}^{{2}}{+}\\frac{{33708898}}{{4251353}}{}{x}{-}\\frac{{66060}}{{4251353}}\\\\ \\frac{{829819404}}{{4251353}}{+}\\frac{{1293611160}{}{x}}{{4251353}}& {{x}}^{{2}}{-}\\frac{{141659}}{{8502706}}{}{x}{+}\\frac{{8421469}}{{8502706}}& \\frac{{68785145}}{{8502706}}{+}\\frac{{93799511}{}{x}}{{8502706}}\\\\ \\frac{{2669976}}{{4251353}}{-}\\frac{{1804218516}{}{x}}{{4251353}}& {-}\\frac{{34692}}{{4251353}}{+}\\frac{{153223}{}{x}}{{8502706}}& {{x}}^{{2}}{-}\\frac{{136335061}}{{8502706}}{}{x}{+}\\frac{{66060}}{{4251353}}\\end{array}\\right]$ (1)\n\nEach column of $M$ has order 8.\n\n > $\\mathrm{map}\\left(\\mathrm{series},\\mathrm{map}\\left(\\mathrm{expand},\\mathrm{Matrix}\\left(1,3,F\\right)·M\\right),x,8\\right)$\n $\\left[\\begin{array}{ccc}{O}{}\\left({{x}}^{{8}}\\right)& {O}{}\\left({{x}}^{{8}}\\right)& {O}{}\\left({{x}}^{{8}}\\right)\\end{array}\\right]$ (2)\n > $\\mathrm{map}\\left(\\mathrm{degree},M,x\\right)$\n $\\left[\\begin{array}{ccc}{4}& {3}& {3}\\\\ {1}& {2}& {1}\\\\ {1}& {1}& {2}\\end{array}\\right]$ (3)\n\nImplies that a basis for all $\\left[\\mathrm{v1},\\mathrm{v2},\\mathrm{v3}\\right]$ satisfying $\\mathrm{v1}{F}_{1}+\\mathrm{v2}{F}_{2}+\\mathrm{v3}{F}_{3}$ with $\\mathrm{degree}\\left(\\mathrm{v1}\\right)\\le 3,\\mathrm{degree}\\left(\\mathrm{v2}\\right)\\le 1,\\mathrm{degree}\\left(\\mathrm{v3}\\right)\\le 2$ is a constant * column(M,3).\n\nIn the next example, OrderBasis(F,x,8) is the same as OrderBasis(F,x,8,[8,8,8]).\n\n > $M≔\\mathrm{OrderBasis}\\left(F,x,8\\right)$\n ${M}{≔}\\left[\\begin{array}{ccc}{{x}}^{{3}}{-}\\frac{{69384}}{{2011699}}{}{{x}}^{{2}}{-}\\frac{{12757962}}{{2011699}}{}{x}{+}\\frac{{104076}}{{2011699}}& {-}\\frac{{40543788}}{{2011699}}{}{{x}}^{{2}}{-}\\frac{{105266112}}{{2011699}}{}{x}{+}\\frac{{464712}}{{2011699}}& \\frac{{8097740}}{{2011699}}{}{{x}}^{{2}}{+}\\frac{{21486216}}{{2011699}}{}{x}{-}\\frac{{76416}}{{2011699}}\\\\ \\frac{{12754059}}{{2011699}}{}{{x}}^{{2}}{-}\\frac{{424977}}{{4023398}}{}{x}{+}\\frac{{25264407}}{{4023398}}& {{x}}^{{3}}{+}\\frac{{32425320}}{{2011699}}{}{{x}}^{{2}}{-}\\frac{{95498640}}{{2011699}}{}{x}{-}\\frac{{30079248}}{{2011699}}& {-}\\frac{{5533799}}{{2011699}}{}{{x}}^{{2}}{+}\\frac{{22284705}}{{2011699}}{}{x}{+}\\frac{{10793304}}{{2011699}}\\\\ {-}\\frac{{104076}}{{2011699}}{+}\\frac{{459669}{}{x}}{{4023398}}& {-}\\frac{{464712}}{{2011699}}{+}\\frac{{135810072}{}{x}}{{2011699}}& {{x}}^{{2}}{-}\\frac{{32355936}}{{2011699}}{}{x}{+}\\frac{{76416}}{{2011699}}\\end{array}\\right]$ (4)\n\nEach column of $M$ has order 8.\n\n > $\\mathrm{map}\\left(\\mathrm{series},\\mathrm{map}\\left(\\mathrm{expand},\\mathrm{Matrix}\\left(1,3,F\\right)·M\\right),x,8\\right)$\n $\\left[\\begin{array}{ccc}{O}{}\\left({{x}}^{{8}}\\right)& {O}{}\\left({{x}}^{{8}}\\right)& {O}{}\\left({{x}}^{{8}}\\right)\\end{array}\\right]$ (5)",
null,
"References\n\n Beckermann, B., and Labahn, G. \"Fraction-Free Computation of Matrix Rational Interpolants and Matrix GCDs.\" SIAM Journal on Matrix Analysis and Applications. Vol. 22 No. 1. (2000): 114-144.\n Beckermann, B. and Labahn, G. \"A Uniform Approach for the Fast Computation of Matrix-Type Pade Approximants.\" SIAM Journal on Matrix Analysis and Applications. Vol. 15 No. 3. (1994): 804-823."
]
| [
null,
"https://bat.bing.com/action/0",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.58843946,"math_prob":0.99998856,"size":3519,"snap":"2022-27-2022-33","text_gpt3_token_len":1329,"char_repetition_ratio":0.13570413,"word_repetition_ratio":0.11253197,"special_character_ratio":0.43535095,"punctuation_ratio":0.2513158,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99950147,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-25T13:05:28Z\",\"WARC-Record-ID\":\"<urn:uuid:f2f08d12-5ef7-441d-a79b-f3a870ed743d>\",\"Content-Length\":\"328747\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:65208a30-2595-4386-a4b3-c37093d0c186>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ad83378-37b0-45c8-8fe9-958f03d97ec8>\",\"WARC-IP-Address\":\"199.71.183.28\",\"WARC-Target-URI\":\"https://fr.maplesoft.com/support/help/Maple/view.aspx?path=OrderBasis\",\"WARC-Payload-Digest\":\"sha1:B42AJSCTNNPJLQUENKY37F62I4GLRWWN\",\"WARC-Block-Digest\":\"sha1:NEX6QOXS5JLLKCF4FZHKKMURKW6SVLW3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103035636.10_warc_CC-MAIN-20220625125944-20220625155944-00627.warc.gz\"}"} |
https://docs.google.com/document/d/e/2PACX-1vSCNs4MfLKgfaJLQLmHrXD6Eij4HMmza-q043hFejd-zA5Ho6qHHpaXHsLRDEcbLTB0Y21-DfTz9PJt/pub | [
"Algebra II 1st Quarter 3rd Quarter Equations & Inequalities Rational Exponents and Radical Functions a. Evaluate, Simplify, & Solve Linear & Absolute Value a. Evaluate nth Roots and Rational Exponents Equations & Inequalities b. Perform Function Operations and Composition c. Graph Square Root and Cube Root Functions d. Solve Radical Equations Linear Equations & Functions a. Represent Relations and Functions b. Find Slope and Rate of Change Exponential and Logarithmic Functions c. Graph Equations of Lines a. Graph Exponential Growth & Decay Functions d. Draw Scatter Plots & Best Fitting Lines b. Evaluate and Graph Logarithmic Functions e. Graph Linear Inequalities in Two variables d. Solve Exponential and Logarithmic Functions Linear Systems and Matrices Counting Methods and Probability a. Solve Linear Systems by Graphing and Algebraically a. Apply Counting Principle & Permutations b. Graph and Solve Linear Inequalities b. Use Combinations and the Binomial Theorem c. Perform Basic Matrix Operations c. Define and Use Probability d. Multiply Matrices d. Find Probability of Disjoint and Overlapping Events e. Evaluate Determinants and Apply Cramer's Rule e. Find Probabilities of Independent and Dependent f. Use Inverse Matrices to Solve Linear Systems Events f. Construct & Interpret Binomial Distributions 2nd Quarter Quadratic Functions and Factoring 4th Quarter a. Graph Quadratic Functions Data Analysis and Statistics b. Solve Quadratic Equations by Factoring a. Find Measures of Central Tendency & Dispersion c. Perform Operations with Complex Numbers b. Use Normal Distributions d. Solve Quadratic Equations by Completing the Square c. Select & Draw Conclusions from Samples e. Solve Quadratic Equations by Using the Quadratic Formula f. Graph and Solve Quadratic Inequalities Sequences & Series a. Define & Use Sequences and Series Polynomials & Polynomial Functions b. Analyze Arithmetic Sequences & Series a. Use Properties of Exponents c. Analyze Geometric Sequences & Series b. Add, Subtract, & Multiply Polynomials d. Find Sums of Infinite Geometric Series c. Factor and Solve Polynomial Equations e. Use Recursive Rules with Sequences d. Find Rational Zeros"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6923163,"math_prob":0.9943007,"size":2186,"snap":"2019-13-2019-22","text_gpt3_token_len":520,"char_repetition_ratio":0.16590284,"word_repetition_ratio":0.0,"special_character_ratio":0.17978042,"punctuation_ratio":0.13239437,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9984514,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-25T04:03:37Z\",\"WARC-Record-ID\":\"<urn:uuid:e8a7129e-0cdd-4532-8f1b-ffe0a206ea8f>\",\"Content-Length\":\"37209\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0521de48-1533-4065-acb3-42c904714572>\",\"WARC-Concurrent-To\":\"<urn:uuid:dda7572a-58b8-4f15-968e-30e82d5ade44>\",\"WARC-IP-Address\":\"172.217.7.238\",\"WARC-Target-URI\":\"https://docs.google.com/document/d/e/2PACX-1vSCNs4MfLKgfaJLQLmHrXD6Eij4HMmza-q043hFejd-zA5Ho6qHHpaXHsLRDEcbLTB0Y21-DfTz9PJt/pub\",\"WARC-Payload-Digest\":\"sha1:O5QQAM3SBPTNUC6RYJUDNF7IU3HMIDGL\",\"WARC-Block-Digest\":\"sha1:HU253TQ65FBXLMKD7PYLEZ25SEIFLGW3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912203548.81_warc_CC-MAIN-20190325031213-20190325053213-00110.warc.gz\"}"} |
https://discourse.bokeh.org/t/set-upper-lower-bounds-individually/1917 | [
"",
null,
"# set upper/lower bounds individually\n\nHello,\n\n@bryevdv says there’s a way to accomplish this, but I can’t find any clues in the docs:\n\nSetting both limits of an axis is easy:\n\n``````p = figure(x_range=[1, 5])\nor p.set(x_range=[1,5])\n\n``````\n\nSetting lower bound only could be done though using (nonexisting) `p.get_x_range()` but as discussed in #4371 it is not possible to get calculated values of the `x_range` in python because the calculation is done on the client side.\n\nIt’d be great if one could set only one limit manually and let bokeh calculate the other one for example using this notation:\n\n``````p = figure(x_range=[1, None])\n\n``````\n\n(it doesn’t work right now).\n\nHi,\n\nIf you want any auto-ranging at all, then the Plot's range objects need to be DataRange1d. These are what are added by default. However, when you provide a tuple (or list) like this:\n\np = figure(x_range=[1, 5])\n\nThat is a just a shorthand convenience or setting a different kind of range object, a Range1d, which does *not* do any sort of auto-ranging. It's just a fixed start and end tuple, basically.\n\nSo if you want some auto-ranging, there needs to be a DataRange1d. So there are two options:\n\n* update the default ones that figure creates, or\n* create data ranges explicitly by hand\n\nThose two options look roughly like:\n\np = figure()\np.x_range.start = 1\n\nor\n\np = figure(x_range=DataRange1d(start=1)\n\nIt's conceivable that\n\np = figure(x_range=[1, None])\n\ncould be made to do the above \"automagically\" however it would first merit some discussion. There's already a way to accomplish it without more \"magic\" (which is often harder to document).\n\nThanks,\n\nBryan\n\n···\n\nOn Jan 24, 2017, at 10:04 PM, [email protected] wrote:\n\nHello,\n\n@bryevdv says there's a way to accomplish this, but I can't find any clues in the docs:\n\nSetting both limits of an axis is easy:\n\np = figure(x_range=[1, 5])\nor p.set(x_range=[1,5])\n\nSetting lower bound only could be done though using (nonexisting) p.get_x_range() but as discussed in #4371 it is not possible to get calculated values of the x_range in python because the calculation is done on the client side.\n\nIt'd be great if one could set only one limit manually and let bokeh calculate the other one for example using this notation:\n\np = figure(x_range=[1, None])\n\n(it doesn't work right now).\n\n--\nYou received this message because you are subscribed to the Google Groups \"Bokeh Discussion - Public\" group.\nTo unsubscribe from this group and stop receiving emails from it, send an email to [email protected].\nTo post to this group, send email to [email protected].\nTo view this discussion on the web visit https://groups.google.com/a/continuum.io/d/msgid/bokeh/a18d6a46-47d8-438d-b51c-a0df03808c96%40continuum.io."
]
| [
null,
"https://discourse-uploads.bokeh.org/original/2X/f/f503a01092bbf1758cf8db3ec9ce14a0b0412566.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8810243,"math_prob":0.7247124,"size":2108,"snap":"2020-34-2020-40","text_gpt3_token_len":543,"char_repetition_ratio":0.10313688,"word_repetition_ratio":0.0060790274,"special_character_ratio":0.26233396,"punctuation_ratio":0.13411765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9844775,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-20T05:52:09Z\",\"WARC-Record-ID\":\"<urn:uuid:42a6f6ec-4588-4568-ac36-d46995e969b2>\",\"Content-Length\":\"17060\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cd754839-8be1-4d9c-a005-5d010cfb70cb>\",\"WARC-Concurrent-To\":\"<urn:uuid:7fbeca75-fe6d-42ad-af85-13cf215b01a0>\",\"WARC-IP-Address\":\"104.24.108.228\",\"WARC-Target-URI\":\"https://discourse.bokeh.org/t/set-upper-lower-bounds-individually/1917\",\"WARC-Payload-Digest\":\"sha1:6XNZPOIZ3FOPHN4WJ7E2JLMTNJ744BP3\",\"WARC-Block-Digest\":\"sha1:I6BRX4B2JMTOPRDI7RY76B5E45BP3FCW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400193391.9_warc_CC-MAIN-20200920031425-20200920061425-00233.warc.gz\"}"} |
https://programmerah.com/python-run-error-typeerror-hog-got-an-unexpected-keyword-argument-visualise-34200/ | [
"# Python Run Error: TypeError: hog() got an unexpected keyword argument ‘visualise‘”\n\nRunning Python code reports an error “typeerror: hog() got an unexpected keyword argument ‘visualise'”\n\nFD, hog_ image = hog(image, orientations=8, pixels_ per_ cell=(12, 12),\ncells_ per_ Block = (1, 1), visualise = true) can be normal by changing visualise to visualize, that is (changing the letter S to Z):\n\n`````` fd, hog_image = hog(image, orientations=8, pixels_per_cell=(12, 12),\ncells_per_block=(1, 1), visualize=True)``````"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5076426,"math_prob":0.80178666,"size":2095,"snap":"2021-31-2021-39","text_gpt3_token_len":527,"char_repetition_ratio":0.13582018,"word_repetition_ratio":0.020477816,"special_character_ratio":0.25202864,"punctuation_ratio":0.15864022,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96754193,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-28T00:53:08Z\",\"WARC-Record-ID\":\"<urn:uuid:a6c963c1-d47f-4bfb-a6e2-c214fcb9040b>\",\"Content-Length\":\"35409\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a677738b-5a73-4ed2-8c0c-feafdb89219d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c1fd337-7798-409d-b06d-d1298bf99e4f>\",\"WARC-IP-Address\":\"172.67.214.245\",\"WARC-Target-URI\":\"https://programmerah.com/python-run-error-typeerror-hog-got-an-unexpected-keyword-argument-visualise-34200/\",\"WARC-Payload-Digest\":\"sha1:IGSKIFLWHGPBMG7WQA55QT6SIVQOLEDO\",\"WARC-Block-Digest\":\"sha1:JEJJGELKUC7TTIZGIMOVBIX7JQIQZ2AD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780058589.72_warc_CC-MAIN-20210928002254-20210928032254-00442.warc.gz\"}"} |
https://www.indiabix.com/civil-engineering/hydraulics/discussion-3756 | [
"# Civil Engineering - Hydraulics - Discussion\n\nDiscussion Forum : Hydraulics - Section 2 (Q.No. 46)\n46.\nA cylinder 3 m in diameter and 4 m long retains water one side as shown in the below figure. If the weight of the cylinder is 2000 kgf, the vertical reaction at A is",
null,
"14,137 kgf\n5,863 kgf\n20,000 kgf\n18,000 kgf.\nExplanation:\nNo answer description is available. Let's discuss.\nDiscussion:\n13 comments Page 1 of 2.\n\nJENISH POUDEL said: 1 month ago\nReaction = vertical force by water on A+2000kg\nVertical force = wt of water in a volume[volume here is vertically upward projected portion of water without the semicylindrical portion]\nVertical force = 1000 * vol = 1000 * [3 * 1.4 * 4 - 0.5 * 3.14 * 3^(2) * 4/4 = 1000 * [18 - 14.137].\n= 3863kg force.\nThen the reaction = 3863 + 2000 = 5863kg.\n\nRakesh roushan said: 2 years ago\nThank you for explaining @Nushi.\n\nAadil bhat said: 3 years ago\n@Nungshi Jr.\n\nYes, you are right. Thanks.\n\nNushi said: 5 years ago\nVertical reaction = wt Of cylinder - vertical Force.\nVertical force= specific weight * volume,\n= 1000 * &pie; *1.5^2 * 4/2.\n= 14137 kgf.\n\nRa= 20000-14137 = 5863 kgf.\n(2)\n\nNungshi Jr said: 5 years ago\nReaction at B = horizontal force (Fh).\nFh= specific wt * area * y.\n= 1000 * 3 * 4 * 3/2,\n= 18000 kgf.\n\nBasavaraj said: 5 years ago\nF(verticle) = 3.142 * R^2 * L * 9.81.\n\nKumar Adarsh said: 5 years ago\nYes, right @Baleno.\n\nAnshu said: 5 years ago"
]
| [
null,
"https://www.indiabix.com/_files/images/civil-engineering/hydraulics/159-7.237-1.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6800268,"math_prob":0.97576326,"size":2231,"snap":"2023-40-2023-50","text_gpt3_token_len":755,"char_repetition_ratio":0.14638527,"word_repetition_ratio":0.424821,"special_character_ratio":0.3814433,"punctuation_ratio":0.16299559,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97357786,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T19:57:03Z\",\"WARC-Record-ID\":\"<urn:uuid:f173e144-4161-4db1-8534-5a9e272ea156>\",\"Content-Length\":\"47179\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fd06946b-ba0a-49fb-96e1-372afceb1ecc>\",\"WARC-Concurrent-To\":\"<urn:uuid:51aa5a6d-573c-4a93-804e-58f8a30540fd>\",\"WARC-IP-Address\":\"34.100.254.150\",\"WARC-Target-URI\":\"https://www.indiabix.com/civil-engineering/hydraulics/discussion-3756\",\"WARC-Payload-Digest\":\"sha1:2VVY4YVJFH3OASSHYSOGXITUE7D7PQT2\",\"WARC-Block-Digest\":\"sha1:JRMHMLU2DHWDF2HYP6EPRUVC4V4F5MMT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679102637.84_warc_CC-MAIN-20231210190744-20231210220744-00248.warc.gz\"}"} |
https://assumptionsofphysics.org/problems/005-BetterQuantumStateSpace | [
"## #005: Find a better mathematical characterization for quantum states\n\nCategory: Quantum mechanics - Tags: Vector spaces, Function spaces\n\nFind a better physically motivated characterization for the state space of quantum mechanics.\n\nMathematical problem. The standard way to represent quantum states is using vectors in Hilbert spaces. The requirements for a Hilbert space can be broken up into the following components:\n\n• Vector space\n• + normed (normed vector spaces)\n• + complete under the norm (Banach space)\n• + every closed linear subspace is the range of a projection (Hilbert space) (see here) The completion under the norm seems the only non-physical requirement.\n\nOn the other hand, the Schwartz_space is a dense subset of $$L^2$$, which seems better suited as:\n\n• It is closed under Fourier transform\n• It has finite expectations for all polynomials of position and momentum\n• It is dense over $$L^2$$ In fact, requiring the second feature means recovering the Schwartz space.\n\nDoes the Schwartz space satisfy all other characteristics of a Hilbert space, except the closure under the norm? If we just drop the closure under the norm, what do we lose?\n\nPhysical significance. If we look at the list of defining properties of a Hilbert space, completeness is the only one that does not make physical sense. The linearity of the normed vector space can be understood as coming from linearity of probability space; the existence of projections is the requirement of being able to identify states (i.e. a measurement that outputs 1 if the state matches and 0 if it doesn’t). The completeness would mean that the limit of a sequence of state preparation always leads to a state, but this is not the case: we can (in principle) prepare states with narrower and narrower spatial distribution, but not zero spatial distribution (delta Dirac). The idea would be that the requirement of completeness makes the mathematical space nicer to work with, but not physically meaningful.\n\nNotes. Even listing all problems with Hilbert spaces may be useful.\n\nFor example, a related problem with Hilbert spaces ($$L^2 specifically$$ see Adrian Heathcote - Unbounded operators and the incompleteness of quantum mechanics) is that self-adjoint operators that are defined on the whole space must be bounded. This means that unbounded operators must not be defined on all states: there must be states for which the average position is not well-defined. Again, note that this is not true for the Schwartz space: the position is unbounded (there is no maximum value for position) but all states have a well defined average position."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9074526,"math_prob":0.974806,"size":2608,"snap":"2022-27-2022-33","text_gpt3_token_len":534,"char_repetition_ratio":0.12327189,"word_repetition_ratio":0.0,"special_character_ratio":0.20015338,"punctuation_ratio":0.08695652,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99061096,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-18T16:57:58Z\",\"WARC-Record-ID\":\"<urn:uuid:66a3703c-cb6e-47ea-b56f-bb0898499b80>\",\"Content-Length\":\"10767\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:76170541-8eee-4020-bf0b-571fdabe409e>\",\"WARC-Concurrent-To\":\"<urn:uuid:8ce8b44a-e2b4-43f1-9f37-1fc41b797866>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://assumptionsofphysics.org/problems/005-BetterQuantumStateSpace\",\"WARC-Payload-Digest\":\"sha1:L32Z5ON7R6MVOHII7FCRHEYTULQ37EQK\",\"WARC-Block-Digest\":\"sha1:KBS4LXBE4WHKM2M7WX3XFT6MYGIGOVFI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573242.55_warc_CC-MAIN-20220818154820-20220818184820-00206.warc.gz\"}"} |
https://hvacrschool.com/acfm-scfm-baseball-dents/ | [
"## ACFM, SCFM, & Baseball dents",
null,
"This is a VERY in-depth look at ACFM vs. SCFM and why it matters to airflow measurement from Steven Mazzoni. Thanks, Steve!\n\nImagine your job is to figure out how fast baseballs were traveling before they hit a sheet-rock wall. The only method you have is to measure the depth of the dent left in the wall. Suppose, at 60 mph, the ball leaves a ¼”-deep dent. At 80 mph, it leaves a ½”-dent, and so forth. No problem. All you have to do is measure the dents, and you can derive the speed (velocity).\n\nBut it’s more complicated than that. You discover that some of the balls are a bit lighter than others. Otherwise, they are all identical. What does this mean? The lighter balls leave behind a shallower dent than the heavy ones, even if they traveled at the same velocity before hitting the wall. Obviously, more is needed than just the depth of the dents. The weight of the balls must also be factored in. Suppose you can weigh the balls in addition to measuring the depth of the dent they leave. You come up with an equation that considers the ball’s weight and depth of the dent and solves for its velocity.\n\nSomething similar to the baseballs is happening when we measure airflow. To determine the airflow (cfm, or ft3/min) in a duct, all we need to find out is its average velocity (ft/min) and the duct area (ft2). Measuring the air’s velocity (duct traverse) is the tricky part. A pitot tube and manometer measure the speed of the air flowing in a duct. At a faster speed or velocity, more force is imparted to the column of water in the manometer. The pressure difference (velocity pressure or VP) determines the air’s velocity in feet/minute.\n\nHowever, like the baseballs, air’s density isn’t always the same. Thus, the force it imparts to the column of water when traveling at a given velocity changes if its density changes. “Heavy” air will lift a column of water to a higher level (velocity pressure, in inches of water) on a manometer than “light” air will, even though it's moving at the exact same velocity. Thus, the velocity pressure and the air’s density must be factored in before we can determine its velocity.\n\nWhat factors determine the air’s density? Mainly its temperature and the barometric pressure. Warm air is lighter (less dense) than cold air. Air at higher barometric pressures near sea level is denser than air at lower pressures (high altitudes). The air’s moisture content also plays a minor role. Moist air (high humidity) at a given temperature is lighter than dry air at the same temperature.\n\nThe flow of air (volumetric) is usually expressed in cfm (ft3/min). To be more specific, actual cfm (ACFM) and standard cfm (SCFM) are used. ACFM & SCFM have been defined as follows:",
null,
"Air is at “standard conditions” when its density is @ 0.075 lb/ft3. We can thus conclude a couple of key points. First, if the airflow measurement is taken at or near standard conditions, the ACFM and the SCFM will have the exact same value. Second, if the reading were taken on air at a significantly different density, ACFM and SCFM would have two different values.\n\nLet’s work through an example duct traverse at a high elevation and temperature to show how to determine ACFM & SCFM. Suppose a 4-point duct traverse has been taken at the following conditions. A pitot tube was used to obtain velocity pressures (VP), but these have not yet been converted to velocity (ft/min). Let’s keep it simple and assume a 1.0 ft2 duct.\n\n Elevation: 4,000 ft Barometric pressure: 25.84” Hg Duct temperature: 120°F Duct area: 12” x 12” = 1.0 ft2 Actual air density: 0.059 lb/ft3 Standard air density: 0.075 lb/ft3 Actual velocity pressure (VP) readings: 0.020” WC 0.025” WC 0.030” WC 0.035” WC\n\nNow, what do we do with these four velocity pressure readings? We need to convert them to velocity using one of the equations below. The “4,005” equation is only valid for air at standard density. The “1,096” equation works at any density.",
null,
"Here is where it gets interesting. Which density should we use to convert the VP readings to velocity to determine ACFM & SCFM? The actual density (0.059 lb/ft3), or standard density (0.075 lb/ft3)? We’ll explore two options.\n\n• Option 1: Calculate the actual average duct velocity using the actual density of the air measured.\n\nThen, multiply average velocity by the duct area in ft2. The result will be in ACFM.\n\nCalculate ACFM Using Option 1:",
null,
"0.020” WC = 638 ft/min 0.025” WC = 713 ft/min 0.030” WC = 782 ft/min 0.035” WC = 844 ft/min Avg = 744 ft/min",
null,
"• Determine SCFM for our example using one of these two methods:\n• Method A: Determine the mass flow rate of the ACFM. From that, determine what volumetric flow at standard conditions would result in the same mass flow. The result will be in SCFM.",
null,
"• Method B: Multiply the ACFM by the ratio of the actual density to standard density. The result will be in SCFM.",
null,
"• Method A & B both result in @ 585 SCFM.\n• Option 2: Even though we realize the actual density at the traverse was not standard, calculate using the standard density. Multiply by the area in ft2. Then take the result and apply a correction factor to determine ACFM & SCFM. Calculate velocity and flow using the same VPs from the non-standard density traverse but using the standard density 4,005 formula:",
null,
"0.020” WC = 566 ft/min 0.025” WC = 633 ft/min 0.030” WC = 694 ft/min 0.035” WC = 749 ft/min Avg = 661 ft/min",
null,
"• Is this 661 “cfm” the ACFM? No. Is it the SCFM? No. Obviously, it falls in between the 744 ACFM and 585 SCFM we calculated above. What is it, then? It is a value that, when corrected, can get us to the true ACFM & SCFM.\n• Determine a unique correction factor for our example as follows. Notice the square root function:",
null,
"• Now what? Use this correction factor to convert the “uncorrected” 661 cfm to ACFM as follows:",
null,
"• Next, use the same correction factor to convert the “uncorrected” 661 cfm to SCFM as follows:",
null,
"Conclusions: Consider the type of instrument you are using to measure the differential pressure coming from a pitot tube. Velocity pressure readings from inclined manometers and simple differential pressure instruments will need the correct math applied. Electronic ones may be able to correct for local density and display the actual velocity.\n\n• Both Option 1 & 2 resulted in the same ACFM & SCFM values.\n• In Option 1, we used the actual local density to determine the actual average duct velocity and the ACFM. From the ACFM, we calculated the SCFM based on either the mass flow (Method A) or the ratio of actual density to standard density (Method B).\n• In Option 2, standard density was used to calculate a “reference cfm.” This reference cfm did not reflect reality but was used to calculate ACFM & SCFM. A correction factor had to be calculated (square root of the ratio of the two densities) and used to convert the reference cfm to ACFM and SCFM. This method is similar to assuming all the baseballs are the heavy ones and calculating a reference speed based on that incorrect premise. Then, you must correct the result based on the actual weight of the baseball.\n• To avoid confusion, it seems best to use Option 1 and Method B when working with air under non-standard conditions. At least then, the calculation gives you the ACFM directly. SCFM can be calculated easily based on the ratio of the two densities. No other correction factors are needed.\n\n—Steven Mazzoni\n\nHVAC/R Instructor\n\n##",
null,
"",
null,
"Related Tech Tips",
null,
"Venting for High efficiency Gas Furnaces - Part 1 Materials\nThis article was written by senior furnace tech Benoît (Ben) Mongeau. Ben hails from the frozen tundra of Ontario, Canada, where high-efficiency gas furnaces are commonplace. While some codes and practices may be different from the US, I find that most of it is common sense and translates pretty well. One glaring difference between Canada […]",
null,
"The House is the Biggest Duct\nThis article was inspired by a podcast episode about the house being the most underappreciated duct with Joe Medosch. You can listen to that podcast HERE. We probably think about sheet metal, flex duct, or the ever-controversial duct board when we think about ducts. It’s probably shiny and out of sight in most homes. But […]",
null,
"Is Liquid Incompressible?\nCompressibility is the ability of a substance to be squeezed into a smaller volume. It is the change in volume and increase in density that results from an increase in pressure. The subject of compression should be familiar to HVAC techs. After the return air passes over the boiling refrigerant in the evaporator coils, the […]\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed."
]
| [
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20794%20530%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20749%20171%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20753%20329%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20526%20567%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20644%2080%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20652%20420%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20746%20158%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20593%20102%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20608%2086%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20631%20396%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20758%20117%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20532%20221%22%3E%3C/svg%3E",
null,
"https://hvacrschool.com/wp-content/themes/hvacrschool/assets/svg/tips.svg",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20210%20140%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20210%20140%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20210%20140%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20210%20140%22%3E%3C/svg%3E",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.902279,"math_prob":0.9683151,"size":7477,"snap":"2022-27-2022-33","text_gpt3_token_len":1853,"char_repetition_ratio":0.1411749,"word_repetition_ratio":0.022053232,"special_character_ratio":0.24903037,"punctuation_ratio":0.11439842,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98836154,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-13T09:00:26Z\",\"WARC-Record-ID\":\"<urn:uuid:73446cf9-06cd-41eb-b406-76d412cab428>\",\"Content-Length\":\"388109\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:034d2b26-f0e7-438d-8eaa-919a69ae5479>\",\"WARC-Concurrent-To\":\"<urn:uuid:68fa92e8-5651-422a-ba3c-54b3ca69e603>\",\"WARC-IP-Address\":\"35.208.143.255\",\"WARC-Target-URI\":\"https://hvacrschool.com/acfm-scfm-baseball-dents/\",\"WARC-Payload-Digest\":\"sha1:WU4KD252X2JPMEH2VM27IDHXPYCEHILR\",\"WARC-Block-Digest\":\"sha1:3I3PHAXV7ZFLFLB2AJX3J5DALKXSRIVZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571911.5_warc_CC-MAIN-20220813081639-20220813111639-00408.warc.gz\"}"} |
http://sosmath.com/algebra/solve/solve3/s32/s3211/s3211.html | [
"",
null,
"## SOLVING EQUATIONS CONTAINING ABSOLUTE VALUE(S)",
null,
"Note:\n\n•",
null,
"if and only if",
null,
"•",
null,
"if and only if a + b = 3 or a + b = -3\n\n• Step1: Isolate the absolute value expression.\n\n• Step 2: Set the quantity inside the absolute value notation equal to + and - the quantity on the other side of the equation.\n\n• Step 3: Solve for the unknown in both equations.\n\n• Step 4: Check your answer analytically or graphically.\n\nSolve for x in the following equation.\n\nIf you would like to see the answers and the solutions, click on Solution.\n\nProblem 3.2a :",
null,
"Solution\n\nProblem 3.2b :",
null,
"Solution\n\nProblem 3.2c :",
null,
"Solution\n\nIf you would like to go back to the equation table of contents, click on Contents.",
null,
"[Algebra] [Trigonometry]\n[Geometry] [Differential Equations]\n[Calculus] [Complex Variables] [Matrix Algebra]",
null,
"S.O.S MATHematics home page\n\nDo you need more help? Please post your question on our S.O.S. Mathematics CyberBoard.",
null,
"Author:Nancy Marcus"
]
| [
null,
"http://sosmath.com/logos001.gif",
null,
"http://sosmath.com/gif/bar.gif",
null,
"http://sosmath.com/algebra/solve/solve3/s32/s3211/img2.gif",
null,
"http://sosmath.com/algebra/solve/solve3/s32/s3211/img3.gif",
null,
"http://sosmath.com/algebra/solve/solve3/s32/s3211/img4.gif",
null,
"http://sosmath.com/algebra/solve/solve3/s32/s3211/img5.gif",
null,
"http://sosmath.com/algebra/solve/solve3/s32/s3211/img6.gif",
null,
"http://sosmath.com/algebra/solve/solve3/s32/s3211/img7.gif",
null,
"http://sosmath.com/gif/bar.gif",
null,
"http://sosmath.com/gif/lighthouse.gif",
null,
"http://sosmath.com/gif/sailbar.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7275007,"math_prob":0.9804579,"size":984,"snap":"2019-13-2019-22","text_gpt3_token_len":279,"char_repetition_ratio":0.113265306,"word_repetition_ratio":0.01183432,"special_character_ratio":0.2703252,"punctuation_ratio":0.15544042,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9961724,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,null,null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-21T12:44:29Z\",\"WARC-Record-ID\":\"<urn:uuid:1f40e44e-80cc-4617-ba6c-23ef33ce5031>\",\"Content-Length\":\"6406\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e4ba8ae9-16ed-4111-b386-3438ed54ab32>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ed8f0a4-8daa-4017-90b0-a22118ca381a>\",\"WARC-IP-Address\":\"69.13.193.154\",\"WARC-Target-URI\":\"http://sosmath.com/algebra/solve/solve3/s32/s3211/s3211.html\",\"WARC-Payload-Digest\":\"sha1:JF4HZOGWMQGILBIIDCPGY3B6NW7XT5XX\",\"WARC-Block-Digest\":\"sha1:C6YFM2WAD77JX63XFECRKQU2N3KOG4MB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202523.0_warc_CC-MAIN-20190321112407-20190321134407-00167.warc.gz\"}"} |
http://www.dev-hq.net/javascript/3--variables-and-maths | [
"# JavaScript: Variables and Maths\n\n## Variables\n\nBefore we talk about how to create variables in JavaScript, let's first make sure that we all know what variables are. I usually think of a variable like a box - we can create a box whenever we like, and then we can put things in it, see what's in it, and even change what's in it.\n\nAlthough in other programming languages we often have to specify what data our 'box' (variable) can hold - JavaScript does all this hard work for us, and we can choose what kind of data our variable can hold, simply by how we specify the value we set it to. If we want a combination of different characters/letters (like a string of words or something), we specify the value that we set the variable to in either single or double quotes - whereas if we want to set the value to an integer (a whole number), then we can simply set the variable to the number we want to set it to. With this theory out of the way, let's actually learn how to create variables!\n\nThe first keyword that we use to create variables in JavaScript, is the `var` keyword, which stands for variable. For a number of applications you can actually exclude this keyword and it'll work just fine, but the keyword always makes sure that you are creating a new variable rather than setting the value of an already-existing one in the cases where this matters (if you don't have a clue what I'm talking about, you'll see soon enough). After this keyword (optionally in most cases, as discussed), we specify the variable's name so we can reference it in our code, and if we just want to create an 'empty box' then we can just put our semicolon here to end the variable creation. This process of creating a variable is called the variable declaration. For example the following would create an empty variable called 'VariableOne':\n\n ```1 ``` ``````var VariableOne; ``````\n\nTo assign values to this variable we can write its name, followed by an equals sign, followed by the value we want to set it to (and remember, this can be any data-type you like). So if we wanted to create a variable called 'Greeting' and then set it to the string of text, \"Hello!\", then we might do something like the following:\n\n ```1 2 ``` ``````var Greeting; Greeting = \"Hello!\"; ``````\n\nWhen we first set a variable to a value, this is called the initialization. Also note that we can set the variable to a different value wherever we like by simply using its name, and using the equals operator as we have above. So we could change the value of 'Greeting' again later on in our JavaScript document with something like `Greeting = \"Hiya!\";`. We can also get the value of the variable wherever we want by simply typing its name - so if we wanted to combine this with the alert function that we learnt in previous lessons to output the value of 'Greeting', we can just write the variable name in the `alert` function's parameters. In the following example, we will set the new variable, 'Greeting', to something, output that in an `alert`, then set the variable to something else, and `alert` that:\n\n ```1 2 3 4 5 ``` ``````var Greeting; Greeting = \"Hello!\"; alert(Greeting); Greeting = \"Hi!\"; alert(Greeting); ``````\n\nNote that we could actually combine the first two lines of our JavaScript so that we declare the variable and initialize it at the same time, to create a more condensed and neat piece of JavaScript which looks something like the following:\n\n ```1 2 3 4 ``` ``````var Greeting = \"Hello!\"; alert(Greeting); Greeting = \"Hi!\"; alert(Greeting); ``````\n\nIf you put all of the above in the 'script.js' file of the project folder that we set up previously, you should see that the pop-up boxes output exactly what we expected, \"Hello!\" and then \"Hi!\"!\n\nIf you want to just quickly test this simply script without setting up the whole project - there is an excellent service available called JSFiddle which is super awesome. I've embedded the code from this tutorial in a JSFiddle iframe below (you can run it by clicking the run button in the iframe) for quick testing.\n\n## Mathematical Operations\n\nSo the next part of this tutorial, is all about mathematical operations. Luckily for us, these are extremely simple in JavaScript. There are 4 main, simple mathematical operators:\n\n ```1 2 ``` ``````var myVariable = 5; alert(myVariable + 5); ``````\n ```1 2 ``` ``````var myVariable = 5; var VariableTwo = myVariable * 5; ``````\nI think the usage of mathematical operators is generally pretty straightforward, so I'll leave the rest of finding out to you! Try messing about with operators in different places, and if you're feeling up for a little bit of a challenge, try creating a basic script in which 2 variables contain different lengths of triangle sides, and the third side is calculated using the Pythagoras' Theorem (Hint: As well as using the operators we've just covered, you're going to need to utilise the `sqrt` method of the `Math` object to square root values!)."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.90488476,"math_prob":0.9495489,"size":4978,"snap":"2022-27-2022-33","text_gpt3_token_len":1100,"char_repetition_ratio":0.14636108,"word_repetition_ratio":0.013348165,"special_character_ratio":0.22097228,"punctuation_ratio":0.08615384,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98213816,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T20:13:11Z\",\"WARC-Record-ID\":\"<urn:uuid:274cc932-7d0c-4ad9-814e-e329212e9a71>\",\"Content-Length\":\"17248\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:055720dc-91ad-419d-ad89-5e0fdfb85f53>\",\"WARC-Concurrent-To\":\"<urn:uuid:f1c672e9-8148-46b1-a35f-9f4665298482>\",\"WARC-IP-Address\":\"45.79.8.208\",\"WARC-Target-URI\":\"http://www.dev-hq.net/javascript/3--variables-and-maths\",\"WARC-Payload-Digest\":\"sha1:QC4262SNNYLU53DQNMB4ZL7O6X5H6JLB\",\"WARC-Block-Digest\":\"sha1:6MMQG42I6QSZDYL2XJRZWZ3C55H5JJKD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103877410.46_warc_CC-MAIN-20220630183616-20220630213616-00631.warc.gz\"}"} |
https://math.stackexchange.com/questions/2372624/a-question-about-electrical-energy-in-a-network | [
"# A question about electrical energy in a network\n\nLet $G=(V,E)$ be a network. Each arc $e$ has a resistance $r_e>0$. Let $f_e$ be the current flowing though $e=(u,v)$. By Ohm's law we know that a voltage $\\phi$ is induced on the nodes and $f_e = \\frac{\\phi_u - \\phi_v}{r_e}$. We define the electical energy of the network as $\\mathcal{E}_r(f) = \\sum_e f_e^2 r_e$. Note that $\\phi$ is a mapping from the node set to $\\mathbb{R}$. Given a real number $x$, define $E_x=\\{e=(u,v)\\in E : \\min(\\phi_u,\\phi_v)\\leq x\\leq \\max(\\phi_u,\\phi_v)\\}$. Also $F_x=\\sum_{e\\in E_x}f_e$. Now show that $\\int_{\\mathbb{R}}F(x)dx = \\mathcal{E}_r(f)$. I think this follows from the definition of integration, but can not find the connection. I am asking this question question in relation to max-flow computation using electrical flows.\n\n## 1 Answer\n\nNote that the energy can also be written as $\\sum_e \\Delta \\phi(e)^2/r_e$. (To a physicist, this is the familiar $P=IV=I^2 R=V^2/R$ formula, but you don't need to know anything about electromagnetism to get this.) Now each edge contributes to the integral of $F$ on an interval of length $\\Delta \\phi(e)$, and contributes $f_e=\\Delta \\phi(e)/r_e$ there. Multiplying that gives that the contribution of each edge to the integral is $\\Delta \\phi(e)^2/r_e$; summing over edges gives the desired result.\n\n• Can you be more specific? Are you using the definition of Riemann integration somehow? – Sudipta Roy Jul 26 '17 at 19:10\n• @SudiptaRoy Not really. The point is that $e \\in E_x$ if and only if $x \\in [\\min \\{ \\phi_u,\\phi_v \\},\\max \\{ \\phi_u,\\phi_v \\}]$. This interval has length $\\Delta \\phi(e)$. The contribution of $e$ to $F$ on this interval is $\\Delta \\phi(e)/r_e$. So you multiply the length of the interval by the contribution to $F$ to get the contribution of $e$ to $\\int F$. I'm not sure about sign issues, though (is $f_e$ selected to be positive?) – Ian Jul 26 '17 at 19:21"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.80465347,"math_prob":0.99988806,"size":1900,"snap":"2021-04-2021-17","text_gpt3_token_len":589,"char_repetition_ratio":0.119198315,"word_repetition_ratio":0.0,"special_character_ratio":0.32210526,"punctuation_ratio":0.10485934,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000077,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-20T03:33:03Z\",\"WARC-Record-ID\":\"<urn:uuid:f887c3cd-c9cd-4141-9ad8-88676dbfb71a>\",\"Content-Length\":\"161718\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c9a11aa2-d3fe-4568-a876-078a5ff05396>\",\"WARC-Concurrent-To\":\"<urn:uuid:a10be98c-627d-4cd4-a464-12d5ae4f8004>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2372624/a-question-about-electrical-energy-in-a-network\",\"WARC-Payload-Digest\":\"sha1:OUVZRJX3JUIZVBIK66MOKJRD6WFXFYDP\",\"WARC-Block-Digest\":\"sha1:35BYH6MCF7Q6K2HTQKM7DRMIF5VAXAKJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039375537.73_warc_CC-MAIN-20210420025739-20210420055739-00249.warc.gz\"}"} |
https://www.geeksforgeeks.org/set-list-java/?ref=rp | [
"Related Articles\nSet to List in Java\n• Difficulty Level : Easy\n• Last Updated : 11 Dec, 2018\n\nGiven a set (HashSet or TreeSet) of strings in Java, convert it into a list (ArrayList or LinkedList) of strings.\n\n```Input : Set hash_Set = new HashSet();\nhash_Set.add(\"Geeks\");\nhash_Set.add(\"For\");\nOutput : ArrayList or Linked List with\nfollowing content\n{\"Geeks\", \"for\"}\n```\n\n## Recommended: Please try your approach on {IDE} first, before moving on to the solution.\n\nMethod 1 (Simple)\nWe simply create an list. We traverse the given set and one by one add elements to the list.\n\n `// Java program to demonstrate conversion of``// Set to array using simple traversal``import` `java.util.*;`` ` `class` `Test {`` ``public` `static` `void` `main(String[] args) {`` ` ` ``// Creating a hash set of strings`` ``Set s = ``new` `HashSet();`` ``s.add(``\"Geeks\"``);`` ``s.add(``\"for\"``);`` ` ` ``int` `n = s.size();`` ``List aList = ``new` `ArrayList(n);`` ``for` `(String x : s)`` ``aList.add(x);`` ` ` ``System.out.println(``\"Created ArrayList is\"``);`` ``for` `(String x : aList)`` ``System.out.println(x);`` ` ` ``// We can created LinkedList same way`` ``}``}`\nOutput:\n```Created ArrayList is\nGeeks\nfor\n```\n\nMethod 2 (Using ArrayList or LinkedList Constructor)\n\n `// Java program to demonstrate conversion of``// Set to list using constructor``import` `java.util.*;`` ` `class` `Test {`` ``public` `static` `void` `main(String[] args) {`` ` ` ``// Creating a hash set of strings`` ``Set s = ``new` `HashSet();`` ``s.add(``\"Geeks\"``);`` ``s.add(``\"for\"``);`` ` ` ``// Creating an array list using constructor`` ``List aList = ``new` `ArrayList(s);`` ` ` ``System.out.println(``\"Created ArrayList is\"``);`` ``for` `(String x : aList)`` ``System.out.println(x);`` ` ` ``System.out.println(``\"Created LinkedList is\"``);`` ``List lList = ``new` `LinkedList(s); `` ``for` `(String x : lList)`` ``System.out.println(x); `` ``}``}`\nOutput:\n\n```Created ArrayList is\nGeeks\nfor\nCreated LinkedList is\nGeeks\nfor\n```\n\nMethod 3 (Using addAll method)\n\n `// Java program to demonstrate conversion of``// Set to array using addAll() method.``import` `java.util.*;`` ` `class` `Test {`` ``public` `static` `void` `main(String[] args) {`` ` ` ``// Creating a hash set of strings`` ``Set s = ``new` `HashSet();`` ``s.add(``\"Geeks\"``);`` ``s.add(``\"for\"``);`` ` ` ``List aList = ``new` `ArrayList();`` ``aList.addAll(s);`` ` ` ``System.out.println(``\"Created ArrayList is\"``);`` ``for` `(String x : aList)`` ``System.out.println(x);`` ` ` ``List lList = ``new` `LinkedList();`` ``lList.addAll(s);`` ` ` ``System.out.println(``\"Created LinkedList is\"``);`` ``for` `(String x : lList)`` ``System.out.println(x); `` ``}``}`\nOutput:\n```Created ArrayList is\nGeeks\nfor\nCreated LinkedList is\nGeeks\nfor\n```\n\nMethod 4 (Using stream in Java)\nWe use stream in Java to convert given set to steam, then stream to list. This works only in Java 8 or versions after that.\n\n `// Java program to demonstrate conversion of``// Set to list using stream``import` `java.util.*;``import` `java.util.stream.*;`` ` `class` `Test {`` ``public` `static` `void` `main(String[] args) {`` ` ` ``// Creating a hash set of strings`` ``Set s = ``new` `HashSet();`` ``s.add(``\"Geeks\"``);`` ``s.add(``\"for\"``);`` ` ` ``List aList = s.stream().collect(Collectors.toList());`` ` ` ``for` `(String x : aList)`` ``System.out.println(x);`` ``}``}`\nOutput:\n```Geeks\nfor\n```\n\nAttention reader! Don’t stop learning now. Get hold of all the important Java Foundation and Collections concepts with the Fundamentals of Java and Java Collections Course at a student-friendly price and become industry ready. To complete your preparation from learning a language to DS Algo and many more, please refer Complete Interview Preparation Course.\n\nMy Personal Notes arrow_drop_up"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.511208,"math_prob":0.58710086,"size":3204,"snap":"2021-04-2021-17","text_gpt3_token_len":789,"char_repetition_ratio":0.1490625,"word_repetition_ratio":0.41004184,"special_character_ratio":0.26841447,"punctuation_ratio":0.184765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9727873,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-21T08:58:56Z\",\"WARC-Record-ID\":\"<urn:uuid:05616957-884a-4151-9e74-6452bdfc7bb1>\",\"Content-Length\":\"102260\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e207d9cc-4224-4b21-8825-0f6d93ac536e>\",\"WARC-Concurrent-To\":\"<urn:uuid:f6a0efc7-1ea8-47f5-8da4-8e9dbc9a5937>\",\"WARC-IP-Address\":\"23.62.230.73\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/set-list-java/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:ZWNFV4V5NCPXZSJXI2PSHFWO4CE3V7QE\",\"WARC-Block-Digest\":\"sha1:U24TUDPNPXKAYZMVJTDSFMPVCPRGGCGV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039526421.82_warc_CC-MAIN-20210421065303-20210421095303-00243.warc.gz\"}"} |
https://collaborate.princeton.edu/en/publications/an-application-of-filtering-theory-to-parameter-identification-us | [
"# An application of filtering theory to parameter identification using stochastic mechanics\n\nJ. G.B. Beumee, Herschel Albert Rabitz\n\nResearch output: Contribution to journalArticle\n\n2 Scopus citations\n\n### Abstract\n\nAn estimation method for unknown parameters in the initial conditions and the potential of a quantal system using the stochastic interpretation of quantum mechanics and some results in system theory are presented. According to this interpretation the possible trajectories of a particle through coordinate space may be represented by the realization of a stochastic process that satisfies a stochastic differential equation. The drift term in this equation is derived from the wave function and consequently contains all unknown parameters in the initial conditions and the potential. The main assumption of the paper is that a continuous sequence of position measurements on the trajectory of the particle can be identified with a realization of this stochastic process over the corresponding period of time. An application of the stochastic filtering theorems subsequently provides a minimum variance estimate of the unknown parameters in the drift conditional on this continuous sequence of measurements. As simple illustrations, this method is used to obtain estimates for the initial momentum of a free particle given measurements on its trajectory and to construct an estimator for the unknown parameters in a harmonic potential. It is shown that an optimal estimator exists if the stochastic processes are associated with a wave function from a potential of the Rellich type. In addition the a posteriori probability density of the parameters in the quantal system is calculated, assuming that all parameters involved prescribe a Rellich potential.\n\nOriginal language English (US) 1787-1794 8 Journal of Mathematical Physics 28 8 https://doi.org/10.1063/1.527820 Published - Jan 1 1987\n\n### All Science Journal Classification (ASJC) codes\n\n• Statistical and Nonlinear Physics\n• Mathematical Physics"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8225528,"math_prob":0.7553264,"size":1860,"snap":"2020-34-2020-40","text_gpt3_token_len":334,"char_repetition_ratio":0.12823276,"word_repetition_ratio":0.02919708,"special_character_ratio":0.17741935,"punctuation_ratio":0.034843206,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9823664,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T14:39:49Z\",\"WARC-Record-ID\":\"<urn:uuid:086a34f8-3dd9-4d58-82d4-1abb59b5d8de>\",\"Content-Length\":\"47644\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cbef2b30-e5b1-48bb-ab5a-71e81209ffc6>\",\"WARC-Concurrent-To\":\"<urn:uuid:c93b28ba-9068-4cab-8781-c2357f46e84b>\",\"WARC-IP-Address\":\"3.90.122.189\",\"WARC-Target-URI\":\"https://collaborate.princeton.edu/en/publications/an-application-of-filtering-theory-to-parameter-identification-us\",\"WARC-Payload-Digest\":\"sha1:FTAOJVCM3ZIQZIX324EYPYVCFQY3XW7W\",\"WARC-Block-Digest\":\"sha1:G5SMNQEIFSVZNSCOUZ2AWMANUURDASJS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400279782.77_warc_CC-MAIN-20200927121105-20200927151105-00376.warc.gz\"}"} |
https://support.systemweaver.se/en/support/solutions/articles/31000141814-managing-test-execution-time-in-a-test-model | [
"If there is a need to manage test execution time, using an Integer attribute and a Computed attribute could be one solution.\n\nIn our example, the execution time is the time required for performing an individual Test Case and the set-up time required for the complete Test.\n\nAn 'Execution time' [5TCE] integer attribute is set as a default attribute on Test Case and Test with a scale of minutes. The value is stored on Test rather than Test Specification (which would make it easier to reuse the value, along with the Test Specification) because the set-up time depends on the test environment which is under the Test level. Using the same argument, the 'Execution time' ought to be a node/part attribute of the Test Case, under the Test (or maybe Test Specification) but this would make the solution a bit more difficult to manage, since such attributes would require editing using a special grid view.\n\nThe total of 'Execution time' is calculated by a Computed attribute 'Total execution time' [5TTE], which summarizes the time on Test Suite level (including all sub Test suits) and Tests/Test Cases:\n\n`/ISSS*/ISES*/ISSP*/ITEC[@5TCE > 0].Select(@5TCE).Sum + /ISSS*/ISES*[@5TCE > 0].Select(@5TCE).Sum `\n\nNote: The condition [@5TCE > 0] is required since the sum would otherwise potentially be calculated over 'nil' values for those attributes that have no value, which would result in a 'nil' sum. If the attribute is introduced before the first Test Case is created, this condition is not needed as the attribute would have a default value of 0 min.\n\nAn alternative method is to present the Total execution time in the h:m format:\n\n`minutes := /ISSS*/ISES*/ISSP*/ITEC[@5TCE > 0].Select(@5TCE).Sum + /ISSS*/ISES*[@5TCE > 0].Select(@5TCE).Sum; hours := minutes.Div( 60); minute:= minutes - (hours * 60); hours.ToString + \":\" + minute.ToString `\n\nThis method, however, displays 1 hour + 3 minutes as: \"1:3\".\n\nThe following method displays the time as \"1:03\":\n\n`minutes := /ISSS*/ISES*/ISSP*/ITEC[@5TCE > 0].Select(@5TCE).Sum + /ISSS*/ISES*[@5TCE > 0].Select(@5TCE).Sum; hours := minutes.Div( 60); minute:= minutes - (hours * 60); textminute := if minute < 10 then \"0\" + minute.ToString else minute.ToString; hours.ToString + \":\" + textminute `\n\n# Result",
null,
""
]
| [
null,
"https://s3.amazonaws.com/cdn.freshdesk.com/data/helpdesk/attachments/production/31004638902/original/E6OSq6tkls7TjpDbzCn4PiWioKiH-AQBsw",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.82675153,"math_prob":0.89663726,"size":2223,"snap":"2023-40-2023-50","text_gpt3_token_len":577,"char_repetition_ratio":0.15141957,"word_repetition_ratio":0.093294464,"special_character_ratio":0.26675662,"punctuation_ratio":0.1438515,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9785981,"pos_list":[0,1,2],"im_url_duplicate_count":[null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T14:13:21Z\",\"WARC-Record-ID\":\"<urn:uuid:43912a93-8e30-499d-a7f7-77a53c8672dd>\",\"Content-Length\":\"36961\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2570694e-7673-4239-bc3e-6fe8422ca396>\",\"WARC-Concurrent-To\":\"<urn:uuid:5f698e40-93e0-4d6b-bec7-e520161b2737>\",\"WARC-IP-Address\":\"34.232.242.102\",\"WARC-Target-URI\":\"https://support.systemweaver.se/en/support/solutions/articles/31000141814-managing-test-execution-time-in-a-test-model\",\"WARC-Payload-Digest\":\"sha1:AFX65YLMAJOTTK6WGQXED2TJBP4GKG2O\",\"WARC-Block-Digest\":\"sha1:JDKPXTMSEGKFAPZZSMRFRM4FINP2KVJF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100912.91_warc_CC-MAIN-20231209134916-20231209164916-00537.warc.gz\"}"} |
https://favtutor.com/blogs/machine-learning-algorithms-for-beginners | [
"# Machine Learning Algorithms for Beginners [Guide]\n\nAs a human being can recognize faces and detect images using cognitive skills, with technological advancement, it is now even possible for machines to perform activities that a human can do and even more!\n\nFrom the beginning of their lives, humans collect data and analyze it to find patterns, our brains are trained in this way to have cognitive skills and interpret data. Likewise, computers can be trained to find a pattern in the data and make appropriate predictions, this is called machine learning. Based on what kind of predictions the models make, there are different machine learning algorithms.\n\nIn this guide, we are going to discuss various types of machine learning algorithms for beginners. There are two types of datasets on which algorithms can be trained, one which has prediction or labeled data and another which has only raw data with no actual prediction values to train your model on. The former is a category of supervised learning where your models train on known predictions whereas the latter is unsupervised learning, where the model trains on undetected and unlabeled data. Let’s discuss these algorithms in detail!\n\n## Supervised learning\n\nSupervised learning is a category of machine learning algorithms where you have input factors (x) and an output variable (Y) and you utilize an algorithm to create a mapping from the input to the output variable.\n\nY = f(X)\n\nThe objective is to create an efficient and well-defined function that can create predictions as the output on inputting unseen data.\n\nSupervised learning can be further structured into regression and classification problems.\n\n1. Regression: It is an algorithm that predicts a continuous real value. Eg. Predicting gold prices. There are many different types of regression algorithms. The three most common are listed below:\n• Linear Regression\n• Polynomial Regression\n2. Classification: It is an algorithm that predicts class. Eg. Predicting if the patient is suffering from heart disease or not. Classification problems can be solved with a numerous number of algorithms. Suitable algorithms can be chosen depending upon the data and the structure of the data. Here are a few popular classification algorithms:\n• Logistic Regression\n• K-Nearest Neighbor\n• Support Vector Machines\n• Naive Bayes\n3. Common supervised learning algorithms, which can be used for regression and classification algorithms problems:\n• Decision Trees\n• Random Forest\n\n## Regression Algorithms\n\nRegression problems are unique, as they anticipate that the model should output a real continuous value. For example, predicting stock prices, home loan prices, classifying if the website can be hacked, etc.\n\n### 1) Linear Regression\n\nIn a simple linear regression algorithm, we create predictions from a single variable. The output attribute is known as the target variable and is alluded to as Y. The input parameter is known as the predictor variable and is alluded to as X. When we consider only one input parameter, the algorithm is known as simple linear regression.\n\n“Linear Regression fits a linear model with coefficients w = (w1, …, wp) to minimize the residual sum of squares between the observed targets in the dataset, and the targets predicted by the linear approximation.”\n\nAfter creating the test and training data, train the model using scikit learn library.\n\n```# Fitting Simple Linear Regression to the Training set\nfrom sklearn.linear_model import LinearRegression\nregressor = LinearRegression()\nregressor.fit(X_train, y_train)\n\n# Predicting the Test set results\ny_pred = regressor.predict(X_test)\n```\n\nThe sample plot for the training data following simple linear regression is:",
null,
"### 2) Polynomial Regression\n\nIn polynomial regression, the input and output variables are mapped in the nth degree of the polynomial. Polynomial Regression doesn't need the connection between the input and output variables to be linear according to the data, this is the basic difference between Linear and Polynomial Regression.\n\nThe code below illustrates how you can train a polynomial regression model using python:\n\n```# Fitting Linear Regression to the dataset\nfrom sklearn.linear_model import LinearRegression\nlin_reg = LinearRegression()\nlin_reg.fit(X, y)\n\n# Fitting Polynomial Regression to the dataset\nfrom sklearn.preprocessing import PolynomialFeatures\npoly_reg = PolynomialFeatures(degree = 4)\nX_poly = poly_reg.fit_transform(X)\npoly_reg.fit(X_poly, y)\nlin_reg_2 = LinearRegression()\nlin_reg_2.fit(X_poly, y)\n\n# Predicting a new result with Linear Regression\nlin_reg.predict([[6.5]])\n```\n\nPlots for linear regression and polynomial regression:",
null,
"",
null,
"## Classification Algorithms\n\n### 3) Logistic regression\n\nLogistic regression algorithms are used for classification problems. We use a logistic function to generate predictions that is why it is known as Logistic regression.\n\nAnother name for the logistic function is the sigmoid function, sigmoid function or activation function is used to convert the output into categorical discrete value. It’s an S-shaped curve that inputs a real-valued number and maps it into a value between 0 and 1. The logistic regression equation is:\n\n1/(1 + e-value)",
null,
"After creating and scaling the training and test dataset we can fit our training data to the model and create model predictions on the test data. And create a confusion matrix.\n\n```# Feature Scaling\nfrom sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)\n\n# Fitting Logistic Regression to the Training set\nfrom sklearn.linear_model import LogisticRegression\nclassifier = LogisticRegression(random_state = 0)\nclassifier.fit(X_train, y_train)\n\n# Predicting the Test set results\ny_pred = classifier.predict(X_test)\n\n# Making the Confusion Matrix\nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_pred)\n```\n\nThe decision boundary and scatter plot for the training data predicting if the user will purchase the commodity based on the data from social media advertisement, looks like this:",
null,
"The model can overfit if the input parameters are highly correlated like linear regression, to cure this we can map pairwise correlations between inputs by removing the highly correlated inputs.\n\n### 4) K-Nearest Neighbor algorithm\n\nKNN is used for both regression and classification problems. However, it is vastly used for classification problems. K-nearest neighbors (KNN) algorithm uses ‘feature similarity’ to create the prediction values of new data points. This implies that new data points are assigned new values based on points in the training set. The working of the algorithm-\n\n• Step 1: Creating training and test data.\n• Step 2: We choose the value of K i.e. the nearest data points. K can be any integer. N− For each point in the test data do the following:\n• 2.1: Calculate the distance between test data and each value of training data using any of the methods namely: Euclidean, Manhattan, or Hamming distance. Euclidean distance is the most common method.\n• 2.2: Sort the distance in ascending order.\n• 2.3: The algorithm chooses the top K rows from the sorted array.\n• 2.4: It will assign a class to the test point based on the most frequent class of these rows.\n\nThe training data is fit into the KNN model and predictions are created using test data and create confusion matrix:\n\n```# Feature Scaling\nfrom sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)\n\n# Fitting K-NN to the Training set\nfrom sklearn.neighbors import KNeighborsClassifier\nclassifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)\nclassifier.fit(X_train, y_train)\n\n# Predicting the Test set results\ny_pred = classifier.predict(X_test)\n\n# Making the Confusion Matrix\nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_pred)\n```\n\nThe plot of the training data and labels with decision boundary according to the KNN classification algorithm:",
null,
"### 5) Support Vector machines - Kernel SVM\n\nThe aim of support vector machine algorithms is to find a hyperplane in an N-dimensional space where N is the number of features that are used to distinctly classify data.\n\nThere are many different methods or hyperplanes to separate the two classes of data points. Our aim is to find a hyperplane with a maximum distance between data points of both classes. Maximizing the margin distance enhances the efficiency of the model and predicts with more confidence.\n\nThe code to scale and fit the training data is:\n\n```# Feature Scaling\nfrom sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)\n\n# Fitting Kernel SVM to the Training set\nfrom sklearn.svm import SVC\nclassifier = SVC(kernel = 'rbf', random_state = 0)\nclassifier.fit(X_train, y_train)\n\n# Predicting the Test set results\ny_pred = classifier.predict(X_test)\n\n# Making the Confusion Matrix\nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_pred)\n```\n\nThe plot of the training data and labels with decision boundary according to the SVM classification algorithm:",
null,
"### 6) Naïve Bayes Algorithm\n\nNaïve Bayes Classifier is one of the straightforward and best Classification algorithms which helps in building the fast machine learning models which will make quick predictions.\n\nIt is a probabilistic classifier, which suggests it predicts the idea of the probability of an object. Some popular examples of the Naïve Bayes Algorithm are spam filters, sentimental analysis, and classifying articles.\n\nThe algorithm follows the following equation:\n\nP(h|d) = (P(d|h) * P(h)) / P(d)\n\nAnd this is how we fit the data to the naive Bayes model:\n\n```# Fitting Naive Bayes to the Training set\nfrom sklearn.naive_bayes import GaussianNB\nclassifier = GaussianNB()\nclassifier.fit(X_train, y_train)\n\n# Predicting the Test set results\ny_pred = classifier.predict(X_test)\n\n# Making the Confusion Matrix\nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_pred)\n```\n\nNaive Bayes is often extended to real-valued attributes, most ordinarily by assuming a normal distribution.\n\nThis extension of naive Bayes is named Gaussian Naive Bayes. The Gaussian (or Normal distribution) is the easiest method because we only need to estimate the mean and therefore the variance from our training data.\n\nThe plot of decision boundary for a gaussian naive Bayes algorithm on training data:",
null,
"## Common Supervised learning Algorithms\n\n### 7) Decision Tree Algorithm\n\nThe decision tree as the name suggests works on the principle of conditions. It is efficient and has strong algorithms used for predictive analysis. It has mainly attributed that include internal nodes, branches, and a terminal node.\n\nEvery internal node holds a “test” on an attribute, branches hold the conclusion of the test and every leaf node means the class label. It is used for both classifications as well as regression which are both supervised learning algorithms. Decisions trees are extremely delicate to the information they are prepared on — little changes to the preparation set can bring about fundamentally different tree structures.\n\nTrees answer consecutive roles that send us down a specific use of the tree given we have the answer. The model acts with \"if this then that\" conditions, at last, yielding a particular outcome. The code to fit the training data to the decision tree classification model:\n\n```# Feature Scaling\nfrom sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)\n\n# Fitting Decision Tree Classification to the Training set\nfrom sklearn.tree import DecisionTreeClassifier\nclassifier = DecisionTreeClassifier(criterion = 'entropy', random_state = 0)\nclassifier.fit(X_train, y_train)\n\n# Predicting the Test set results\ny_pred = classifier.predict(X_test)\n\n# Making the Confusion Matrix\nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_pred)\n```\n\nThe plot showing the decision boundary for the decision tree classification algorithm for the training data.",
null,
"### 8) Random Forest Algorithm\n\nRandom forest, as its name suggests, comprises an enormous amount of individual decision trees that work as a group or as they say, an ensemble. Every individual decision tree in the random forest lets out a class prediction and the class with the most votes is considered as the model's prediction.\n\nRandom forest uses this by permitting every individual tree to randomly sample from the dataset with replacement, bringing about various trees. This is known as bagging. You can refer to a detailed tutorial on Random forest classifier by making a project on credit card fraud detection using machine learning.\n\nFitting the training data:\n\n```# Feature Scaling\nfrom sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)\n\n# Fitting Random Forest Classification to the Training set\nfrom sklearn.ensemble import RandomForestClassifier\nclassifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0)\nclassifier.fit(X_train, y_train)\n\n# Predicting the Test set results\ny_pred = classifier.predict(X_test)\n\n# Making the Confusion Matrix\nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_pred)\n```\n\nThe plot for the training data for the random forest classification. You can notice that the plot somewhat looks like the plot for the decision tree but with better accuracy of the decision boundary.",
null,
"## Unsupervised Learning\n\nAn unsupervised learning algorithm is training a model on data that is neither classified nor labeled and allowing the algorithm to find patterns in the data without guidance. The algorithm groups the unsorted information according to patterns without any prior training of the model on any data.\n\nUnlike supervised learning, no labels are provided in the data that means no training is done for the model. Therefore models are restricted to find the hidden structure in unlabeled data by our-self.\n\nIn this guide, we’ll discuss the two most prominent unsupervised learning algorithms, namely K-mean clustering and Principal component analysis.\n\n### 9) K-means clustering Algorithm\n\nK-Means Clustering is an Unsupervised Learning algorithm, which groups the unlabeled dataset into different clusters. Here K is the number of predefined clusters that are needed to train the model, as if K=2, there will be two clusters, and for K=3, there will be three clusters, and so on.\n\n“It is an iterative algorithm that divides the unlabeled dataset into k different clusters in such a way that each dataset belongs to only one group that has similar properties.”\n\nTo choose the optimal number of clusters for the model, we use the elbow method:\n\n1. It trains the K-means clustering model on the given dataset with different K values (ranges from 1-10).\n2. For each value of K, calculate the WCSS value.\n3. The plot between calculated WCSS values and the number of clusters K.\n4. The steep bend or a point of the plot looks like an arm, that point is considered as the best value of K.\n\nThe WCSS curve looks like this:",
null,
"From the curve, we deduce that the most suitable number of clusters (K) is 5. Hence the code to fit an unlabeled data by finding the number of clusters using the elbow method to a K-mean clustering model is:\n\n```# Using the elbow method to find the optimal number of clusters\nfrom sklearn.cluster import KMeans\nwcss = []\nfor i in range(1, 11):\nkmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 42)\nkmeans.fit(X)\nwcss.append(kmeans.inertia_)\nplt.plot(range(1, 11), wcss)\nplt.title('The Elbow Method')\nplt.xlabel('Number of clusters')\nplt.ylabel('WCSS')\nplt.show()\n\n# Fitting K-Means to the dataset\nkmeans = KMeans(n_clusters = 5, init = 'k-means++', random_state = 42)\ny_kmeans = kmeans.fit_predict(X)\n```\n\nThe number of clusters and the data segmentation and centroid of the clusters created by the model is:",
null,
"### 10) Principal Component Analysis\n\nThe main purpose of PCA is to decrease the complexity of the model. PCA simplifies the model and improves model performance. In cases of the datasets which have a lot of features we just extract much fewer independent variables that explain the variance the most.\n\nThe principal component analysis is used to extract linear composites of the observed variables. Factor analysis is basically a formal model predicting observed variables from theoretical latent factors from the dataset. We use PCA to maximize the total variance to find distinguishable patterns, and Factor analysis to maximize the shared variance for latent constructs or variables.\n\n## Reinforcement Learning\n\nReinforcement learning is used to make a sequence of decisions. The model learns to achieve a goal for an uncertain, potentially complex dataset. The concept of reinforcement learning is very similar to a game. The model uses a trial and error method to solve the problem. To achieve the goal the model gets either rewards or penalties for the actions it performs. Its primary goal of the model is to maximize the total reward.\n\nThere are two types of Reinforcement:\n\n### Positive Reinforcement\n\nIt is when an event occurs due to a particular behavior and resulting in an increase in the strength and the frequency of the behavior. Consequently, it has a positive effect on the behavior of the model.\n\n• Maximizes Performance\n• Sustain Change for a long period of time\n\n• Too much Reinforcement can lead to an overload of states which can diminish the results.\n\n### Negative Reinforcement\n\nIt is defined as the strengthening of behavior because a negative condition is stopped or avoided.\n\n• Increases Behavior\n• Provide defiance to a minimum standard of performance\n\n• It facilitates enough to meet up the minimum behavior\n\n## Conclusion\n\nNow that you know all about machine learning algorithms, you can start working with machine learning projects to apply your knowledge to real-world problems.\n\nMachine learning is all about handling and processing the data and speculating the best machine learning algorithm to train your model to get optimal results. Python libraries like scikit-learn make it pretty easy to train your data without working out the actual mathematics behind the machine learning algorithm but understanding the algorithm to its core is what makes you a good data scientist.\n\nWe have covered 10 of the most prominent machine learning algorithms for beginners in this tutorial. Hope this article helps you create a clear understanding of the buzzword nowadays. That’s right Machine Learning.\n\nHappy Learning :)\n\n### FavTutor - 24x7 Live Coding Help from Expert Tutors!",
null,
""
]
| [
null,
"https://favtutor.com/resources/images/uploads/mceu_98910096311608112565080.png",
null,
"https://favtutor.com/resources/images/uploads/mceu_58019826121608112854682.png",
null,
"https://favtutor.com/resources/images/uploads/mceu_31138775731608112950920.png",
null,
"https://favtutor.com/resources/images/uploads/mceu_75735761041608113265265.png",
null,
"https://favtutor.com/resources/images/uploads/mceu_48646141951608113425324.png",
null,
"https://favtutor.com/resources/images/uploads/mceu_83842499671608113668883.png",
null,
"https://favtutor.com/resources/images/uploads/mceu_34916476381608113822065.png",
null,
"https://favtutor.com/resources/images/uploads/mceu_91987944291608113996880.png",
null,
"https://favtutor.com/resources/images/uploads/mceu_779345047101608114169831.png",
null,
"https://favtutor.com/resources/images/uploads/mceu_918675748111608114295034.png",
null,
"https://favtutor.com/resources/images/uploads/mceu_218915888121608114495166.png",
null,
"https://favtutor.com/resources/images/uploads/mceu_710559578131608114649954.png",
null,
"https://favtutor.com/resources/images/uploads/apurva_sharma_writer.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.85006326,"math_prob":0.9610382,"size":18668,"snap":"2023-40-2023-50","text_gpt3_token_len":3698,"char_repetition_ratio":0.1577904,"word_repetition_ratio":0.10117146,"special_character_ratio":0.19562888,"punctuation_ratio":0.096190475,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986844,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T21:43:20Z\",\"WARC-Record-ID\":\"<urn:uuid:0c7bc0ba-9b5a-45dc-9c87-7535b7446248>\",\"Content-Length\":\"73617\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:23119a15-b3e7-4fd4-8cef-f9857812acd3>\",\"WARC-Concurrent-To\":\"<urn:uuid:72a4c739-17c0-4c27-93a7-418e60067905>\",\"WARC-IP-Address\":\"172.67.190.139\",\"WARC-Target-URI\":\"https://favtutor.com/blogs/machine-learning-algorithms-for-beginners\",\"WARC-Payload-Digest\":\"sha1:6A4MVEJKARBPTOYNQKGUDUNRI2LS6HCE\",\"WARC-Block-Digest\":\"sha1:YN6J3IYO66NBZ7J46QXSLZAZZ37ILXFE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506423.70_warc_CC-MAIN-20230922202444-20230922232444-00322.warc.gz\"}"} |
https://symbolaris.com/orbital/Orbital-doc/api/orbital/math/UnivariatePolynomial.html | [
"Orbital library\n\norbital.math Interface UnivariatePolynomial\n\nAll Superinterfaces:\nArithmetic, Euclidean, Function, Functor, MathFunctor, Normed, Polynomial\n\npublic interface UnivariatePolynomial\nextends Euclidean, Polynomial, Function\n\n(Univariate) polynomial p∈R[X].\n\nLet R be a commutative ring with 1. The polynomial ring over R in one variable X is\n\nR[X] := {∑i∈N aiXi = (ai)i∈N ¦ ai=0 p.t. i∈N ∧ ∀i∈N ai∈R}\nwith the convolution as multiplication. It is an associative, graded R-algebra, and as commutative or unital as R. R[X] inherits the properties of being an integrity domain, factorial (a unique factorization domain), Noetherian from R. Additionally, if R is an integrity domain, then R[X]× = R×.\n\nThe polynomial ring over a field in one variable even is Euclidean.\n\nAuthor:\nAndré Platzer\nPolynomial, ValueFactory.polynomial(Arithmetic[]), ValueFactory.polynomial(Object), ValueFactory.asPolynomial(Vector), NumericalAlgorithms.polynomialInterpolation(Matrix)\n\nNested Class Summary\n\nNested classes/interfaces inherited from interface orbital.math.functional.Function\nFunction.Composite\n\nNested classes/interfaces inherited from interface orbital.logic.functor.Functor\nFunctor.Specification\n\nNested classes/interfaces inherited from interface orbital.logic.functor.Functor\nFunctor.Specification\n\nField Summary\n\nFields inherited from interface orbital.logic.functor.Function\ncallTypeDeclaration\n\nMethod Summary\n\njava.lang.Object apply(java.lang.Object a)\nEvaluate this polynomial at a.\nInteger degree()\nGet the degree of this polynomial.\nArithmetic get(int i)\nGet the coefficient of Xi.\nArithmetic[] getCoefficients()\nReturns an array containing all the coefficients of this polynomial.\nVector getCoefficientVector()\nReturns a vector view of all the coefficients of this polynomial.\njava.util.ListIterator iterator()\nReturns an iterator over all coefficients (up to degree).\nUnivariatePolynomial modulo(UnivariatePolynomial g)\n\nUnivariatePolynomial multiply(UnivariatePolynomial b)\n\nUnivariatePolynomial quotient(UnivariatePolynomial g)\n\nUnivariatePolynomial subtract(UnivariatePolynomial b)\n\nMethods inherited from interface orbital.math.Euclidean\nmodulo, quotient\n\nMethods inherited from interface orbital.math.Polynomial\nadd, degrees, degreeValue, get, indexSet, indices, monomials, multiply, rank, subtract\n\nMethods inherited from interface orbital.math.functional.Function\nderive, integrate\n\nMethods inherited from interface orbital.logic.functor.Functor\nequals, hashCode, toString\n\nMethods inherited from interface orbital.logic.functor.Functor\nequals, hashCode, toString\n\nMethod Detail\n\ndegree\n\nInteger degree()\nGet the degree of this polynomial.\n\nThis is the Euclidean degree function δ and also the graduation function for polynomials. 0 is an element of undefined or all or none degrees. So for 0 we should return null (or Integer.MIN_VALUE, but this is not recommended).\n\nSpecified by:\ndegree in interface Euclidean\nSpecified by:\ndegree in interface Polynomial\nReturns:\ndeg(this) = max {i∈N ¦ ai≠0}\n\nget\n\nArithmetic get(int i)\nGet the coefficient of Xi. Convenience method for Polynomial.get(Arithmetic).\n\nReturns:\nai if i≤deg(this), or 0 if i>deg(this).\n\niterator\n\njava.util.ListIterator iterator()\nReturns an iterator over all coefficients (up to degree).\n\nSpecified by:\niterator in interface Polynomial\nPostconditions:\nalways (RES.succeedes(#next()))\n\napply\n\njava.lang.Object apply(java.lang.Object a)\nEvaluate this polynomial at a. Using the \"Einsetzungshomomorphismus\".\n\nSpecified by:\napply in interface Function\nSpecified by:\napply in interface Polynomial\nParameters:\na - generic Object as argument\nReturns:\nf(a) = f(X)|X=a = (f(X) mod (X-a))\n\nsubtract\n\nUnivariatePolynomial subtract(UnivariatePolynomial b)\n\nmultiply\n\nUnivariatePolynomial multiply(UnivariatePolynomial b)\n\nquotient\n\nUnivariatePolynomial quotient(UnivariatePolynomial g)\n\nmodulo\n\nUnivariatePolynomial modulo(UnivariatePolynomial g)\n\ngetCoefficients\n\nArithmetic[] getCoefficients()\nReturns an array containing all the coefficients of this polynomial.\n\nReturns:\na new array containing all our coefficients.\nObject.clone()\nPostconditions:\nRES[i]==get(i) ∧ RES.length==degree()+1 ∧ RES!=RES\n\ngetCoefficientVector\n\nVector getCoefficientVector()\nReturns a vector view of all the coefficients of this polynomial.\n\nPostconditions:\nRES[i]==get(i) ∧ RES.length==degree()+1\n\nOrbital library\n1.3.0: 11 Apr 2009"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5881129,"math_prob":0.7894298,"size":3337,"snap":"2022-05-2022-21","text_gpt3_token_len":750,"char_repetition_ratio":0.21542154,"word_repetition_ratio":0.17368421,"special_character_ratio":0.18849266,"punctuation_ratio":0.17706238,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99944013,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-26T05:40:14Z\",\"WARC-Record-ID\":\"<urn:uuid:758b9eeb-b601-40e5-9055-7ed9566509db>\",\"Content-Length\":\"30218\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0bfc0726-2f1a-4bc8-b1db-c7cd2e81b2c2>\",\"WARC-Concurrent-To\":\"<urn:uuid:e651133a-d57c-41d7-adcd-21afe0340139>\",\"WARC-IP-Address\":\"5.175.14.99\",\"WARC-Target-URI\":\"https://symbolaris.com/orbital/Orbital-doc/api/orbital/math/UnivariatePolynomial.html\",\"WARC-Payload-Digest\":\"sha1:V73WV4WRTJHIHL4EVAAW7KOBQV33HMIH\",\"WARC-Block-Digest\":\"sha1:4GSCZPTRTOWOQMQLVS6P4WX6YXAOTUON\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304915.53_warc_CC-MAIN-20220126041016-20220126071016-00479.warc.gz\"}"} |
https://www.stat.math.ethz.ch/pipermail/r-devel/2011-September/062105.html | [
"# [Rd] array extraction\n\nrobin hankin hankin.robin at gmail.com\nWed Sep 28 00:12:26 CEST 2011\n\n```thank you Simon.\n\nI find a[M] working to be unexpected, but consistent with (a close\n\nCan we reproduce a[,M]?\n\n[I would expect this to extract a[,j,k] where M[j,k] is TRUE]\n\ntry this:\n\n> a <- array(1:30,c(3,5,2))\n> M <- matrix(1:10,5,2) %% 3==1\n> a[M]\n 1 4 7 10 11 14 17 20 21 24 27 30\n\nThis is not doing what I would want a[,M] to do.\n\nI'll checkout afill() right now....\n\nbest wishes\n\nRobin\n\nOn Wed, Sep 28, 2011 at 10:39 AM, Simon Knapp <sleepingwell at gmail.com> wrote:\n> a[M] gives the same as your `cobbled together' code.\n>\n> On Wed, Sep 28, 2011 at 6:35 AM, robin hankin <hankin.robin at gmail.com>\n> wrote:\n>>\n>> hello everyone.\n>>\n>> Look at the following R idiom:\n>>\n>> a <- array(1:30,c(3,5,2))\n>> M <- (matrix(1:15,c(3,5)) %% 4) < 2\n>> a[M,] <- 0\n>>\n>> Now, I think that \"a[M,]\" has an unambiguous meaning (to a human).\n>> However, the last line doesn't work as desired, but I expected it\n>> to...and it recently took me an indecent amount of time to debug an\n>> analogous case. Just to be explicit, I would expect a[M,] to extract\n>> a[i,j,] where M[i,j] is TRUE. (Extract.Rd is perfectly clear here, and R\n>> is\n>> behaving as documented).\n>>\n>> The best I could cobble together was the following:\n>>\n>> ind <- which(M,arr.ind=TRUE)\n>> n <- 3\n>> ind <-\n>> cbind(kronecker(ind,rep(1,dim(a)[n])),rep(seq_len(dim(a)[n]),nrow(ind)))\n>> a[ind] <- 0\n>>\n>>\n>> but the intent is hardly clear, certainly compared to \"a[M,]\"\n>>\n>> I've been pondering how to implement such indexing, and its\n>> generalization.\n>>\n>> Suppose 'a' is a seven-dimensional array, and M1 a matrix and M2 a\n>> three-dimensional array (both Boolean). Then \"a[,M1,,M2]\" is a\n>> natural generalization of the above. I would want a[,M1,,M2] to\n>> extract a[i1,i2,i3,i4,i5,i6,i7] where M1[i2,i3] and M[i5,i6,i7] are\n>> TRUE.\n>>\n>> One would need all(dim(a)[2:3] == dim(M1)) and all(dim(a)[5:7] ==\n>> dim(M2)) for consistency.\n>>\n>> Can any R-devel subscribers advise?\n>>\n>>\n>>\n>>\n>> --\n>> Robin Hankin\n>> Uncertainty Analyst\n>> hankin.robin at gmail.com\n>>\n>> ______________________________________________\n>> R-devel at r-project.org mailing list\n>> https://stat.ethz.ch/mailman/listinfo/r-devel\n>\n>\n\n--\nRobin Hankin\nUncertainty Analyst\nhankin.robin at gmail.com\n\n```"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.866812,"math_prob":0.6502647,"size":2394,"snap":"2023-14-2023-23","text_gpt3_token_len":789,"char_repetition_ratio":0.116736405,"word_repetition_ratio":0.010025063,"special_character_ratio":0.39807853,"punctuation_ratio":0.20036429,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95094883,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-29T09:40:40Z\",\"WARC-Record-ID\":\"<urn:uuid:03a45087-a11a-401d-a8a5-d8925a80fce5>\",\"Content-Length\":\"6174\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bd5d4c80-d872-450a-9436-02b044142189>\",\"WARC-Concurrent-To\":\"<urn:uuid:21887038-247f-498c-94ea-72042a183a61>\",\"WARC-IP-Address\":\"129.132.119.195\",\"WARC-Target-URI\":\"https://www.stat.math.ethz.ch/pipermail/r-devel/2011-September/062105.html\",\"WARC-Payload-Digest\":\"sha1:RG42PLMBQDC2P5CKVQLYNTVH7YG56T5F\",\"WARC-Block-Digest\":\"sha1:HOC554R2RADOKGE5QEFJKFKMPNNETMPR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948965.80_warc_CC-MAIN-20230329085436-20230329115436-00340.warc.gz\"}"} |
https://www.programiz.com/javascript/library/array/entries | [
"",
null,
"# Javascript Array entries()\n\nIn this tutorial, you will learn about the JavaScript Array entries() method with the help of examples.\n\nThe `entries()` method returns a new Array Iterator object containing key/value pairs for each array index.\n\n### Example\n\n``````// defining an array named alphabets\nconst alphabets = [\"A\", \"B\", \"C\"];\n\n// array iterator object that contains\n// key-value pairs for each index in the array\nlet iterator = alphabets.entries();\n\n// iterating through key-value pairs in the array\nfor (let entry of iterator) {\nconsole.log(entry);\n}\n\n// Output:\n// [ 0, 'A' ]\n// [ 1, 'B' ]\n// [ 2, 'C' ]``````\n\n## entries() Syntax\n\nThe syntax of the `entries()` method is:\n\n``arr.entries()``\n\nHere, arr is an array.\n\n## entries() Parameters\n\nThe `entries()` method does not take any parameters.\n\n## entries() Return Value\n\n• Returns a new `Array` iterator object.\n\nNote: The `entries()` method does not change the original array.\n\n## Example 1: Using entries() Method\n\n``````// defining an array\nconst languages = [\"Java\", \"C\", \"C++\", \"Python\"];\n\n// array iterator object that contains\n// key-value pairs for each index in the array\nlet iterator = languages.entries();\n\n// looping through key-value pairs in the array\nfor (let entry of iterator) {\nconsole.log(entry);\n}``````\n\nOutput\n\n```[ 0, 'Java' ]\n[ 1, 'C' ]\n[ 2, 'C++' ]\n[ 3, 'Python' ]```\n\nIn the above example, we have used the `entries()` method to get an Array iterator object of the key/value pair of each index in the language array.\n\nWe have then looped through iterator that prints the key/value pairs of each index.\n\n## Example 2: Using next() Method in Array Iterator Object\n\nArray Iterator object has a built-in method called `next()` which is used to get the next value in the object.\n\nInstead of looping through the iterator, we can get the key/value pairs using `next().value`. For example:\n\n``````// defining an array\nconst symbols = [\"#\", \"\\$\", \"*\"];\n\n// Array iterator object that holds key/value pairs\nlet iterator = symbols.entries();\n\n// using built-in next() method in Array iterator object\nconsole.log(iterator.next().value);\nconsole.log(iterator.next().value);\nconsole.log(iterator.next().value);``````\n\nOutput\n\n```[ 0, '#' ]\n[ 1, '\\$' ]\n[ 2, '*' ]```"
]
| [
null,
"https://www.facebook.com/tr",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5218504,"math_prob":0.68074167,"size":2030,"snap":"2022-27-2022-33","text_gpt3_token_len":503,"char_repetition_ratio":0.1786772,"word_repetition_ratio":0.1590214,"special_character_ratio":0.29014778,"punctuation_ratio":0.16535433,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.978538,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T13:31:18Z\",\"WARC-Record-ID\":\"<urn:uuid:72d5a143-a099-478c-8443-ba5928d9d121>\",\"Content-Length\":\"90108\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ef7107d-4028-4fc2-a372-32146875029d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b71d4c7-ae97-4ec8-b730-d838f89d5a09>\",\"WARC-IP-Address\":\"174.138.49.232\",\"WARC-Target-URI\":\"https://www.programiz.com/javascript/library/array/entries\",\"WARC-Payload-Digest\":\"sha1:R7OWWHI65NQFYYA7NTWKMRHCAPH4MS5L\",\"WARC-Block-Digest\":\"sha1:Z55WLEMRGILORAYT2HKVZFZMIMVG7PRL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103941562.52_warc_CC-MAIN-20220701125452-20220701155452-00178.warc.gz\"}"} |
https://tripprivacy.com/minutes-in-day/ | [
"# Minutes in day\n\n## How Many Minutes in a Day?\n\nA day consists of 24 hours and 1 hour consists of 60 minutes\nSo:\n24 x 60 = 1440 minutes\nThere are 1440 minutes in a day\n\nEssential Life Hacks To Save You Ti...\nEssential Life Hacks To Save You Time!\n\n### Convert minutes to day\n\nday =min * 0.00069444\n\nTo convert minutes into a day, you need to divide the number of minutes by 1440.\n\nNUMBER OF DAYS (DAYS) = NUMBER OF MINUTES / 1440\n\nFor example, in order to find out how many days are in 10080 minutes, you need 10080/1440 = 7 days.\n\n### Convert day to minutes\n\nTo convert a day to minutes, multiply the number of days by 1440.\n\nNUMBER OF MINUTES = NUMBER OF DAYS (DAYS) * 1440\n\nFor example, in order to find out how many minutes are in 2 days, you need 2 * 1440 = 2880 minutes.\n\nMinutes\nA minute is a unit of time that is equal to 60 seconds or 1/60 of an hour. In the universal coordinated time standard, a minute in rare cases can be equal to 59 or 61 seconds.\n\nDays\nA day (symbol: “d”) is a unit of time that is equal to 24 hours or 86,400 seconds. Officially, this is an off-system unit, but it can also be used in the International System of Units. In addition to the fact that a day is 86,400 seconds, this indicator is also used to determine some other periods of time based on the rotation of the Earth on its axis.\n\nThe year has 365 days, which means that it has = 365 * 1440 = 525,600 minutes per year.\n\nAnd in the year 12 months, which means that the average minutes per month = 525600 / 12 = 43800 minutes per month.\n\n## Units of Time\n\nMeasurement The time unit used to express time. Represents the continuous sequence of events. The units of time measurement in the metric system popular unit.\n\nMinutes in day\nScroll to top"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.90836746,"math_prob":0.96208864,"size":1579,"snap":"2022-40-2023-06","text_gpt3_token_len":429,"char_repetition_ratio":0.13460317,"word_repetition_ratio":0.07096774,"special_character_ratio":0.31158963,"punctuation_ratio":0.097345136,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9867983,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-02T19:22:23Z\",\"WARC-Record-ID\":\"<urn:uuid:d8266190-25e4-407c-b2e1-16d58628db84>\",\"Content-Length\":\"138492\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b161d328-dbb5-467c-a16f-f5db43b61566>\",\"WARC-Concurrent-To\":\"<urn:uuid:f0d309c5-d203-4e68-928f-03a43a535ab1>\",\"WARC-IP-Address\":\"52.86.133.10\",\"WARC-Target-URI\":\"https://tripprivacy.com/minutes-in-day/\",\"WARC-Payload-Digest\":\"sha1:MQ5WZBBRZ7HSCUBZ5UMTAJGERKH2AKKY\",\"WARC-Block-Digest\":\"sha1:EP776E3HRLIDJIJ2SBH5Q6FMZPYKYQNW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337339.70_warc_CC-MAIN-20221002181356-20221002211356-00228.warc.gz\"}"} |
https://socratic.org/questions/58e3251f7c01491e20de705d | [
"# For the 1/2 cell sf(MnO_4^(-)+8H^(+)+5erightleftharpoonsMn^(2+)+7H_2O) the value of sf(E^(@)=+1.51V). What is the value for E if sf([MnO_4^(-)]=0.0001M) and sf([Mn^(2+)]=0.0005M) ?\n\nApr 7, 2017\n\n$\\textsf{E = + 1.50 \\textcolor{w h i t e}{x} V}$\n\n#### Explanation:\n\nBefore the calculation it is helpful to make a prediction for the electrode potential using Le Chatelier's Principle:\n\nThe 1/2 cell reaction is:\n\nstackrel(color(white)(xxxxxxxxxxxxxxxxxxxxxxxxxxxxx))(color(blue)(larr)\n\n$\\textsf{M n {O}_{4}^{-} + 8 {H}^{+} + 5 e r i g h t \\le f t h a r p \\infty n s M {n}^{2 +} + 4 {H}_{2} O}$\n\nsf(color(red)(0.0001Mcolor(white)(xxxxxxxxx)0.0005M)\n\n$\\textsf{{E}^{\\circ} = + 1.51 \\textcolor{w h i t e}{x} V}$\n\nYou can see that the concentration of $\\textsf{M n {O}_{4}^{-}}$ has been reduced relative to the concentration of $\\textsf{M {n}^{2 +}}$.\n\nAccording to Le Chatelier we would predict that the position of equilibrium will shift to the left to oppose that change, as shown by the blue arrow.\n\nYou can see from the 1/2 equation that this will tend to push out more electrons so we would expect the electrode potential to be less positive.\n\nWe can calculate this using The Nernst Equation:\n\nsf(E=E^(@)-(RT)/(zF)ln(([red])/([\"ox\"]))\n\nAt 298K this simplifies to:\n\n$\\textsf{E = {E}^{\\circ} + \\frac{0.05916}{z} \\log \\left(\\frac{\\left[\\text{ox}\\right]}{\\left[red\\right]}\\right)}$\n\nWhere z is the number of moles of electrons transferred.\n\nThis becomes:\n\nsf(E=E^@+(0.05916)/(5)log(([MnO_4^-][H^+]^8)/([Mn^(2+)]))\n\nSince we are at pH = 0 we can say that $\\textsf{\\left[{H}^{+}\\right] = 1 \\textcolor{w h i t e}{x} M}$\n\nThis becomes:\n\nsf(E=E^@+(0.05916)/(5)log(([MnO_4^-])/([Mn^(2+)]))\n\nPutting in the numbers:\n\nsf(E=+1.51+(0.05916)/(5)log((0.0001)/(0.0005))\n\n$\\textsf{E = + 1.51 - 0.00827 = + 1.50 \\textcolor{w h i t e}{x} V}$\n\nAs you can see, this is in accordance with our prediction. The potential of the electrode has been made slightly less positive.\n\nPut simply, reducing the concentration of Mn(VII) relative to Mn(II) has made the Mn(VII) slightly less effective as an oxidising agent."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8590879,"math_prob":0.99984443,"size":1226,"snap":"2022-27-2022-33","text_gpt3_token_len":325,"char_repetition_ratio":0.10638298,"word_repetition_ratio":0.0,"special_character_ratio":0.25856444,"punctuation_ratio":0.09166667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998018,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-19T02:44:29Z\",\"WARC-Record-ID\":\"<urn:uuid:7ee162df-e6ac-4576-8eaa-52abc61a54cf>\",\"Content-Length\":\"37811\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f90c5c99-a315-458a-b73b-8800c7ea9655>\",\"WARC-Concurrent-To\":\"<urn:uuid:90456f33-894f-4e97-bce6-17efa7e2fa3d>\",\"WARC-IP-Address\":\"216.239.38.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/58e3251f7c01491e20de705d\",\"WARC-Payload-Digest\":\"sha1:7JX7SVKU7EHROOUZMJKM2NKLHS4Q2VOW\",\"WARC-Block-Digest\":\"sha1:QKRTTKJDEEROYTQMCAZZFDFIO5BP5N24\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573540.20_warc_CC-MAIN-20220819005802-20220819035802-00105.warc.gz\"}"} |
https://support.nag.com/numeric/nl/nagdoc_24/nagdoc_fl24/html/f01/f01crf.html | [
"F01 Chapter Contents\nF01 Chapter Introduction\nNAG Library Manual\n\n# NAG Library Routine DocumentF01CRF\n\nNote: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.\n\n## 1 Purpose\n\nF01CRF transposes a rectangular matrix in-situ.\n\n## 2 Specification\n\n SUBROUTINE F01CRF ( A, M, N, MN, MOVE, LMOVE, IFAIL)\n INTEGER M, N, MN, MOVE(LMOVE), LMOVE, IFAIL REAL (KIND=nag_wp) A(MN)\n\n## 3 Description\n\nF01CRF requires that the elements of an $m$ by $n$ matrix $A$ are stored consecutively by columns in a one-dimensional array. It reorders the elements so that on exit the array holds the transpose of $A$ stored in the same way. For example, if $m=4$ and $n=3$, on entry the array must hold:\n $a11 a21 a31 a41 a12 a22 a32 a42 a13 a23 a33 a43$\nand on exit it holds\n $a11 a12 a13 a21 a22 a23 a31 a32 a33 a41 a42 a43.$\nCate E G and Twigg D W (1977) Algorithm 513: Analysis of in-situ transposition ACM Trans. Math. Software 3 104–110\n\n## 5 Parameters\n\n1: A(MN) – REAL (KIND=nag_wp) arrayInput/Output\nOn entry: the elements of the $m$ by $n$ matrix $A$, stored by columns.\nOn exit: the elements of the transpose matrix, also stored by columns.\n2: M – INTEGERInput\nOn entry: $m$, the number of rows of the matrix $A$.\n3: N – INTEGERInput\nOn entry: $n$, the number of columns of the matrix $A$.\n4: MN – INTEGERInput\nOn entry: $n$, the value $m×n$.\n5: MOVE(LMOVE) – INTEGER arrayWorkspace\n6: LMOVE – INTEGERInput\nOn entry: the dimension of the array MOVE as declared in the (sub)program from which F01CRF is called.\nSuggested value: ${\\mathbf{LMOVE}}=\\left(m+n\\right)/2$.\nConstraint: ${\\mathbf{LMOVE}}\\ge 1$.\n7: IFAIL – INTEGERInput/Output\nOn entry: IFAIL must be set to $0$, $-1\\text{ or }1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details.\nFor environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$. When the value $-\\mathbf{1}\\text{ or }\\mathbf{1}$ is used it is essential to test the value of IFAIL on exit.\nOn exit: ${\\mathbf{IFAIL}}={\\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).\n\n## 6 Error Indicators and Warnings\n\nIf on entry ${\\mathbf{IFAIL}}={\\mathbf{0}}$ or $-{\\mathbf{1}}$, explanatory error messages are output on the current error message unit (as defined by X04AAF).\nErrors or warnings detected by the routine:\n${\\mathbf{IFAIL}}=1$\n On entry, ${\\mathbf{MN}}\\ne {\\mathbf{M}}×{\\mathbf{N}}$.\n${\\mathbf{IFAIL}}=2$\n On entry, ${\\mathbf{LMOVE}}\\le 0$.\n${\\mathbf{IFAIL}}<0$\nA serious error has occurred. Check all subroutine calls and array sizes. Seek expert help.\n\n## 7 Accuracy\n\nExact results are produced.\n\nThe time taken by F01CRF is approximately proportional to $mn$.\n\n## 9 Example\n\nThis example transposes a $7$ by $3$ matrix and prints out, for convenience, its transpose.\n\n### 9.1 Program Text\n\nProgram Text (f01crfe.f90)\n\n### 9.2 Program Data\n\nProgram Data (f01crfe.d)\n\n### 9.3 Program Results\n\nProgram Results (f01crfe.r)"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.75193465,"math_prob":0.99123836,"size":2457,"snap":"2023-40-2023-50","text_gpt3_token_len":646,"char_repetition_ratio":0.11699959,"word_repetition_ratio":0.028235294,"special_character_ratio":0.25111926,"punctuation_ratio":0.15226337,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99735254,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-30T05:09:40Z\",\"WARC-Record-ID\":\"<urn:uuid:f6f5b687-a152-4fa5-b242-f9a2dc3d8740>\",\"Content-Length\":\"14999\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:29dd7a25-58f7-44ab-9da9-980ba70f9bd2>\",\"WARC-Concurrent-To\":\"<urn:uuid:455ab42d-1065-4f9f-8d16-62f61f344a0c>\",\"WARC-IP-Address\":\"78.129.168.4\",\"WARC-Target-URI\":\"https://support.nag.com/numeric/nl/nagdoc_24/nagdoc_fl24/html/f01/f01crf.html\",\"WARC-Payload-Digest\":\"sha1:4NXFGVJKOEHXXPRZVQPMLSENELIAAAYR\",\"WARC-Block-Digest\":\"sha1:XV6ALIUSB2MEVTI3MHCAST5OQQK4YQMW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100164.87_warc_CC-MAIN-20231130031610-20231130061610-00003.warc.gz\"}"} |
https://www.maplesoft.com/support/help/maple/view.aspx?path=Student%2FLinearAlgebra%2FLinearSolveTutor | [
"",
null,
"LinearSolveTutor - Maple Help\n\nStudent[LinearAlgebra][LinearSolveTutor] - interactive solution of linear systems",
null,
"Calling Sequence LinearSolveTutor(M) LinearSolveTutor(M, v)",
null,
"Parameters\n\n M - Matrix v - Vector",
null,
"Description\n\n • The LinearSolveTutor(M) command allows you to interactively solve the system of equations represented by the Matrix M. The Matrix is interpreted as an augmented matrix whose last column specifies the right-hand side. It returns the solution as a column Vector.\n • The LinearSolveTutor(M, v) command allows you to interactively solve the system $M·x=v$. It returns the solution as a column Vector.\n • For both calling sequences, you must choose whether to use Gaussian elimination (producing the row echelon form of the Matrix) or Gauss-Jordan elimination (producing the reduced row echelon form of the Matrix). To make a selection, click the corresponding button on the initially displayed Maplet application.\n • Floating-point numbers in M or v are converted to rationals before computation begins.\n • The dimensions of the Matrix must be no greater than 5x5.",
null,
"Examples\n\n > $\\mathrm{with}\\left(\\mathrm{Student}\\left[\\mathrm{LinearAlgebra}\\right]\\right):$\n > $M≔⟨⟨1,2,0⟩|⟨2,3,2⟩|⟨0,2,1⟩|⟨3,5,5⟩⟩$\n ${M}{≔}\\left[\\begin{array}{cccc}{1}& {2}& {0}& {3}\\\\ {2}& {3}& {2}& {5}\\\\ {0}& {2}& {1}& {5}\\end{array}\\right]$ (1)\n > $v≔⟨5,4,2⟩$\n ${v}{≔}\\left[\\begin{array}{c}{5}\\\\ {4}\\\\ {2}\\end{array}\\right]$ (2)\n > $\\mathrm{LinearSolveTutor}\\left(M\\right)$\n > $\\mathrm{LinearSolveTutor}\\left(M,v\\right)$",
null,
"See Also"
]
| [
null,
"https://bat.bing.com/action/0",
null,
"https://www.maplesoft.com/support/help/maple/arrow_down.gif",
null,
"https://www.maplesoft.com/support/help/maple/arrow_down.gif",
null,
"https://www.maplesoft.com/support/help/maple/arrow_down.gif",
null,
"https://www.maplesoft.com/support/help/maple/arrow_down.gif",
null,
"https://www.maplesoft.com/support/help/maple/arrow_down.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.71988386,"math_prob":0.99702597,"size":1455,"snap":"2021-43-2021-49","text_gpt3_token_len":377,"char_repetition_ratio":0.16678153,"word_repetition_ratio":0.10404624,"special_character_ratio":0.195189,"punctuation_ratio":0.12195122,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99765146,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-01T00:44:20Z\",\"WARC-Record-ID\":\"<urn:uuid:3602032b-ab32-4f20-a8fd-523a75f56ad9>\",\"Content-Length\":\"152517\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9f37a9c1-0816-4f42-8307-cdfd412e9665>\",\"WARC-Concurrent-To\":\"<urn:uuid:e3293946-d0ff-43ef-a515-0ac77bc3f3a3>\",\"WARC-IP-Address\":\"199.71.183.28\",\"WARC-Target-URI\":\"https://www.maplesoft.com/support/help/maple/view.aspx?path=Student%2FLinearAlgebra%2FLinearSolveTutor\",\"WARC-Payload-Digest\":\"sha1:I2OJLPMF3QSNVXLXNLL65AILNLA6LKCB\",\"WARC-Block-Digest\":\"sha1:GI4BSJOE552TF65VBVWXCYVUYOP6DZXN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964359082.76_warc_CC-MAIN-20211130232232-20211201022232-00104.warc.gz\"}"} |
https://www.npmjs.com/package/speedy-vision | [
"# npm\n\n## speedy-vision",
null,
"0.9.0-wip • Public • Published\n\n# speedy-vision.js\n\nBuild real-time stuff with speedy-vision.js, a GPU-accelerated Computer Vision library for JavaScript.",
null,
"## Features\n\n• Feature detection\n• Harris corner detector\n• FAST feature detector\n• ORB feature descriptor\n• Feature tracking\n• KLT feature tracker\n• LK optical flow\n• Feature matching\n• Soon\n• Geometric transformations\n• Homography matrix\n• Affine transform\n• Image processing\n• Convert to greyscale\n• Convolution\n• Gaussian blur, box & median filters\n• Image normalization & warping\n• Image pyramids\n• Linear Algebra\n• Beautiful matrix algebra with a fluent interface\n• Efficient computations with WebAssembly\n• Systems of linear equations\n• QR decomposition\n\n... and more in development!\n\nThere are plenty of demos available!\n\n## Author\n\nspeedy-vision.js is developed by Alexandre Martins, a computer scientist from Brazil. It is released under the Apache-2.0 license.\n\nIf my work is of value to you, make a donation. Thank you.\n\nFor general enquiries, contact me at `alemartf` `at` `gmail` `dot` `com`.\n\n## Demos\n\nTry the demos and take a look at their source code:\n\n## Installation\n\nDownload the latest release of speedy-vision.js and include it in the `<head>` section of your HTML page:\n\n`<script src=\"dist/speedy-vision.min.js\"></script>`\n\nOnce you import the library, the `Speedy` object will be exposed. Check out the Hello World demo for a working example.\n\n## Motivation\n\nDetecting features in an image is an important step of many computer vision algorithms. Traditionally, the computationally expensive nature of this process made it difficult to bring interactive Computer Vision applications to the web browser. The framerates were unsatisfactory for a compelling user experience. Speedy, a short name for speedy-vision.js, is a JavaScript library created to address this issue.\n\nSpeedy's real-time performance in the web browser is possible thanks to its efficient WebGL2 backend and to its GPU implementations of fast computer vision algorithms. With an easy-to-use API, Speedy is an excellent choice for real-time computer vision projects involving tasks such as: object detection in videos, pose estimation, Simultaneous Location and Mapping (SLAM), and others.\n\n## The Pipeline\n\nThe pipeline is a central concept in Speedy. It's a powerful structure that lets you organize the computations that take place in the GPU. It's a very flexible, yet conceptually simple, way of working with computer vision and image processing. Let's define a few things:\n\n• A pipeline is a network of nodes in which data flows downstream from one or more sources to one or more sinks.\n• Nodes have input and/or output ports. A node with no input ports is called a source. A node with no output ports is called a sink. A node with both input and output ports transforms the input data in some way and writes the results to its output port(s).\n• A link connects an output port of a node to an input port of another node. Two nodes are said to be connected if there is a link connecting their ports. Data flows from one node to another by means of a link. An input port may only be connected to a single output port, but an output port may be connected to multiple input ports.\n• Input ports expect data of a certain type (e.g., an image). Output ports hold data of a certain type. Two ports may only be connected if their types match.\n• Ports may impose additional constraints on the data passing through them. For example, an input port may expect an image and also impose the constraint that this image must be greyscale.\n• Different nodes may have different parameters. These parameters can be adjusted and are meant to modify the output of the nodes in some way.\n• Nodes and their ports have names. An input port is typically called `\"in\"`. An output port is typically called `\"out\"`. These names can vary, e.g., if a node has more than one input / output port. Speedy automatically assigns names to the nodes, but you can assign your own names as well.\n\nThe picture below shows a visual representation of a pipeline that converts an image or video to greyscale. Data gets into the pipeline via the image source. It is then passed to the Convert to greyscale node. Finally, a greyscale image goes into the image sink, where it gets out of the pipeline.",
null,
"Here's a little bit of code:\n\n```// Load an image\nconst img = document.querySelector('img');\n\n// Create the pipeline and the nodes\nconst pipeline = Speedy.Pipeline();\nconst source = Speedy.Image.Source();\nconst sink = Speedy.Image.Sink();\nconst greyscale = Speedy.Filter.Greyscale();\n\n// Set the media source\nsource.media = media; // media is a SpeedyMedia object\n\n// Connect the nodes\nsource.output().connectTo(greyscale.input());\ngreyscale.output().connectTo(sink.input());\n\n// Specify the nodes to initialize the pipeline\npipeline.init(source, sink, greyscale);\n\n// Run the pipeline\nconst { image } = await pipeline.run(); // image is a SpeedyMedia\n\n// Create a <canvas> to display the result\nconst canvas = document.createElement('canvas');\ncanvas.width = image.width;\ncanvas.height = image.height;\ndocument.body.appendChild(canvas);\n\n// Display the result\nconst ctx = canvas.getContext('2d');\nctx.drawImage(media.source, 0, 0);```\n\nSpeedy provides many types of nodes. You can connect these nodes in a way that is suitable to your application, and Speedy will bring back the results you ask for.\n\n## API Reference\n\n### Media routines\n\nA `SpeedyMedia` object encapsulates a media object: an image, a video, a canvas or a bitmap.\n\n`Speedy.load(source: HTMLImageElement | HTMLVideoElement | HTMLCanvasElement | ImageBitmap, options?: object): SpeedyPromise<SpeedyMedia>`\n\nTells Speedy to load `source`. The `source` parameter may be an image, a video, a canvas or a bitmap.\n\n###### Arguments\n• `source: HTMLImageElement | HTMLVideoElement | HTMLCanvasElement | ImageBitmap`. The media source.\n• `options: object, optional`. Additional options for advanced configuration. See SpeedyMedia.options for details.\n###### Returns\n\nA `SpeedyPromise<SpeedyMedia>` that resolves as soon as the media source is loaded.\n\n###### Example\n```window.onload = async function() {\nlet image = document.getElementById('my-image'); // <img id=\"my-image\" src=\"...\">\n}```\n##### Speedy.camera()\n\n`Speedy.camera(width?: number, height?: number): SpeedyPromise<SpeedyMedia>`\n\n`Speedy.camera(constraints: MediaStreamConstraints): SpeedyPromise<SpeedyMedia>`\n\nLoads a camera stream into a new `SpeedyMedia` object. This is a wrapper around `navigator.mediaDevices.getUserMedia()`, provided for your convenience.\n\n###### Arguments\n• `width: number, optional`. The ideal width of the stream. The browser will use this value or a close match. Defaults to `640`.\n• `height: number, optional`. The ideal height of the stream. The browser will use this value or a close match. Defaults to `360`.\n• `constraints: MediaStreamConstraints`. A MediaStreamConstraints dictionary to be passed to `getUserMedia()` for complete customization.\n###### Returns\n\nA `SpeedyPromise<SpeedyMedia>` that resolves as soon as the media source is loaded with the camera stream.\n\n###### Example\n```// Display the contents of a webcam\nconst media = await Speedy.camera();\nconst canvas = createCanvas(media.width, media.height);\nconst ctx = canvas.getContext('2d');\n\nfunction render()\n{\nctx.drawImage(media.source, 0, 0);\nrequestAnimationFrame(render);\n}\n\nrender();\n}\n\nfunction createCanvas(width, height)\n{\nconst canvas = document.createElement('canvas');\n\ncanvas.width = width;\ncanvas.height = height;\ndocument.body.appendChild(canvas);\n\nreturn canvas;\n}```\n##### SpeedyMedia.release()\n\n`SpeedyMedia.release(): null`\n\nReleases internal resources associated with this `SpeedyMedia`.\n\n###### Returns\n\nReturns `null`.\n\n#### Media properties\n\n##### SpeedyMedia.source\n\n`SpeedyMedia.source: HTMLImageElement | HTMLVideoElement | HTMLCanvasElement | ImageBitmap, read-only`\n\nThe media source associated with the `SpeedyMedia` object.\n\n##### SpeedyMedia.type\n\n`SpeedyMedia.type: string, read-only`\n\nThe type of the media source. One of the following: `\"image\"`, `\"video\"`, `\"canvas\"`, `\"bitmap\"`.\n\n##### SpeedyMedia.width\n\n`SpeedyMedia.width: number, read-only`\n\nThe width of the media source, in pixels.\n\n##### SpeedyMedia.height\n\n`SpeedyMedia.height: number, read-only`\n\nThe height of the media source, in pixels.\n\n##### SpeedyMedia.size\n\n`SpeedyMedia.size: SpeedySize, read-only`\n\nThe size of the media, in pixels.\n\n##### SpeedyMedia.options\n\n`SpeedyMedia.options: object, read-only`\n\n##### SpeedyMedia.clone()\n\n`SpeedyMedia.clone(): SpeedyPromise<SpeedyMedia>`\n\nClones the `SpeedyMedia` object.\n\n###### Returns\n\nA `SpeedyPromise` that resolves to a clone of the `SpeedyMedia` object.\n\n###### Example\n`const clone = await media.clone();`\n##### SpeedyMedia.toBitmap()\n\n`SpeedyMedia.toBitmap(): SpeedyPromise<ImageBitmap>`\n\nConverts the media to an `ImageBitmap`.\n\n###### Returns\n\nA `SpeedyPromise` that resolves to an `ImageBitmap`.\n\n### Pipeline\n\n#### Basic routines\n\n##### Speedy.Pipeline.Pipeline()\n\n`Speedy.Pipeline.Pipeline(): SpeedyPipeline`\n\nCreates a new, empty pipeline.\n\n###### Returns\n\nA new `SpeedyPipeline` object.\n\n##### SpeedyPipeline.init()\n\n`SpeedyPipeline.init(...nodes: ...SpeedyPipelineNode): SpeedyPipeline`\n\nInitializes a pipeline with the specified `nodes`.\n\n###### Arguments\n• `...nodes: ...SpeedyPipelineNode`. The list of nodes that belong to the pipeline.\n###### Returns\n\nThe pipeline itself.\n\n###### Example\n```const pipeline = Speedy.Pipeline(); // create the pipeline and the nodes\nconst source = Speedy.Image.Source();\nconst sink = Speedy.Image.Sink();\nconst greyscale = Speedy.Filter.Greyscale();\n\nsource.media = media; // set the media source\n\nsource.output().connectTo(greyscale.input()); // connect the nodes\ngreyscale.output().connectTo(sink.input());\n\npipeline.init(source, sink, greyscale); // add the nodes to the pipeline```\n##### SpeedyPipeline.release()\n\n`SpeedyPipeline.release(): null`\n\nReleases the resources associated with `this` pipeline.\n\n###### Returns\n\nReturns `null`.\n\n##### SpeedyPipeline.run()\n\n`SpeedyPipeline.run(): SpeedyPromise<object>`\n\nRuns `this` pipeline.\n\n###### Returns\n\nReturns a `SpeedyPromise` that resolves to an object whose keys are the names of the sinks of the pipeline and whose values are the data exported by those sinks.\n\n###### Example\n`const { sink1, sink2 } = await pipeline.run();`\n##### SpeedyPipeline.node()\n\n`SpeedyPipeline.node(name: string): SpeedyPipelineNode | null`\n\nFinds a node by its `name`.\n\n###### Arguments\n• `name: string`. Name of the target node.\n###### Returns\n\nReturns a `SpeedyPipelineNode` that has the specified `name` and that belongs to `this` pipeline, or `null` if there is no such node.\n\n##### SpeedyPipelineNode.input()\n\n`SpeedyPipelineNode.input(portName?: string): SpeedyPipelineNodePort`\n\nThe input port of `this` node whose name is `portName`.\n\n###### Arguments\n• `portName: string, optional`. The name of the port you want to access. Defaults to `\"in\"`.\n###### Returns\n\nThe requested input port.\n\n##### SpeedyPipelineNode.output()\n\n`SpeedyPipelineNode.output(portName?: string): SpeedyPipelineNodePort`\n\nThe output port of `this` node whose name is `portName`.\n\n###### Arguments\n• `portName: string, optional`. The name of the port you want to access. Defaults to `\"out\"`.\n###### Returns\n\nThe requested output port.\n\n##### SpeedyPipelineNodePort.connectTo()\n\n`SpeedyPipelineNodePort.connectTo(port: SpeedyPipelineNodePort): void`\n\nCreates a link connecting `this` port to another `port`.\n\n#### Basic properties\n\n##### SpeedyPipelineNode.name\n\n`SpeedyPipelineNode.name: string, read-only`\n\nThe name of the node.\n\n##### SpeedyPipelineNode.fullName\n\n`SpeedyPipelineNode.fullName: string, read-only`\n\nA string that exhibits the name and the type of the node.\n\n##### SpeedyPipelineNodePort.name\n\n`SpeedyPipelineNodePort.name: string, read-only`\n\nThe name of the port.\n\n##### SpeedyPipelineNodePort.node\n\n`SpeedyPipelineNodePort.node: SpeedyPipelineNode, read-only`\n\nThe node to which `this` port belongs.\n\n#### Basic nodes\n\n##### Speedy.Image.Source()\n\n`Speedy.Image.Source(name?: string): SpeedyPipelineNodeImageInput`\n\nCreates an image source with the specified name. If the name is not specified, Speedy will automatically generate a name for you.\n\n###### Parameters\n• `media: SpeedyMedia`. The media to be imported into the pipeline.\n###### Ports\nPort name Data type Description\n`\"out\"` Image An image corresponding to the `media` of this node.\n##### Speedy.Image.Sink()\n\n`Speedy.Image.Sink(name?: string): SpeedyPipelineNodeImageOutput`\n\nCreates an image sink with the specified name. If the name is not specified, Speedy will call this node `\"image\"`. A `SpeedyMedia` object will be exported from the pipeline.\n\n###### Ports\nPort name Data type Description\n`\"in\"` Image An image to be exported from the pipeline.\n\n### Image processing\n\n#### Image basics\n\n##### Speedy.Image.Pyramid()\n\n`Speedy.Image.Pyramid(name?: string): SpeedyPipelineNodeImagePyramid`\n\nGenerate a Gaussian pyramid. A pyramid is a texture with mipmaps.\n\nPort name Data type Description\n`\"in\"` Image Input image.\n`\"out\"` Image Gaussian pyramid.\n##### Speedy.Image.Multiplexer()\n\n`Speedy.Image.Multiplexer(name?: string): SpeedyPipelineNodeImageMultiplexer`\n\nAn image multiplexer receives two images as input and outputs one of the them.\n\n###### Parameters\n• `port: number`. Which input image should be redirected to the output: `0` or `1`? Defaults to `0`.\n###### Ports\nPort name Data type Description\n`\"in0\"` Image First image.\n`\"in1\"` Image Second image.\n`\"out\"` Image Either the first or the second image, depending on the value of `port`.\n##### Speedy.Image.Buffer()\n\n`Speedy.Image.Buffer(name?: string): SpeedyPipelineNodeImageBuffer`\n\nAn image buffer outputs at time t the input image received at time t-1. It's useful for tracking.\n\nNote: an image buffer cannot be used to store a pyramid at this time.\n\n###### Parameters\n• `frozen: boolean`. A frozen buffer discards the input, effectively increasing the buffering time. Defaults to `false`.\n###### Ports\nPort name Data type Description\n`\"in\"` Image Input image at time t.\n`\"out\"` Image Output image: the input image at time t-1.\n##### Speedy.Image.Mixer()\n\n`Speedy.Image.Mixer(name?: string): SpeedyPipelineNodeImageMixer`\n\nAn image mixer combines two images, image0 and image1, as follows:\n\noutput = `alpha` * image0 + `beta` * image1 + `gamma`\n\nThe above expression will be computed for each pixel of the resulting image and then clamped to the [0,1] interval. The dimensions of the resulting image will be the dimensions of the larger of the input images.\n\nNote: Both input images must have the same format. If they're colored, the above expression will be evaluated in each color channel independently.\n\nTip: if you pick an `alpha` between 0 and 1, set `beta` to `1 - alpha` and set `gamma` to 0, you'll get a nice alpha blending effect.\n\n###### Parameters\n• `alpha: number`. A scalar value. Defaults to 0.5.\n• `beta: number`. A scalar value. Defaults to 0.5.\n• `gamma: number`. A scalar value. Defaults to 0.0.\n###### Ports\nPort name Data type Description\n`\"in0\"` Image Input image: the image0 above\n`\"in1\"` Image Input image: the image1 above\n`\"out\"` Image Output image\n\n#### Image filters\n\n##### Speedy.Filter.Greyscale()\n\n`Speedy.Filter.Greyscale(name?: string): SpeedyPipelineNodeGreyscale`\n\nConvert an image to greyscale.\n\n###### Ports\nPort name Data type Description\n`\"in\"` Image Input image.\n`\"out\"` Image The input image converted to greyscale.\n##### Speedy.Filter.SimpleBlur()\n\n`Speedy.Filter.SimpleBlur(name?: string): SpeedyPipelineNodeSimpleBlur`\n\nBlur an image using a box filter.\n\n###### Parameters\n• `kernelSize: SpeedySize`. The size of the convolution kernel: from 3x3 to 15x15. Defaults to 5x5.\n###### Ports\nPort name Data type Description\n`\"in\"` Image Input image.\n`\"out\"` Image The input image, blurred.\n##### Speedy.Filter.GaussianBlur()\n\n`Speedy.Filter.SimpleBlur(name?: string): SpeedyPipelineNodeGaussianBlur`\n\nBlur an image using a Gaussian filter.\n\n###### Parameters\n• `kernelSize: SpeedySize`. The size of the convolution kernel: from 3x3 to 15x15. Defaults to 5x5.\n• `sigma: SpeedyVector2`. The sigma of the Gaussian function in both x and y axes. If set to the zero vector, Speedy will automatically pick a sigma according to the selected `kernelSize`. Defaults to (0,0).\n###### Ports\nPort name Data type Description\n`\"in\"` Image Input image.\n`\"out\"` Image The input image, blurred.\n##### Speedy.Filter.MedianBlur()\n\n`Speedy.Filter.MedianBlur(name?: string): SpeedyPipelineNodeMedianBlur`\n\nMedian filter.\n\n###### Parameters\n• `kernelSize: SpeedySize`. One of the following: 3x3, 5x5 or 7x7. Defaults to 5x5.\n###### Ports\nPort name Data type Description\n`\"in\"` Image A greyscale image.\n`\"out\"` Image The result of the median blur.\n###### Example\n```const median = Speedy.Filter.MedianBlur();\nmedian.kernelSize = Speedy.Size(7,7);```\n##### Speedy.Filter.Convolution()\n\n`Speedy.Filter.Convolution(name?: string): SpeedyPipelineNodeConvolution`\n\nCompute the convolution of an image using a 2D kernel.\n\n###### Parameters\n• `kernel: SpeedyMatrixExpr`. A 3x3, 5x5 or 7x7 matrix.\n###### Ports\nPort name Data type Description\n`\"in\"` Image Input image.\n`\"out\"` Image The result of the convolution.\n###### Example\n```// Sharpening an image\nconst sharpen = Speedy.Filter.Convolution();\nsharpen.kernel = Speedy.Matrix(3, 3, [\n0,-1, 0,\n-1, 5,-1,\n0,-1, 0\n]);```\n##### Speedy.Filter.Normalize()\n\n`Speedy.Filter.Normalize(name?: string): SpeedyPipelineNodeNormalize`\n\nNormalize the intensity values of the input image to the [`minValue`, `maxValue`] interval.\n\n###### Parameters\n• `minValue: number`. A value in [0,255].\n• `maxValue: number`. A value in [0,255] greater than or equal to `minValue`.\n###### Ports\nPort name Data type Description\n`\"in\"` Image Greyscale image.\n`\"out\"` Image Normalized image.\n##### Speedy.Filter.Nightvision()\n\n`Speedy.Filter.Nightvision(name?: string): SpeedyPipelineNodeNightvision`\n\nNightvision filter for local contrast stretching and brightness control.\n\n###### Parameters\n• `gain: number`. A value in [0,1]: the larger the number, the higher the contrast. Defaults to `0.5`.\n• `offset: number`. A value in [0,1] that controls the brightness. Defaults to `0.5`.\n• `decay: number`. A value in [0,1] specifying a contrast decay from the center of the image. Defaults to zero (no decay).\n• `quality: string`. Quality level: `\"high\"`, `\"medium\"` or `\"low\"`. Defaults to `\"medium\"`.\n###### Ports\nPort name Data type Description\n`\"in\"` Image Input image.\n`\"out\"` Image Output image.\n\n#### General transformations\n\n##### Speedy.Transform.Resize()\n\n`Speedy.Transform.Resize(name?: string): SpeedyPipelineNodeResize`\n\nResize an image.\n\n###### Parameters\n• `size: SpeedySize`. The size of the output image, in pixels. If set to zero, `scale` will be used to determine the size of the output. Defaults to zero.\n• `scale: SpeedyVector2`. The size of the output image relative to the size of the input image. This parameter is only applied if `size` is zero. Defaults to (1,1), meaning: keep the original size.\n• `method: string`. Resize method. One of the following: `\"bilinear\"` (bilinear interpolation) or `\"nearest\"` (nearest neighbors). Defaults to `\"bilinear\"`.\n###### Ports\nPort name Data type Description\n`\"in\"` Image Input image.\n`\"out\"` Image Resized image.\n##### Speedy.Transform.PerspectiveWarp()\n\n`Speedy.Transform.PerspectiveWarp(name?: string): SpeedyPipelineNodePerspectiveWarp`\n\nWarp an image using a homography matrix.\n\n###### Parameters\n• `transform: SpeedyMatrixExpr`. A 3x3 perspective transformation. Defaults to the identity matrix.\n###### Ports\nPort name Data type Description\n`\"in\"` Image Input image.\n`\"out\"` Image Warped image.\n\n### Keypoints and descriptors\n\nA keypoint is a small patch in an image that is somehow distinctive. For example, a small patch with significant intensity changes in both x and y axes (i.e., a \"corner\") is distinctive. If we pick two \"similar\" images, we should be able to locate a set of keypoints in each of them and then match those keypoints based on their similarity.\n\nA descriptor is a mathematical object that somehow describes a keypoint. Two keypoints are considered to be \"similar\" if their descriptors are \"similar\". Speedy works with binary descriptors, meaning that keypoints are described using bit vectors of fixed length.\n\nThere are different ways to detect and describe keypoints. For example, in order to detect a keypoint, you may take a look at the pixel intensities around a point or perhaps study the image derivatives. You may describe a keypoint by comparing the pixel intensities of the image patch in a special way. Additionally, it's possible to conceive a way to describe a keypoint in such a way that, if you rotate the patch, the descriptor stays roughly the same. This is called rotational invariance and is usually a desirable property for a descriptor.\n\nSpeedy offers different options for processing keypoints in multiple ways. A novelty of this work is that Speedy's implementations have been either adapted from the literature or conceived from scratch to work on the GPU. Therefore, keypoint processing is done in parallel and is often very fast.\n\n#### Keypoint basics\n\n##### SpeedyKeypoint\n\nA `SpeedyKeypoint` object represents a keypoint.\n\n###### SpeedyKeypoint.position\n\n`SpeedyKeypoint.position: SpeedyPoint2`\n\nThe position of the keypoint in the image.\n\n###### SpeedyKeypoint.x\n\n`SpeedyKeypoint.x: number`\n\nThe x position of the keypoint in the image. A shortcut to `position.x`.\n\n###### SpeedyKeypoint.y\n\n`SpeedyKeypoint.y: number`\n\nThe y position of the keypoint in the image. A shortcut to `position.y`.\n\n###### SpeedyKeypoint.lod\n\n`SpeedyKeypoint.lod: number`\n\nThe level-of-detail (pyramid level) from which the keypoint was extracted, starting from zero. Defaults to `0.0`.\n\n###### SpeedyKeypoint.scale\n\n`SpeedyKeypoint.scale: number`\n\nThe scale of the keypoint. This is equivalent to 2 ^ lod. Defaults to `1.0`.\n\n###### SpeedyKeypoint.rotation\n\n`SpeedyKeypoint.rotation: number`\n\nThe orientation angle of the keypoint, in radians. Defaults to `0.0`.\n\n###### SpeedyKeypoint.score\n\n`SpeedyKeypoint.score: number`\n\nThe score is a measure associated with the keypoint. Although different detection methods employ different measurement strategies, the larger the score, the \"better\" the keypoint is considered to be. The score is always a positive value.\n\n###### SpeedyKeypoint.descriptor\n\n`SpeedyKeypoint.descriptor: SpeedyKeypointDescriptor | null, read-only`\n\nThe descriptor associated with the keypoint, if it exists.\n\n##### SpeedyKeypointDescriptor\n\nA `SpeedyKeypointDescriptor` represents a keypoint descriptor.\n\n###### SpeedyKeypointDescriptor.data\n\n`SpeedyKeypointDescriptor.data: Uint8Array, read-only`\n\nThe bytes of the keypoint descriptor.\n\n###### SpeedyKeypointDescriptor.size\n\n`SpeedyKeypointDescriptor.size: number, read-only`\n\nThe size of the keypoint descriptor, in bytes.\n\n###### SpeedyKeypointDescriptor.toString()\n\n`SpeedyKeypointDescriptor.toString(): string`\n\nReturns a string representation of the keypoint descriptor.\n\n##### SpeedyTrackedKeypoint\n\nA `SpeedyTrackedKeypoint` is a `SpeedyKeypoint` with the following additional properties:\n\n###### SpeedyTrackerKeypoint.flow\n\n`SpeedyTrackedKeypoint.flow: SpeedyVector2`\n\nA displacement vector associated with the tracked keypoint.\n\n##### Speedy.Keypoint.Source()\n\n`Speedy.Keypoint.Source(name?: string): SpeedyPipelineNodeKeypointSource`\n\nCreates a source of keypoints. Only the position, score and scale of the provided keypoints will be imported to the pipeline. Descriptors, if present, will be lost.\n\n###### Parameters\n• `keypoints: SpeedyKeypoint[]`. The keypoints you want to import.\n• `capacity: number`. The maximum number of keypoints that can be imported to the GPU. If you have an idea of how many keypoints you expect (at most), use a tight bound to make processing more efficient. The default capacity is `2048`. It can be no larger than `8192`.\n###### Ports\nPort name Data type Description\n`\"out\"` Keypoints The imported set of keypoints.\n##### Speedy.Keypoint.Sink()\n\n`Speedy.Keypoint.Sink(name?: string): SpeedyPipelineNodeKeypointSink`\n\nCreates a sink of keypoints using the specified name. If the name is not specified, Speedy will call this node `\"keypoints\"`. An array of `SpeedyKeypoint` objects will be exported from the pipeline.\n\n###### Parameters\n• `turbo: boolean`. Accelerate GPU-CPU transfers. You'll get the data from the previous frame. Defaults to `false`.\n• `includeDiscarded: boolean`. Set discarded keypoints (e.g., by a tracker) to `null` in the exported set. Defaults to `false`, meaning that discarded keypoints will simply be dropped from the exported set rather than being set to `null`.\n###### Ports\nPort name Data type Description\n`\"in\"` Keypoints A set of keypoints to be exported from the pipeline.\n##### Speedy.Keypoint.SinkOfTrackedKeypoints()\n\n`Speedy.Keypoint.SinkOfTrackedKeypoints(name?: string): SpeedyPipelineNodeTrackedKeypointSink`\n\nCreates a sink of tracked keypoints using the specified name. If the name is not specified, Speedy will call this node `\"keypoints\"`. An array of `SpeedyTrackedKeypoint` objects will be exported from the pipeline.\n\n###### Parameters\n\nThe same as `SpeedyPipelineNodeKeypointSink`.\n\n###### Ports\nPort name Data type Description\n`\"in\"` Keypoints A set of keypoints to be exported from the pipeline.\n`\"flow\"` Vector2 A set of displacement vectors associated with each keypoint.\n##### Speedy.Keypoint.Clipper()\n\n`Speedy.Keypoint.Clipper(name?: string): SpeedyPipelineNodeKeypointClipper`\n\nClips a set of keypoints, so that it outputs no more than a fixed quantity of them. When generating the output, it will choose the \"best\" keypoints according to their score metric. The keypoint clipper is a very useful tool to reduce processing time, since it can discard \"bad\" keypoints regardless of the sensitivity of their detector. The clipping must be applied before computing any descriptors.\n\n###### Parameters\n• `size: number`. A positive integer. No more than this number of keypoints will be available in the output.\n###### Ports\nPort name Data type Description\n`\"in\"` Keypoints A set of keypoints.\n`\"out\"` Keypoints A set of at most `size` keypoints.\n##### Speedy.Keypoint.BorderClipper()\n\n`Speedy.Keypoint.BorderClipper(name?: string): SpeedyPipelineNodeKeypointBorderClipper`\n\nRemoves all keypoints within a specified border of the edge of an image. The border is specified in pixels as an ordered pair of integers: the first is the size of the horizontal border and the second is the size of the vertical border.\n\n###### Parameters\n• `imageSize: SpeedySize`. Image size, in pixels.\n• `borderSize: SpeedyVector2`. Border size in both x and y axes. Defaults to zero, meaning that no clipping takes place.\n###### Ports\nPort name Data type Description\n`\"in\"` Keypoints A set of keypoints.\n`\"out\"` Keypoints The clipped set of keypoints.\n##### Speedy.Keypoint.Mixer()\n\n`Speedy.Keypoint.Mixer(name?: string): SpeedyPipelineNodeKeypointMixer`\n\nMixes (merges) two sets of keypoints.\n\n###### Ports\nPort name Data type Description\n`\"in0\"` Keypoints A set of keypoints.\n`\"in1\"` Keypoints Another set of keypoints.\n`\"out\"` Keypoints The union of the two input sets.\n##### Speedy.Keypoint.Buffer()\n\n`Speedy.Keypoint.Buffer(name?: string): SpeedyPipelineNodeKeypointBuffer`\n\nA keypoint buffer outputs at time t the keypoints received at time t-1.\n\n###### Parameters\n• `frozen: boolean`. A frozen buffer discards the input, effectively increasing the buffering time. Defaults to `false`.\n###### Ports\nPort name Data type Description\n`\"in\"` Keypoints A set of keypoints at time t.\n`\"out\"` Keypoints The set of keypoints received at time t-1.\n##### Speedy.Keypoint.Multiplexer()\n\n`Speedy.Keypoint.Multiplexer(name?: string): SpeedyPipelineNodeKeypointMultiplexer`\n\nA keypoint multiplexer receives two sets of keypoints as input and outputs one of the them.\n\n###### Parameters\n• `port: number`. Which input set of keypoints should be redirected to the output: `0` or `1`? Defaults to `0`.\n###### Ports\nPort name Data type Description\n`\"in0\"` Image First set of keypoints.\n`\"in1\"` Image Second set of keypoints.\n`\"out\"` Image Either the first or the second set of keypoints, depending on the value of `port`.\n##### Speedy.Keypoint.Transformer()\n\n`Speedy.Keypoint.Transformer(name?: string): SpeedyPipelineNodeKeypointTransformer`\n\nApplies a transformation matrix to a set of keypoints.\n\n###### Parameters\n• `transform: SpeedyMatrix`. A 3x3 homography matrix. Defaults to the identity matrix.\n###### Ports\nPort name Data type Description\n`\"in\"` Keypoints A set of keypoints.\n`\"out\"` Keypoints A transformed set of keypoints.\n##### Speedy.Keypoint.SubpixelRefiner()\n\n`Speedy.Keypoint.SubpixelRefiner(name?: string): SpeedyPipelineNodeKeypointSubpixelRefiner`\n\nRefines the position of a set of keypoints down to the subpixel level.\n\nNote 1: filter the image to reduce the noise before working at the subpixel level.\n\nNote 2: if there are keypoints in multiple scales, make sure to provide a pyramid as input.\n\nNote 3: the position of the keypoints is stored as fixed-point. This representation may introduce a loss of accuracy (~0.1 pixel). This is probably enough already, but if you need higher accuracy, ignore the output keypoints and work with the displacement vectors instead. These are encoded as floating-point. In addition, use the upsampling methods.\n\n###### Parameters\n• `method: string`. The method to be used to compute the subpixel displacement. See the table below.\n• `maxIterations: number`. The maximum number of iterations used by methods `\"bicubic-upsample\"` and `\"bilinear-upsample\"`. Defaults to 6.\n• `epsilon: number`. The threshold used to determine when the subpixel displacement has reached convergence. Used with methods `\"bicubic-upsample\"` and `\"bilinear-upsample\"`. Defaults to 0.1 pixel.\n\nTable of methods:\n\nMethod Description\n`\"quadratic1d\"` Maximize a 1D parabola fit to a corner strength function. This is the default method.\n`\"taylor2d\"` Maximize a second-order 2D Taylor expansion of a corner strength function. Method `\"quadratic1d\"` seems to perform slightly better than this, but your mileage may vary.\n`\"bicubic-upsample\"` Iteratively upsample the image using bicubic interpolation in order to maximize a corner strength function. Repeat until convergence or until a maximum number of iterations is reached.\n`\"bilinear-upsample\"` Analogous to bicubic upsample, but this method uses bilinear interpolation instead.\n###### Ports\nPort name Data type Description\n`\"image\"` Image An image or pyramid from which you extracted the keypoints.\n`\"keypoints\"` Keypoints Input set of keypoints.\n`\"out\"` Keypoints Subpixel-refined output set of keypoints.\n`\"displacements\"` Vector2 Displacement vectors (output).\n##### Speedy.Keypoint.DistanceFilter()\n\n`Speedy.Keypoint.DistanceFilter(name?: string): SpeedyPipelineNodeKeypointDistanceFilter`\n\nGiven a set of pairs of keypoints, discard all pairs whose distance is above a user-defined threshold. This is useful for implementing bidirectional optical-flow.\n\nThe pairs of keypoints are provided as two separate sets, \"in\" and \"reference\". Keypoints that are kept will have their data extracted from the \"in\" set.\n\n###### Parameters\n• `threshold: number`. Distance threshold, given in pixels.\n###### Ports\nPort name Data type Description\n`\"in\"` Keypoints A set of keypoints.\n`\"reference\"` Keypoints A reference set of keypoints.\n`\"out\"` Keypoints Filtered set of keypoints.\n##### Speedy.Keypoint.HammingDistanceFilter()\n\n`Speedy.Keypoint.HammingDistanceFilter(name?: string): SpeedyPipelineNodeKeypointHammingDistanceFilter`\n\nGiven a set of pairs of keypoints with descriptors, discard all pairs whose Hamming distance between their descriptors is above a user-defined threshold.\n\nThe pairs of keypoints are provided as two separate sets, \"in\" and \"reference\". Keypoints that are kept will have their data extracted from the \"in\" set.\n\n###### Parameters\n• `threshold: number`. Distance threshold, an integer.\n###### Ports\nPort name Data type Description\n`\"in\"` Keypoints A set of keypoints.\n`\"reference\"` Keypoints A reference set of keypoints.\n`\"out\"` Keypoints Filtered set of keypoints.\n##### Speedy.Keypoint.Shuffler()\n\n`Speedy.Keypoint.Shuffler(name?: string): SpeedyPipelineNodeKeypointShuffler`\n\nShuffles the input keypoints, optionally clipping the output set.\n\n###### Parameters\n• `maxKeypoints: number`. Maximum number of keypoints of the output set. If unspecified, the number of keypoints of the output set will be the number of keypoints of the input set.\n###### Ports\nPort name Data type Description\n`\"in\"` Keypoints A set of keypoints.\n`\"out\"` Keypoints The input set of keypoints, shuffled and possibly clipped.\n\n#### Keypoint detection\n\nThe following nodes expect greyscale images as input. They output a set of keypoints.\n\n##### Speedy.Keypoint.Detector.FAST()\n\n`Speedy.Keypoint.Detector.FAST(name?: string): SpeedyPipelineNodeFASTKeypointDetector`\n\nFAST keypoint detector. Speedy implements the FAST-9,16 variant of the algorithm.\n\nTo use the multi-scale version of the algorithm, pass a pyramid as input, set the number of levels you want to scan and optionally set the scale factor. After scanning all levels and performing non-maximum suppression, the scale of the keypoints will be set by means of interpolation using the scale that maximizes a response measure and its adjacent scales.\n\n###### Parameters\n• `threshold: number`. An integer between `0` and `255`, inclusive. The larger the number, the \"stronger\" your keypoints will be. The smaller the number, the more keypoint you will get. Numbers between `20` and `50` are usually meaningful.\n• `levels: number`. The number of pyramid levels you want to use. Defaults to `1` (i.e., no pyramid is used). When using a pyramid, a value such as `7` is a reasonable choice.\n• `scaleFactor: number`. The scale factor between two consecutive levels of the pyramid. This is a value between `1` (exclusive) and `2` (inclusive). Defaults to the square root of two. This is applicable only when using a pyramid.\n• `capacity: number`. The maximum number of keypoints that can be detected by this node. The default capacity is `2048`. It can be no larger than `8192`.\n###### Ports\nPort name Data type Description\n`\"in\"` Image Greyscale image or pyramid.\n`\"out\"` Keypoints Detected keypoints.\n##### Speedy.Keypoint.Detector.Harris()\n\n`Speedy.Keypoint.Detector.Harris(name?: string): SpeedyPipelineNodeHarrisKeypointDetector`\n\nHarris corner detector. Speedy implements the Shi-Tomasi corner response for best results.\n\nTo use the multi-scale version of the algorithm, pass a pyramid as input, set the number of levels you want to scan and optionally set the scale factor. After scanning all levels and performing non-maximum suppression, the scale of the keypoints will be set by means of interpolation using the scale that maximizes a response measure and its adjacent scales.\n\n###### Parameters\n• `quality: number`. A value between `0` and `1` representing the minimum \"quality\" of the returned keypoints. Speedy will discard any keypoint whose score is lower than the specified percentage of the maximum keypoint score found in the image. A typical value for this parameter is `0.10` (10%).\n• `levels: number`. The number of pyramid levels you want to use. Defaults to `1` (i.e., no pyramid is used). When using a pyramid, a value such as `7` is a reasonable choice.\n• `scaleFactor: number`. The scale factor between two consecutive levels of the pyramid. This is a value between `1` (exclusive) and `2` (inclusive). Defaults to the square root of two. This is applicable only when using a pyramid.\n• `capacity: number`. The maximum number of keypoints that can be detected by this node. The default capacity is `2048`. It can be no larger than `8192`.\n###### Ports\nPort name Data type Description\n`\"in\"` Image Greyscale image or pyramid.\n`\"out\"` Keypoints Detected keypoints.\n\n#### Keypoint description\n\n##### Speedy.Keypoint.Descriptor.ORB()\n\n`Speedy.Keypoint.Descriptor.ORB(name?: string): SpeedyPipelineNodeORBKeypointDescriptor`\n\nORB descriptors. In order to improve robustness to noise, apply a Gaussian filter to the image before computing the descriptors.\n\n###### Ports\nPort name Data type Description\n`\"image\"` Image Input image. Must be greyscale.\n`\"keypoints\"` Keypoints Input keypoints.\n`\"out\"` Keypoints Keypoints with descriptors.\n###### Example\n```/*\n\nThis is our pipeline:\n\nImage ---> Convert to ---> Image ------> FAST corner -----> Keypoint ---> ORB ----------> Keypoint\nSource greyscale Pyramid detector Clipper descriptors Sink\n| ^\n| |\n+-------------------------> Gaussian ---------------------------+\nBlur\n*/\n\nconst pipeline = Speedy.Pipeline();\nconst source = Speedy.Image.Source();\nconst greyscale = Speedy.Filter.Greyscale();\nconst pyramid = Speedy.Image.Pyramid();\nconst fast = Speedy.Keypoint.Detector.FAST();\nconst blur = Speedy.Filter.GaussianBlur();\nconst clipper = Speedy.Keypoint.Clipper();\nconst descriptor = Speedy.Keypoint.Descriptor.ORB();\nconst sink = Speedy.Keypoint.Sink();\n\nsource.media = media;\nblur.kernelSize = Speedy.Size(9, 9);\nblur.sigma = Speedy.Vector2(2, 2);\nfast.threshold = 50;\nfast.levels = 8; // pyramid levels\nfast.scaleFactor = 1.19; // approx. 2^0.25\nclipper.size = 800; // up to how many features?\n\nsource.output().connectTo(greyscale.input());\n\ngreyscale.output().connectTo(pyramid.input());\npyramid.output().connectTo(fast.input());\nfast.output().connectTo(clipper.input());\nclipper.output().connectTo(descriptor.input('keypoints'));\n\ngreyscale.output().connectTo(blur.input());\nblur.output().connectTo(descriptor.input('image'));\n\ndescriptor.output().connectTo(sink.input());\n\npipeline.init(source, greyscale, pyramid, blur, fast, clipper, descriptor, sink);```\n\n#### Keypoint tracking\n\nKeypoint tracking is the process of tracking keypoints across a sequence of images. It allows you to get a sense of how keypoints are moving in time - i.e., how fast they are moving and where they are going.\n\nSpeedy uses sparse optical-flow algorithms to track keypoints in a video. Applications of optical-flow are numerous: you may get a sense of how objects are moving in a scene, estimate how the camera itself is moving, detect a transition in a film (a cut between two shots), and so on.\n\n##### Speedy.Keypoint.Tracker.LK()\n\n`Speedy.Keypoint.Tracker.LK(name?: string): SpeedyPipelineNodeLKKeypointTracker`\n\nPyramid-based LK optical-flow.\n\n###### Parameters\n• `windowSize: SpeedySize`. The size of the window to be used by the feature tracker. The algorithm will read neighbor pixels to determine the motion of a keypoint. You must specify a square window. Typical sizes include: 7x7, 11x11, 15x15 (use positive odd integers). Defaults to 11x11.\n• `levels: number`. Specifies how many pyramid levels will be used in the computation. The more levels you use, the faster the motions you can capture. Defaults to `3`.\n• `discardThreshold: number`. A threshold used to discard keypoints that are not \"good\" candidates for tracking. The higher the value, the more keypoints will be discarded. Defaults to `0.0001`.\n• `numberOfIterations: number`. Maximum number of iterations for computing the local optical-flow on each level of the pyramid. Defaults to `30`.\n• `epsilon: number`. An accuracy threshold used to stop the computation of the local optical-flow of any level of the pyramid. The local optical-flow is computed iteratively and in small increments. If the length of an increment is too small, we discard it. This property defaults to `0.01`.\n###### Ports\nPort name Data type Description\n`\"previousImage\"` Image Input image at time t-1. Must be greyscale.\n`\"nextImage\"` Image Input image at time t. Must be greyscale.\n`\"previousKeypoints\"` Keypoints Input keypoints at time t-1.\n`\"out\"` Keypoints Output keypoints at time t.\n`\"flow\"` Vector2 Flow vectors (output) at time t.\n\nNote: you need to provide pyramids as input if `levels > 1`.\n\nSoon!\n\n### Portals\n\nPortals let you create loops within a pipeline. They also let you transfer data between different pipelines.\n\nA portal is defined by a set of nodes: a portal sink and one or more portal sources. The portal sink receives data from a node of a pipeline, which is then read by the portal source(s). The portal source(s) feed(s) one or more pipelines. The portal nodes may or may not belong to the same pipeline.\n\n#### Image Portals\n\n##### Speedy.Image.Portal.Source()\n\n`Speedy.Image.Portal.Source(name?: string): SpeedyPipelineNodeImagePortalSource`\n\nCreate a source of an Image Portal.\n\n###### Parameters\n• `source: SpeedyPipelineNodeImagePortalSink`. A sink of an Image Portal.\n###### Ports\nPort name Data type Description\n`\"out\"` Image An image.\n##### Speedy.Image.Portal.Sink()\n\n`Speedy.Image.Portal.Sink(name?: string): SpeedyPipelineNodeImagePortalSink`\n\nCreate a sink of an Image Portal.\n\nNote: pyramids can't travel through portals at this time.\n\n###### Ports\nPort name Data type Description\n`\"in\"` Image An image.\n\n#### Keypoint Portals\n\n##### Speedy.Keypoint.Portal.Source()\n\n`Speedy.Keypoint.Portal.Source(name?: string): SpeedyPipelineNodeKeypointPortalSource`\n\nCreate a source of a Keypoint Portal.\n\n###### Parameters\n• `source: SpeedyPipelineNodeKeypointPortalSink`. A sink of a Keypoint Portal.\n###### Ports\nPort name Data type Description\n`\"out\"` Keypoints A set of keypoints.\n##### Speedy.Keypoint.Portal.Sink()\n\n`Speedy.Keypoint.Portal.Sink(name?: string): SpeedyPipelineNodeKeypointPortalSink`\n\nCreate a sink of a Keypoint Portal.\n\n###### Ports\nPort name Data type Description\n`\"in\"` Keypoints A set of keypoints.\n\n### Linear Algebra\n\nMatrix computations play a crucial role in computer vision applications. Speedy includes its own implementation of numerical linear algebra algorithms.\n\nMatrix operations are specified using a fluent interface that has been crafted to be easy to use and to mirror how we write matrix algebra using pen-and-paper.\n\nSince numerical algorithms may be computationally demanding, Speedy uses WebAssembly for extra performance. Most matrix-related routines are written in C language. Matrices are stored in column-major format. Typed Arrays are used for storage.\n\nThere are two basic classes you need to be aware of: `SpeedyMatrix` and `SpeedyMatrixExpr`. The latter represents a symbolic expression, whereas the former represents an actual matrix with data. A `SpeedyMatrix` is a `SpeedyMatrixExpr`. A `SpeedyMatrixExpr` may be evaluated to a `SpeedyMatrix`.\n\n#### Creating new matrices\n\n##### Speedy.Matrix()\n\n`Speedy.Matrix(rows: number, columns: number, entries?: number[]): SpeedyMatrix`\n\n`Speedy.Matrix(expr: SpeedyMatrixExpr): SpeedyMatrix`\n\nFirst form: create a new matrix with the specified size and entries.\n\nSecond form: synchronously evaluate a matrix expression and store the result in a new matrix.\n\n###### Arguments\n• `rows: number`. The number of rows of the matrix.\n• `columns: number, optional`. The number of columns of the matrix. If not specified, it will be set to `rows` (i.e., you'll get a square matrix).\n• `entries: number[], optional`. The elements of the matrix in column-major format. The length of this array must be `rows * columns`.\n• `expr: SpeedyMatrixExpr`. The matrix expression to be evaluated.\n###### Returns\n\nA new `SpeedyMatrix`.\n\n###### Example\n```//\n// We use the column-major format to specify\n// the elements of the new matrix. For example,\n// to create the 2x3 matrix (2 rows, 3 columns)\n// below, we first specify the elements of the\n// first column, then the elements of the second\n// column, and finally the elements of the third\n// column.\n//\n// M = [ 1 3 5 ]\n// [ 2 4 6 ]\n//\nconst mat = Speedy.Matrix(2, 3, [\n1,\n2,\n3,\n4,\n5,\n6\n]);\n\n// Alternatively, we may write the data in\n// column-major format in a compact form:\nconst mat1 = Speedy.Matrix(2, 3, [\n1, 2, // first column\n3, 4, // second column\n5, 6 // third column\n]);\n\n// Print the matrices to the console\nconsole.log(mat.toString());\nconsole.log(mat1.toString());```\n##### Speedy.Matrix.Zeros()\n\n`Speedy.Matrix.Zeros(rows: number, columns?: number): SpeedyMatrix`\n\nCreate a new matrix filled with zeros.\n\n###### Arguments\n• `rows: number`. The number of rows of the matrix.\n• `columns: number, optional`. The number of columns of the matrix. If not specified, it will be set to `rows` (square matrix).\n###### Returns\n\nA new `rows` x `columns` `SpeedyMatrix` filled with zeros.\n\n###### Example\n```// A 3x3 matrix filled with zeros\nconst zeros = Speedy.Matrix.Zeros(3);```\n##### Speedy.Matrix.Ones()\n\n`Speedy.Matrix.Ones(rows: number, columns?: number): SpeedyMatrix`\n\nCreate a new matrix filled with ones.\n\n###### Arguments\n• `rows: number`. The number of rows of the matrix.\n• `columns: number, optional`. The number of columns of the matrix. If not specified, it will be set to `rows` (square matrix).\n###### Returns\n\nA new `rows` x `columns` `SpeedyMatrix` filled with ones.\n\n##### Speedy.Matrix.Eye()\n\n`Speedy.Matrix.Eye(rows: number, columns?: number): SpeedyMatrix`\n\nCreate a new matrix with ones on the main diagonal and zeros elsewhere.\n\n###### Arguments\n• `rows: number`. The number of rows of the matrix.\n• `columns: number, optional`. The number of columns of the matrix. If not specified, it will be set to `rows` (identity matrix).\n###### Returns\n\nA new `SpeedyMatrix` with the specified configuration.\n\n###### Example\n```// A 3x3 identity matrix\nconst eye = Speedy.Matrix.Eye(3);```\n\n#### Matrix properties\n\n##### SpeedyMatrixExpr.rows\n\n`SpeedyMatrixExpr.rows: number, read-only`\n\nThe number of rows of the matrix expression.\n\n##### SpeedyMatrixExpr.columns\n\n`SpeedyMatrixExpr.columns: number, read-only`\n\nThe number of columns of the matrix expression.\n\n##### SpeedyMatrixExpr.dtype\n\n`SpeedyMatrixExpr.dtype: string, read-only`\n\nThe constant `\"float32\"`.\n\n##### SpeedyMatrix.data\n\n`SpeedyMatrix.data: ArrayBufferView, read-only`\n\nData storage.\n\n##### SpeedyMatrix.step\n\n`SpeedyMatrix.step0: number, read-only`\n\n`SpeedyMatrix.step1: number, read-only`\n\nStorage steps. The (`i`, `j`) entry of the matrix is stored at `data[i * step0 + j * step1]`.\n\n`SpeedyMatrix.read(): number[]`\n\nRead the entries of the matrix.\n\n###### Returns\n\nAn array containing the entries of the matrix in column-major format.\n\n###### Example\n```const mat = Speedy.Matrix(2, 2, [\n1,\n2,\n3,\n4\n]);\n\nconsole.log(entries); // [ 1, 2, 3, 4 ]```\n##### SpeedyMatrix.at()\n\n`SpeedyMatrix.at(row: number, column: number): number`\n\nRead a single entry of the matrix.\n\n###### Arguments\n• `row: number`. Index of the row of the desired element (0-based).\n• `column: number`. Index of the column of the desired element (0-based).\n###### Returns\n\nThe requested entry of the matrix, or a NaN if the entry is outside bounds.\n\n###### Example\n```const A = Speedy.Matrix(2, 2, [\n1,\n2,\n3,\n4\n]);\n\nconst a00 = A.at(0, 0); // first row, first column\nconst a10 = A.at(1, 0); // second row, first column\nconst a01 = A.at(0, 1); // first row, second column\nconst a11 = A.at(1, 1); // second row, second column\n\nconsole.log([ a00, a10, a01, a11 ]); // [ 1, 2, 3, 4 ]```\n##### SpeedyMatrixExpr.toString()\n\n`SpeedyMatrixExpr.toString(): string`\n\nConvert a matrix expression to a string. Entries will only be included if `this` expression is a `SpeedyMatrix`.\n\n###### Returns\n\nA string representation of the matrix expression.\n\n#### Writing to the matrices\n\n##### SpeedyMatrix.setTo()\n\n`SpeedyMatrix.setTo(expr: SpeedyMatrixExpr): SpeedyPromise<SpeedyMatrix>`\n\nEvaluate a matrix expression and store the result in `this` matrix.\n\n###### Arguments\n• `expr: SpeedyMatrixExpr`. A matrix expression.\n###### Returns\n\nA `SpeedyPromise` that resolves to `this` matrix after evaluating `expr`.\n\n###### Example\n```//\n//\n// A = [ 1 3 ] B = [ 4 2 ]\n// [ 2 4 ] [ 3 1 ]\n//\n// We'll set C to the sum A + B\n//\nconst matA = Speedy.Matrix(2, 2, [\n1, 2,\n3, 4\n]);\nconst matB = Speedy.Matrix(2, 2, [\n4, 3,\n2, 1\n]);\n\n// Set C = A + B\nconst matC = Speedy.Matrix.Zeros(2, 2);\nawait matC.setTo(matA.plus(matB));\n\n//\n// Print the result:\n//\n// C = [ 5 5 ]\n// [ 5 5 ]\n//\nconsole.log(matC.toString());```\n##### SpeedyMatrix.fill()\n\n`SpeedyMatrix.fill(value: number): SpeedyPromise<SpeedyMatrix>`\n\nFill `this` matrix with a scalar.\n\n###### Arguments\n• `value: number`. Scalar value.\n###### Returns\n\nA `SpeedyPromise` that resolves to `this` matrix.\n\n###### Example\n```// Create a 5x5 matrix filled with twos\nconst twos = Speedy.Matrix.Zeros(5);\nawait twos.fill(2);```\n\n#### Synchronous writing\n\nSpeedy provides synchronous writing methods for convenience.\n\n`Speedy.Matrix.ready(): SpeedyPromise<void>`\n\nThis method lets you know that the matrix routines are initialized and ready to be used (the WebAssembly routines need to be loaded before usage). You should only use the synchronous writing methods when the matrix routines are ready.\n\n###### Returns\n\nA `SpeedyPromise` that resolves immediately if the matrix routines are already initialized, or as soon as they are initialized.\n\n##### SpeedyMatrix.setToSync()\n\n`SpeedyMatrix.setToSync(expr: SpeedyMatrixExpr): SpeedyMatrix`\n\nSynchronously evaluate a matrix expression and store the result in `this` matrix.\n\n###### Arguments\n• `expr: SpeedyMatrixExpr`. A matrix expression.\n###### Returns\n\nReturns `this` matrix after setting it to the result of `expr`.\n\n###### Example\n```Speedy.Matrix.ready().then(() => {\nconst mat = Speedy.Matrix.Eye(3); // I := identity matrix\nconst pot = 3; // power-of-two\n\nfor(let i = 0; i < pot; i++)\nmat.setToSync(mat.plus(mat)); // mat := mat + mat\n\nconsole.log(mat.toString()); // mat will be (2^pot) * I\n});```\n##### SpeedyMatrix.fillSync()\n\n`SpeedyMatrix.fillSync(value: number): SpeedyMatrix`\n\nSynchronously fill `this` matrix with a scalar.\n\n###### Arguments\n• `value: number`. Scalar value.\n###### Returns\n\nReturns `this` matrix after filling it with the provided `value`.\n\n#### Access by block\n\nSpeedy lets you work with blocks of matrices. This is a very handy feature! Blocks share memory with the originating matrices. If you modify the entries of a block of a matrix M, you'll modify the corresponding entries of M. Columns and rows are examples of blocks.\n\n##### SpeedyMatrix.block()\n\n`SpeedyMatrix.block(firstRow: number, lastRow: number, firstColumn: number, lastColumn: number): SpeedyMatrix`\n\nExtract a `lastRow - firstRow + 1` x `lastColumn - firstColumn + 1` block from the matrix. All indices are 0-based. They are all inclusive. The memory of the matrix is shared with the block.\n\n###### Arguments\n• `firstRow: number`. Index of the first row (0-based).\n• `lastRow: number`. Index of the last row (0-based). Use `lastRow >= firstRow`.\n• `firstColumn: number`. Index of the first column (0-based).\n• `lastColumn: number`. Index of the last column (0-based). Use `lastColumn >= firstColumn`.\n###### Returns\n\nA new `SpeedyMatrix` representing the specified block.\n\n###### Example\n```//\n// We'll create the following 4x4 matrix:\n// (a dot represents a zero)\n//\n// [ 5 5 5 . ]\n// [ 5 5 5 . ]\n// [ 5 5 5 . ]\n// [ . . . . ]\n//\nconst mat = Speedy.Matrix.Zeros(4);\nawait mat.block(0, 2, 0, 2).fill(5);\nconsole.log(mat.toString());```\n##### SpeedyMatrix.column()\n\n`SpeedyMatrix.column(index: number): SpeedyMatrix`\n\nExtract a column of the matrix.\n\n###### Arguments\n• `index: number`. Index of the column (0-based).\n###### Returns\n\nA new `SpeedyMatrix` representing the specified column.\n\n###### Example\n```const mat = Speedy.Matrix(2, 3, [\n1,\n2,\n3,\n4,\n5,\n6\n]);\n\nconst firstColumn = mat.column(0); // [1, 2]^T\nconst secondColumn = mat.column(1); // [3, 4]^T\nconst thirdColumn = mat.column(2); // [5, 6]^T\n\nconsole.log(firstColumn.toString());\nconsole.log(secondColumn.toString());\nconsole.log(thirdColumn.toString());```\n##### SpeedyMatrix.row()\n\n`SpeedyMatrix.row(index: number): SpeedyMatrix`\n\nExtract a row of the matrix.\n\n###### Arguments\n• `index: number`. Index of the row (0-based).\n###### Returns\n\nA new `SpeedyMatrix` representing the specified row.\n\n###### Example\n```//\n// We'll create the following matrix:\n// [ 0 0 0 0 ]\n// [ 1 1 1 1 ]\n// [ 2 2 2 2 ]\n// [ 0 0 0 0 ]\n//\nconst mat = Speedy.Matrix.Zeros(4);\nawait mat.row(1).fill(1);\nawait mat.row(2).fill(2);\nconsole.log(mat.toString());```\n##### SpeedyMatrix.diagonal()\n\n`SpeedyMatrix.diagonal(): SpeedyMatrix`\n\nExtract the main diagonal of `this` matrix as a column vector.\n\n###### Returns\n\nA new `SpeedyMatrix` representing the main diagonal of `this` matrix.\n\n###### Example\n```//\n// We'll create the following matrix:\n// (a dot represents a zero)\n//\n// [ 5 . . . . ]\n// [ . 5 . . . ]\n// [ . . 5 . . ]\n// [ . . . . . ]\n// [ . . . . . ]\n//\nconst mat = Speedy.Matrix.Zeros(5); // create a 5x5 matrix filled with zeros\nconst submat = mat.block(0, 2, 0, 2); // extract 3x3 submatrix at the \"top-left\"\nconst diag = submat.diagonal(); // extract the diagonal of the submatrix\n\nawait diag.fill(5); // fill the diagonal of the submatrix with a constant\nconsole.log(mat.toString()); // print the entire matrix\n\n// Alternatively, we may use this compact form:\nawait mat.block(0, 2, 0, 2).diagonal().fill(5);```\n\n#### Elementary operations\n\n##### SpeedyMatrixExpr.transpose()\n\n`SpeedyMatrixExpr.transpose(): SpeedyMatrixExpr`\n\nTranspose `this` matrix expression.\n\n###### Returns\n\nA `SpeedyMatrixExpr` representing the tranpose of `this` matrix expression.\n\n###### Example\n```// Create a 2x3 matrix\nconst mat = Speedy.Matrix(2, 3, [\n1, 2, // first column\n3, 4, // second column\n5, 6 // third column\n]);\n\n// We'll store the transpose of mat in matT\nconst matT = Speedy.Matrix.Zeros(mat.columns, mat.rows);\nawait matT.setTo(mat.transpose());\n\n// Print the matrix and its transpose\nconsole.log(mat.toString());\nconsole.log(matT.toString());```\n##### SpeedyMatrixExpr.plus()\n\n`SpeedyMatrixExpr.plus(expr: SpeedyMatrixExpr): SpeedyMatrixExpr`\n\nCompute the sum between `this` matrix expression and `expr`. Both expressions must have the same shape.\n\n###### Arguments\n• `expr: SpeedyMatrixExpr`. Another matrix expression.\n###### Returns\n\nA `SpeedyMatrixExpr` representing the sum between `this` matrix expression and `expr`.\n\n###### Example\n```const matA = Speedy.Matrix(3, 3, [\n1, 2, 3,\n4, 5, 6,\n7, 8, 9\n]);\nconst ones = Speedy.Matrix.Ones(3);\n\n// set B = A + 1\nconst matB = Speedy.Matrix.Zeros(3);\nawait matB.setTo(matA.plus(ones));```\n##### SpeedyMatrixExpr.minus()\n\n`SpeedyMatrixExpr.minus(expr: SpeedyMatrixExpr): SpeedyMatrixExpr`\n\nCompute the difference between `this` matrix expression and `expr`. Both expressions must have the same shape.\n\n###### Arguments\n• `expr: SpeedyMatrixExpr`. Another matrix expression.\n###### Returns\n\nA `SpeedyMatrixExpr` representing the difference between `this` matrix expression and `expr`.\n\n##### SpeedyMatrixExpr.times()\n\n`SpeedyMatrixExpr.times(expr: SpeedyMatrixExpr): SpeedyMatrixExpr`\n\n`SpeedyMatrixExpr.times(scalar: number): SpeedyMatrixExpr`\n\nMatrix multiplication.\n\nIn the first form, compute the matrix multiplication between `this` matrix expression and `expr`. The shape of `expr` must be compatible with the shape of `this` matrix expression.\n\nIn the second form, multiply `this` matrix expression by a `scalar`.\n\n###### Arguments\n• `expr: SpeedyMatrixExpr`. Matrix expression.\n• `scalar: number`. A number.\n###### Returns\n\nA `SpeedyMatrixExpr` representing the result of the multiplication.\n\n###### Example\n```const col = Speedy.Matrix(3, 1, [0, 5, 2]);\nconst row = Speedy.Matrix(1, 3, [1, 2, 3]);\n\nconst dot = row.times(col); // 1x1 matrix expression: inner product\nconst out = col.times(row); // 3x3 matrix expression: outer product\nconst len = col.transpose().times(col); // 1x1 matrix expression: squared length of col\n\nconst mat = Speedy.Matrix.Zeros(1);\nawait mat.setTo(len); // evaluate len\nconsole.log(mat.read()); // 29 = 0*0 + 5*5 + 2*2```\n##### SpeedyMatrixExpr.compMult()\n\n`SpeedyMatrixExpr.compMult(expr: SpeedyMatrixExpr): SpeedyMatrixExpr`\n\nCompute the component-wise multiplication between `this` matrix expression and `expr`. Both matrices must have the same shape.\n\n###### Arguments\n• `expr: SpeedyMatrixExpr`. Matrix expression.\n###### Returns\n\nA `SpeedyMatrixExpr` representing the component-wise multiplication.\n\n##### SpeedyMatrixExpr.inverse()\n\n`SpeedyMatrixExpr.inverse(): SpeedyMatrixExpr`\n\nCompute the inverse of `this` matrix expression. Make sure it's square.\n\n###### Returns\n\nA `SpeedyMatrixExpr` representing the inverse of `this` matrix expression.\n\n##### SpeedyMatrixExpr.ldiv()\n\n`SpeedyMatrixExpr.ldiv(expr: SpeedyMatrixExpr): SpeedyMatrixExpr`\n\nLeft division `this` \\ `expr`. This is equivalent to solving a system of linear equations Ax = b, where A is `this` and b is `expr` (in a least squares sense if A is not square). The number of rows of `this` must be greater or equal than its number of columns. `expr` must be a column vector.\n\n###### Arguments\n• `expr: SpeedyMatrixExpr`. Matrix expression.\n###### Returns\n\nA `SpeedyMatrixExpr` representing the left division.\n\n#### Systems of equations\n\n##### Speedy.Matrix.solve()\n\n`Speedy.Matrix.solve(solution: SpeedyMatrix, A: SpeedyMatrix, b: SpeedyMatrix, options?: object): SpeedyPromise<SpeedyMatrix>`\n\nSolve a system of linear equations Ax = b for x, the `solution`, where `A` is a n x n square matrix, `b` is a n x 1 column vector and `solution` is a n x 1 column vector of unknowns. n is the number of equations and the number of unknowns.\n\n###### Arguments\n• `solution: SpeedyMatrix`. The output column vector.\n• `A: SpeedyMatrix`. A square matrix.\n• `b: SpeedyMatrix`. A column vector.\n• `options: object, optional`. Options to be passed to the solver. Available keys:\n• `method: string`. One of the following: `\"qr\"`. Defaults to `\"qr\"`.\n###### Returns\n\nA `SpeedyPromise` that resolves to `solution`.\n\n###### Example\n```//\n// We'll solve the following system of equations:\n// y - z = 9\n// y + z = 6\n//\n// Let's write it in matrix form:\n// [ 1 -1 ] [ y ] = [ 9 ]\n// [ 1 1 ] [ z ] [ 6 ]\n//\n// The code below solves Ax = b for x, where\n// x = (y, z) is the column vector of unknowns.\n//\nconst A = Speedy.Matrix(2, 2, [\n1, 1, // first column\n-1, 1 // second column\n]);\nconst b = Speedy.Matrix(2, 1, [\n9, 6 // column vector\n]);\n\n// Solve Ax = b for x\nconst solution = Speedy.Matrix.Zeros(2, 1);\nawait Speedy.Matrix.solve(solution, A, b);\n\n// get the result\nconsole.log(solution.read()); // [ 7.5, -1.5 ]```\n##### Speedy.Matrix.ols()\n\n`Speedy.Matrix.ols(solution: SpeedyMatrix, A: SpeedyMatrix, b: SpeedyMatrix): SpeedyPromise<SpeedyMatrix>`\n\nOrdinary least squares.\n\nGiven an overdetermined system of linear equations Ax = b, where `A` is a m x n matrix, `b` is a m x 1 column vector and `solution` x is a n x 1 column vector of unknowns, find a `solution` x that minimizes the Euclidean norm of the residual b - Ax.\n\nm is the number of equations and n is the number of unknowns. We require m >= n.\n\n###### Arguments\n• `solution: SpeedyMatrix`. The output column vector.\n• `A: SpeedyMatrix`. A matrix.\n• `b: SpeedyMatrix`. A column vector.\n• `options: object, optional`. Options to be passed to the solver. Available keys:\n• `method: string`. One of the following: `\"qr\"`. Defaults to `\"qr\"`.\n###### Returns\n\nA `SpeedyPromise` that resolves to `solution`.\n\n#### Matrix factorization\n\n##### Speedy.Matrix.qr()\n\n`Speedy.Matrix.qr(Q: SpeedyMatrix, R: SpeedyMatrix, A: SpeedyMatrix, options?: object): SpeedyPromise<void>`\n\nCompute a QR decomposition of a m x n matrix `A` using Householder reflectors. `Q` will be orthogonal and `R` will be upper-triangular. We require m >= n.\n\n###### Arguments\n• `Q: SpeedyMatrix`. Output matrix (m x n if reduced, m x m if full).\n• `R: SpeedyMatrix`. Output matrix (n x n if reduced, m x n if full).\n• `A: SpeedyMatrix`. The matrix to be decomposed.\n• `options: object, optional`. A configuration object that accepts the following keys:\n• `mode: string`. Either `\"full\"` or `\"reduced\"`. Defaults to `\"reduced\"`.\n###### Returns\n\nReturns a `SpeedyPromise` that resolves as soon as the computation is complete.\n\n###### Example\n```// We'll find a QR decomposition of this matrix\nconst A = Speedy.Matrix(3, 3, [\n0, 1, 0, // first column\n1, 1, 0, // second column\n1, 2, 3, // third column\n]);\n\n// Compute a QR decomposition of A\nconst Q = Speedy.Matrix.Zeros(3, 3);\nconst R = Speedy.Matrix.Zeros(3, 3);\nawait Speedy.Matrix.qr(Q, R, A);\n\n// Print the result\nconsole.log(Q.toString());\nconsole.log(R.toString());\n\n// Check the answer (A = QR)\nconst QR = await Speedy.Matrix.Zeros(Q.rows, R.columns).setTo(Q.times(R));\nconsole.log(QR.toString());```\n\n### Geometric transformations\n\n#### Perspective transformation\n\n##### Speedy.Matrix.applyPerspectiveTransform()\n\n`Speedy.Matrix.applyPerspectiveTransform(dest: SpeedyMatrix, src: SpeedyMatrix, transform: SpeedyMatrix): SpeedyPromise<SpeedyMatrix>`\n\nApply a perspective `transform` to a set of 2D points described by `src` and store the results in `dest`.\n\n###### Arguments\n• `dest: SpeedyMatrix`. A 2 x n output matrix.\n• `src: SpeedyMatrix`. A 2 x n matrix encoding a set of n points, one per column.\n• `transform: SpeedyMatrix`. A 3x3 homography matrix.\n###### Returns\n\nA `SpeedyPromise` that resolves to `dest`.\n\n###### Example\n```const transform = Speedy.Matrix(3, 3, [\n3, 0, 0, // first column\n0, 2, 0, // second column\n2, 1, 1, // third column\n]);\n\nconst src = Speedy.Matrix(2, 4, [\n0, 0,\n1, 0,\n1, 1,\n0, 1,\n]);\n\nconst dest = Speedy.Matrix.Zeros(src.rows, src.columns);\nawait Speedy.Matrix.applyPerspectiveTransform(dest, src, transform);\nconsole.log(dest.toString());\n\n//\n// Result:\n// [ 2 5 5 2 ]\n// [ 1 1 3 3 ]\n//```\n##### Speedy.Matrix.perspective()\n\n`Speedy.Matrix.perspective(homography: SpeedyMatrix, src: SpeedyMatrix, dest: SpeedyMatrix): SpeedyPromise<SpeedyMatrix>`\n\nCompute a `homography` matrix using four correspondences of points.\n\n###### Arguments\n• `homography: SpeedyMatrix`. A 3x3 output matrix.\n• `src: SpeedyMatrix`. A 2x4 matrix with the coordinates of four points (one per column) representing the corners of the source space.\n• `dest: SpeedyMatrix`. A 2x4 matrix with the coordinates of four points (one per column) representing the corners of the destination space.\n###### Returns\n\nA `SpeedyPromise` that resolves to `homography`.\n\n###### Example\n```const src = Speedy.Matrix(2, 4, [\n0, 0, // first point\n1, 0, // second point\n1, 1, // third point\n0, 1, // fourth point\n]);\n\nconst dest = Speedy.Matrix(2, 4, [\n0, 0,\n3, 0,\n3, 2,\n0, 2,\n]);\n\nconst homography = Speedy.Matrix.Zeros(3, 3);\nawait Speedy.Matrix.perspective(homography, src, dest);\n\nconsole.log(homography.toString());```\n##### Speedy.Matrix.findHomography()\n\n`Speedy.Matrix.findHomography(homography: SpeedyMatrix, src: SpeedyMatrix, dest: SpeedyMatrix, options?: object): SpeedyPromise<SpeedyMatrix>`\n\nCompute a `homography` matrix using a set of n >= 4 correspondences of points, possibly with noise.\n\n###### Arguments\n• `homography: SpeedyMatrix`. A 3x3 output matrix.\n• `src: SpeedyMatrix`. A 2 x n matrix with the coordinates of n points (one per column) representing the corners of the source space.\n• `dest: SpeedyMatrix`. A 2 x n matrix with the coordinates of n points (one per column) representing the corners of the destination space.\n• `options: object, optional`. A configuration object.\n• `method: string`. The method to be employed to compute the homography (see the table of methods below).\n\nTable of methods:\n\nMethod Description\n`\"default\"` Normalized Direct Linear Transform (DLT). All points will be used to estimate the homography. Use this method if your data set is not polluted with outliers.\n`\"pransac\"` PRANSAC is a variant of RANSAC with bounded runtime that is designed for real-time tasks. It is able to reject outliers in the data set.\n\nTable of parameters:\n\nParameter Supported methods Description\n`reprojectionError: number` `\"pransac\"` A threshold, measured in pixels, that lets Speedy decide if a data point is an inlier or an outlier for a given model. A data point is an inlier for a given model if the model maps its `src` coordinates near its `dest` coordinates (i.e., if the Euclidean distance is not greater than the threshold). A data point is an outlier if it's not an inlier. Defaults to 3 pixels.\n`mask: SpeedyMatrix` `\"pransac\"` An optional output matrix of shape 1 x n. Its i-th entry will be set to 1 if the i-th data point is an inlier for the best model found by the method, or 0 if it's an outlier.\n`numberOfHypotheses: number` `\"pransac\"` A positive integer specifying the number of models that will be generated and tested. The best model found by the method will be refined and then returned. If your inlier ratio is \"high\", this parameter can be set to a \"low\" number, making the algorithm run even faster. Defaults to 500.\n`bundleSize: number` `\"pransac\"` A positive integer specifying the number of data points to be tested against all viable models before the set of viable models gets cut in half, over and over again. Defaults to 100.\n###### Returns\n\nA `SpeedyPromise` that resolves to `homography`.\n\n###### Example\n```//\n// Map random points\n// from [0,100] x [0,100]\n// to [200,600] x [200,600]\n//\nconst numPoints = 50;\nconst noiseLevel = 2;\n\nconst transform = x => 4*x + 200; // simulated model\nconst randCoord = () => 100 * Math.random(); // in [0, 100)\nconst randNoise = () => (Math.random() - 0.5) * noiseLevel;\n\nconst srcCoords = new Array(numPoints * 2).fill(0).map(() => randCoord());\nconst dstCoords = srcCoords.map(x => transform(x) + randNoise());\n\nconst src = Speedy.Matrix(2, numPoints, srcCoords);\nconst dst = Speedy.Matrix(2, numPoints, dstCoords);\n\nconst homography = Speedy.Matrix.Zeros(3, 3);\nawait Speedy.Matrix.findHomography(homography, src, dst, {\nmethod: \"pransac\",\nreprojectionError: 1\n});\n\nconsole.log('homography:', homography.toString());\n\n// Now let's test the homography using a few test points.\n// The points need to be mapped in line with our simulated model (see above)\nconst tstCoords = Speedy.Matrix(2, 5, [\n0, 0,\n100, 0,\n100, 100,\n0, 100,\n50, 50,\n]);\n\nconst chkCoords = Speedy.Matrix.Zeros(2, 5);\nawait Speedy.Matrix.applyPerspectiveTransform(chkCoords, tstCoords, homography);\nconsole.log(chkCoords.toString());```\n\n#### Affine transformation\n\n##### Speedy.Matrix.applyAffineTransform()\n\n`Speedy.Matrix.applyAffineTransform(dest: SpeedyMatrix, src: SpeedyMatrix, transform: SpeedyMatrix): SpeedyPromise<SpeedyMatrix>`\n\nApply an affine `transform` to a set of 2D points described by `src` and store the results in `dest`.\n\n###### Arguments\n• `dest: SpeedyMatrix`. A 2 x n output matrix.\n• `src: SpeedyMatrix`. A 2 x n matrix encoding a set of n points, one per column.\n• `transform: SpeedyMatrix`. A 2x3 affine transformation matrix.\n###### Returns\n\nA `SpeedyPromise` that resolves to `dest`.\n\n###### Example\n```const transform = Speedy.Matrix(2, 3, [\n3, 0, // first column\n0, 2, // second column\n2, 1, // third column\n]);\n\nconst src = Speedy.Matrix(2, 4, [\n0, 0,\n1, 0,\n1, 1,\n0, 1,\n]);\n\nconst dest = Speedy.Matrix.Zeros(src.rows, src.columns);\nawait Speedy.Matrix.applyAffineTransform(dest, src, transform);\nconsole.log(dest.toString());\n\n//\n// Result:\n// [ 2 5 5 2 ]\n// [ 1 1 3 3 ]\n//```\n##### Speedy.Matrix.affine()\n\n`Speedy.Matrix.affine(transform: SpeedyMatrix, src: SpeedyMatrix, dest: SpeedyMatrix): SpeedyPromise<SpeedyMatrix>`\n\nCompute an `affine` transform using three correspondences of points.\n\n###### Arguments\n• `transform: SpeedyMatrix`. A 2x3 output matrix.\n• `src: SpeedyMatrix`. A 2x3 matrix with the coordinates of three points (one per column) representing the corners of the source space.\n• `dest: SpeedyMatrix`. A 2x3 matrix with the coordinates of three points (one per column) representing the corners of the destination space.\n###### Returns\n\nA `SpeedyPromise` that resolves to `transform`.\n\n###### Example\n```const src = Speedy.Matrix(2, 3, [\n0, 0, // first point\n1, 0, // second point\n1, 1, // third point\n]);\n\nconst dest = Speedy.Matrix(2, 3, [\n0, 0,\n3, 0,\n3, 2,\n]);\n\nconst transform = Speedy.Matrix.Zeros(2, 3);\nawait Speedy.Matrix.affine(transform, src, dest);\n\nconsole.log(transform.toString());```\n##### Speedy.Matrix.findAffineTransform()\n\n`Speedy.Matrix.findAffineTransform(transform: SpeedyMatrix, src: SpeedyMatrix, dest: SpeedyMatrix, options?: object): SpeedyPromise<SpeedyMatrix>`\n\nCompute an affine `transform` using a set of n >= 3 correspondences of points, possibly with noise.\n\n###### Arguments\n• `transform: SpeedyMatrix`. A 2x3 output matrix.\n• `src: SpeedyMatrix`. A 2 x n matrix with the coordinates of n points (one per column) representing the corners of the source space.\n• `dest: SpeedyMatrix`. A 2 x n matrix with the coordinates of n points (one per column) representing the corners of the destination space.\n• `options: object, optional`. A configuration object.\n• `method: string`. The method to be employed to compute the affine transform (see the table of methods below).\n\nTable of methods:\n\nTable of parameters:\n\n###### Returns\n\nA `SpeedyPromise` that resolves to `transform`.\n\n### Geometric Utilities\n\n#### 2D Vectors\n\n##### Speedy.Vector2()\n\n`Speedy.Vector2(x: number, y: number): SpeedyVector2`\n\nCreates a new 2D vector with the given coordinates.\n\n###### Arguments\n• `x: number`. The x-coordinate of the vector.\n• `y: number`. The y-coordinate of the vector.\n###### Returns\n\nA new `SpeedyVector2` instance.\n\n###### Example\n`const zero = Speedy.Vector2(0, 0);`\n##### Speedy.Vector2.Sink()\n\n`Speedy.Vector2.Sink(name?: string): SpeedyPipelineNodeVector2Sink`\n\nCreates a sink of 2D vectors using the specified name. If the name is not specified, Speedy will call this node `\"vec2\"`. An array of `SpeedyVector2` objects will be exported from the pipeline.\n\n###### Parameters\n• `turbo: boolean`. Accelerate GPU-CPU transfers. You'll get the data from the previous frame. Defaults to `false`.\n###### Ports\nPort name Data type Description\n`\"in\"` Vector2 A set of 2D vectors to be exported from the pipeline.\n##### SpeedyVector2.x\n\n`SpeedyVector2.x: number`\n\nThe x-coordinate of the vector.\n\n##### SpeedyVector2.y\n\n`SpeedyVector2.y: number`\n\nThe y-coordinate of the vector.\n\n##### SpeedyVector2.plus()\n\n`SpeedyVector2.plus(offset: SpeedyVector2): SpeedyVector2`\n\n###### Returns\n\nA new vector corresponding to `this` + `offset`.\n\n##### SpeedyVector2.minus()\n\n`SpeedyVector2.minus(offset: SpeedyVector2): SpeedyVector2`\n\nVector subtraction.\n\n###### Returns\n\nA new vector corresponding to `this` - `offset`.\n\n##### SpeedyVector2.times()\n\n`SpeedyVector2.times(scalar: number): SpeedyVector2`\n\nMultiply a vector by a scalar.\n\n###### Returns\n\nA new vector corresponding to `this` * `scalar`.\n\n##### SpeedyVector2.length()\n\n`SpeedyVector2.length(): number`\n\nComputes the length of the vector (Euclidean norm).\n\n###### Returns\n\nThe length of the vector.\n\n###### Example\n```const v = Speedy.Vector2(3, 4);\n\nconsole.log('Coordinates', v.x, v.y);\nconsole.log('Length', v.length()); // 5```\n##### SpeedyVector2.normalized()\n\n`SpeedyVector2.normalized(): SpeedyVector2`\n\nReturns a normalized version of this vector.\n\n###### Returns\n\nA new vector with the same direction as the original one and with length equal to one.\n\n##### SpeedyVector2.dot()\n\n`SpeedyVector2.dot(v: SpeedyVector2): number`\n\nDot product.\n\n###### Arguments\n• `v: SpeedyVector2`. A vector.\n###### Returns\n\nThe dot product between the two vectors.\n\n##### SpeedyVector2.distanceTo()\n\n`SpeedyVector2.distanceTo(v: SpeedyVector2): number`\n\nComputes the distance between two vectors.\n\n###### Arguments\n• `v: SpeedyVector2`. A vector.\n###### Returns\n\nThe Euclidean distance between the two vectors.\n\n###### Example\n```const u = Speedy.Vector2(1, 0);\nconst v = Speedy.Vector2(5, 0);\n\nconsole.log(u.distanceTo(v)); // 4```\n##### SpeedyVector2.toString()\n\n`SpeedyVector2.toString(): string`\n\nGet a string representation of the vector.\n\n###### Returns\n\nA string representation of the vector.\n\n##### SpeedyVector2.equals()\n\n`SpeedyVector2.equals(v: SpeedyVector2): boolean`\n\nEquality comparison.\n\n###### Returns\n\nReturns `true` if the coordinates of `this` are equal to the coordinates of `v`, or `false` otherwise.\n\n#### 2D Points\n\n##### Speedy.Point2()\n\n`Speedy.Point2(x: number, y: number): SpeedyPoint2`\n\nCreates a new 2D point with the given coordinates.\n\n###### Arguments\n• `x: number`. The x-coordinate of the point.\n• `y: number`. The y-coordinate of the point.\n###### Returns\n\nA new `SpeedyPoint2` instance.\n\n###### Example\n`const p = Speedy.Point2(5, 10);`\n##### SpeedyPoint2.x\n\n`SpeedyPoint2.x: number`\n\nThe x-coordinate of the point.\n\n##### SpeedyPoint2.y\n\n`SpeedyPoint2.y: number`\n\nThe y-coordinate of the point.\n\n##### SpeedyPoint2.plus()\n\n`SpeedyPoint2.plus(v: SpeedyVector2): SpeedyPoint2`\n\nAdds a vector to this point.\n\n###### Arguments\n• `v: SpeedyVector2`. A 2D vector.\n###### Returns\n\nA new `SpeedyPoint2` instance corresponding to this point translated by `v`.\n\n##### SpeedyPoint2.minus()\n\n`SpeedyPoint2.minus(p: SpeedyPoint2): SpeedyVector2`\n\nSubtracts point `p` from this.\n\n###### Arguments\n• `p: SpeedyPoint2`. A 2D point.\n###### Returns\n\nA new `SpeedyVector2` instance such that `p` plus that vector equals this point.\n\n##### SpeedyPoint2.equals()\n\n`SpeedyPoint2.equals(p: SpeedyPoint2): boolean`\n\nEquality comparison.\n\n###### Returns\n\nReturns `true` if the coordinates of `this` are equal to the coordinates of `p`, or `false` otherwise.\n\n#### 2D Size\n\n##### Speedy.Size()\n\n`Speedy.Size(width: number, height: number): SpeedySize`\n\nCreates a new object that represents the size of a rectangle.\n\n###### Arguments\n• `width: number`. A non-negative number.\n• `height: number`. A non-negative number.\n###### Returns\n\nA new `SpeedySize` instance.\n\n###### Example\n`const size = Speedy.Size(640, 360);`\n##### SpeedySize.width\n\n`SpeedySize.width: number`\n\nWidth property.\n\n##### SpeedySize.height\n\n`SpeedySize.height: number`\n\nHeight property.\n\n##### SpeedySize.equals()\n\n`SpeedySize.equals(anotherSize: SpeedySize): boolean`\n\nChecks if two size objects have the same dimensions.\n\n###### Returns\n\nReturns `true` if the dimensions of `this` and `anotherSize` are equal.\n\n##### SpeedySize.toString()\n\n`SpeedySize.toString(): string`\n\nConvert to string.\n\n###### Returns\n\nA string representation of the object.\n\n### Extras\n\n#### Promises\n\nSpeedy includes its own implementation of Promises, called SpeedyPromises. SpeedyPromises can interoperate with standard ES6 Promises and are based on the Promises/A+ specification. The main difference between SpeedyPromises and standard ES6 Promises is that, under certain circunstances, SpeedyPromises can be made to run faster than ES6 Promises.\n\nSpeedyPromises are specially beneficial when you have a chain of them. When (and if) their \"turbocharged\" mode is invoked, they will adopt a special (non-standard) behavior and skip the microtask queue when settling promises in a chain. This will save you a few milliseconds. While \"a few milliseconds\" doesn't sound much in terms of standard web development, for a real-time library such as Speedy it means a lot. Simply put, we're squeezing out performance. SpeedyPromises are used internally by the library.\n\n##### Speedy.Promise\n\n`Speedy.Promise: Function`\n\nUsed to create a new `SpeedyPromise` object.\n\n###### Example\n```let promise = new Speedy.Promise((resolve, reject) => {\nsetTimeout(resolve, 2000);\n});\n\npromise.then(() => {\nconsole.log(`The SpeedyPromise is now fulfilled.`);\n}).catch(() => {\nconsole.log(`The SpeedyPromise is now rejected.`);\n}).finally(() => {\nconsole.log(`The SpeedyPromise is now settled.`);\n});```\n\n#### Settings\n\n##### Speedy.Settings.powerPreference\n\n`Speedy.Settings.powerPreference: \"default\" | \"low-power\" | \"high-performance\"`\n\nExperimental. The desired power preference for the WebGL context. This option should be set before creating any pipelines. The browser uses this setting as a hint to balance rendering performance and battery life (especially on mobile devices).\n\n##### Speedy.Settings.gpuPollingMode\n\n`Speedy.Settings.gpuPollingMode: \"raf\" | \"asap\"`\n\nExperimental. GPU polling mode. `\"asap\"` has slightly better performance than `\"raf\"`, at the cost of higher CPU usage.\n\n#### Utilities\n\n##### Speedy.version\n\n`Speedy.version: string, read-only`\n\nThe version of the library.\n\n##### Speedy.fps\n\n`Speedy.fps: number, read-only`\n\nSpeedy includes a frames per second (FPS) counter for testing purposes. It will be created as soon as you access it.\n\n###### Example\n`console.log(Speedy.fps);`\n##### Speedy.isSupported()\n\n`Speedy.isSupported(): boolean`\n\nChecks if Speedy is supported in this machine & browser.\n\n###### Returns\n\nReturns a boolean telling whether or not Speedy is supported in the client environment.\n\n###### Example\n```if(!Speedy.isSupported())\nalert('This application is not supported in this browser. Please use a different browser.');```\n\n## Keywords\n\n### Install\n\n`npm i speedy-vision`\n\n### Repository\n\ngithub.com/alemart/speedy-vision\n\n### Homepage\n\ngithub.com/alemart/speedy-vision\n\n0\n\n0.9.0-wip"
]
| [
null,
"https://static.npmjs.com/255a118f56f5346b97e56325a1217a16.svg",
null,
"https://raw.githubusercontent.com/alemart/speedy-vision/HEAD/assets/demo-bestfeatures.gif",
null,
"https://raw.githubusercontent.com/alemart/speedy-vision/HEAD/assets/network-example.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6297927,"math_prob":0.89117736,"size":73349,"snap":"2023-14-2023-23","text_gpt3_token_len":18779,"char_repetition_ratio":0.20252232,"word_repetition_ratio":0.2508162,"special_character_ratio":0.236145,"punctuation_ratio":0.21603547,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97268456,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-23T01:28:36Z\",\"WARC-Record-ID\":\"<urn:uuid:1cddf49b-c6cf-4dd1-827b-5b1a7ef7105c>\",\"Content-Length\":\"312530\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cd4a98a9-3ac4-43e4-8ade-fcc12a48285c>\",\"WARC-Concurrent-To\":\"<urn:uuid:fcdf253c-618e-4724-b801-ee460de2c397>\",\"WARC-IP-Address\":\"104.16.92.83\",\"WARC-Target-URI\":\"https://www.npmjs.com/package/speedy-vision\",\"WARC-Payload-Digest\":\"sha1:GENYH7BMT34UF3IC5EBWK2KWI2IZT2LK\",\"WARC-Block-Digest\":\"sha1:KEE64YRFNFV4KK2GDFXYP76NRZOL7NEH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296944606.5_warc_CC-MAIN-20230323003026-20230323033026-00267.warc.gz\"}"} |
https://www.cnblogs.com/jpfss/p/9928747.html | [
"# Java泛型详解:<T>和Class<T>的使用。泛型类,泛型方法的详细使用实例\n\n## 一、引入\n\n### 1、泛型是什么\n\n[java] view plain\n1. ArrayList<String> strList = new ArrayList<String>();\n2. ArrayList<Integer> intList = new ArrayList<Integer>();\n3. ArrayList<Double> doubleList = new ArrayList<Double>();\n\n### 2、没有泛型会怎样\n\n[java] view plain\n1. //设置Integer类型的点坐标\n2. class IntegerPoint{\n3. private Integer x ; // 表示X坐标\n4. private Integer y ; // 表示Y坐标\n5. public void setX(Integer x){\n6. this.x = x ;\n7. }\n8. public void setY(Integer y){\n9. this.y = y ;\n10. }\n11. public Integer getX(){\n12. return this.x ;\n13. }\n14. public Integer getY(){\n15. return this.y ;\n16. }\n17. }\n18. //设置Float类型的点坐标\n19. class FloatPoint{\n20. private Float x ; // 表示X坐标\n21. private Float y ; // 表示Y坐标\n22. public void setX(Float x){\n23. this.x = x ;\n24. }\n25. public void setY(Float y){\n26. this.y = y ;\n27. }\n28. public Float getX(){\n29. return this.x ;\n30. }\n31. public Float getY(){\n32. return this.y ;\n33. }\n34. }\n\n[java] view plain\n1. class ObjectPoint{\n2. private Object x ;\n3. private Object y ;\n4. public void setX(Object x){\n5. this.x = x ;\n6. }\n7. public void setY(Object y){\n8. this.y = y ;\n9. }\n10. public Object getX(){\n11. return this.x ;\n12. }\n13. public Object getY(){\n14. return this.y ;\n15. }\n16. }\n\n[java] view plain\n1. ObjectPoint integerPoint = new ObjectPoint();\n2. integerPoint.setX(new Integer(100));\n3. Integer integerX=(Integer)integerPoint.getX();\n\n[java] view plain\n1. integerPoint.setX(new Integer(100));\n\n[java] view plain\n1. Integer integerX=(Integer)integerPoint.getX();\n\n[java] view plain\n1. ObjectPoint floatPoint = new ObjectPoint();\n2. floatPoint.setX(new Float(100.12f));\n3. Float floatX = (Float)floatPoint.getX();\n\n[java] view plain\n1. ObjectPoint floatPoint = new ObjectPoint();\n2. floatPoint.setX(new Float(100.12f));\n3. String floatX = (String)floatPoint.getX();\n\n[java] view plain\n1. String floatX = (String)floatPoint.getX();\n\n## 二、各种泛型定义及使用\n\n### 1、泛型类定义及使用\n\n[java] view plain\n1. //定义\n2. class Point<T>{// 此处可以随便写标识符号\n3. private T x ;\n4. private T y ;\n5. public void setX(T x){//作为参数\n6. this.x = x ;\n7. }\n8. public void setY(T y){\n9. this.y = y ;\n10. }\n11. public T getX(){//作为返回值\n12. return this.x ;\n13. }\n14. public T getY(){\n15. return this.y ;\n16. }\n17. };\n18. //IntegerPoint使用\n19. Point<Integer> p = new Point<Integer>() ;\n20. p.setX(new Integer(100)) ;\n21. System.out.println(p.getX());\n22.\n23. //FloatPoint使用\n24. Point<Float> p = new Point<Float>() ;\n25. p.setX(new Float(100.12f)) ;\n26. System.out.println(p.getX());",
null,
"(1)、定义泛型:Point<T>\n\n(2)类中使用泛型\n\n[java] view plain\n1. //定义变量\n2. private T x ;\n3. //作为返回值\n4. public T getX(){\n5. return x ;\n6. }\n7. //作为参数\n8. public void setX(T x){\n9. this.x = x ;\n10. }\n(3)使用泛型类\n\n[java] view plain\n1. //IntegerPoint使用\n2. Point<Integer> p = new Point<Integer>() ;\n3. p.setX(new Integer(100)) ;\n4. System.out.println(p.getX());\n5.\n6. //FloatPoint使用\n7. Point<Float> p = new Point<Float>() ;\n8. p.setX(new Float(100.12f)) ;\n9. System.out.println(p.getX());\n\n[java] view plain\n1. Point<String> p = new Point<String>() ;\n\n[java] view plain\n1. //IntegerPoint使用\n2. Point<Integer> p = new Point<Integer>() ;\n3. //FloatPoint使用\n4. Point<Float> p = new Point<Float>() ;\n\n[java] view plain\n1. public class ArrayList<E>{\n2. …………\n3. }\n\n(4)使用泛型实现的优势\n\n(1)、不用强制转换\n\n[java] view plain\n1. //使用Object作为返回值,要强制转换成指定类型\n2. Float floatX = (Float)floatPoint.getX();\n3. //使用泛型时,不用强制转换,直接出来就是String\n4. System.out.println(p.getVar());\n(2)、在settVar()时如果传入类型不对,编译时会报错",
null,
"### 2、多泛型变量定义及字母规范\n\n(1)、多泛型变量定义\n\n[java] view plain\n1. class MorePoint<T,U>{\n2. }\n\n[java] view plain\n1. class MorePoint<T,U,A,B,C>{\n2. }\n\n[java] view plain\n1. class MorePoint<T,U> {\n2. private T x;\n3. private T y;\n4.\n5. private U name;\n6.\n7. public void setX(T x) {\n8. this.x = x;\n9. }\n10. public T getX() {\n11. return this.x;\n12. }\n13. …………\n14. public void setName(U name){\n15. this.name = name;\n16. }\n17.\n18. public U getName() {\n19. return this.name;\n20. }\n21. }\n22. //使用\n23. MorePoint<Integer,String> morePoint = new MorePoint<Integer, String>();\n24. morePoint.setName(\"harvic\");\n25. Log.d(TAG, \"morPont.getName:\" + morePoint.getName());\n\n(2)、字母规范\n\n[java] view plain\n1. class Point<T>{\n2. …………\n3. }\n\n• E — Element,常用在java Collection里,如:List<E>,Iterator<E>,Set<E>\n• K,V — Key,Value,代表Map的键值对\n• N — Number,数字\n• T — Type,类型,如String,Integer等等\n\n### 3、泛型接口定义及使用\n\n[java] view plain\n1. interface Info<T>{ // 在接口上定义泛型\n2. public T getVar() ; // 定义抽象方法,抽象方法的返回值就是泛型类型\n3. public void setVar(T x);\n4. }\n\n(1)、使用方法一:非泛型类\n\n[java] view plain\n1. class InfoImpl implements Info<String>{ // 定义泛型接口的子类\n2. private String var ; // 定义属性\n3. public InfoImpl(String var){ // 通过构造方法设置属性内容\n4. this.setVar(var) ;\n5. }\n6. @Override\n7. public void setVar(String var){\n8. this.var = var ;\n9. }\n10. @Override\n11. public String getVar(){\n12. return this.var ;\n13. }\n14. }\n15.\n16. public class GenericsDemo24{\n17. public void main(String arsg[]){\n18. InfoImpl i = new InfoImpl(\"harvic\");\n19. System.out.println(i.getVar()) ;\n20. }\n21. };\n\n[java] view plain\n1. class InfoImpl implements Info<String>{\n2. …………\n3. }\n\n[java] view plain\n1. public class GenericsDemo24{\n2. public void main(String arsg[]){\n3. InfoImpl i = new InfoImpl(\"harvic\");\n4. System.out.println(i.getVar()) ;\n5. }\n6. };\n(2)、使用方法二:泛型类\n\n[java] view plain\n1. interface Info<T>{ // 在接口上定义泛型\n2. public T getVar() ; // 定义抽象方法,抽象方法的返回值就是泛型类型\n3. public void setVar(T var);\n4. }\n5. class InfoImpl<T> implements Info<T>{ // 定义泛型接口的子类\n6. private T var ; // 定义属性\n7. public InfoImpl(T var){ // 通过构造方法设置属性内容\n8. this.setVar(var) ;\n9. }\n10. public void setVar(T var){\n11. this.var = var ;\n12. }\n13. public T getVar(){\n14. return this.var ;\n15. }\n16. }\n17. public class GenericsDemo24{\n18. public static void main(String arsg[]){\n19. InfoImpl<String> i = new InfoImpl<String>(\"harvic\");\n20. System.out.println(i.getVar()) ;\n21. }\n22. };\n\n[java] view plain\n1. class InfoImpl<T> implements Info<T>{ // 定义泛型接口的子类\n2. private T var ; // 定义属性\n3. public InfoImpl(T var){ // 通过构造方法设置属性内容\n4. this.setVar(var) ;\n5. }\n6. public void setVar(T var){\n7. this.var = var ;\n8. }\n9. public T getVar(){\n10. return this.var ;\n11. }\n12. }\n\n[java] view plain\n1. public class GenericsDemo24{\n2. public static void main(String arsg[]){\n3. Info<String> i = new InfoImpl<String>(\"harvic\");\n4. System.out.println(i.getVar()) ;\n5. }\n6. };\n\n[java] view plain\n1. class InfoImpl<T,K,U> implements Info<U>{ // 定义泛型接口的子类\n2. private U var ;\n3. private T x;\n4. private K y;\n5. public InfoImpl(U var){ // 通过构造方法设置属性内容\n6. this.setVar(var) ;\n7. }\n8. public void setVar(U var){\n9. this.var = var ;\n10. }\n11. public U getVar(){\n12. return this.var ;\n13. }\n14. }\n\n[java] view plain\n1. public class GenericsDemo24{\n2. public void main(String arsg[]){\n3. InfoImpl<Integer,Double,String> i = new InfoImpl<Integer,Double,String>(\"harvic\");\n4. System.out.println(i.getVar()) ;\n5. }\n6. }\n\n### 4、泛型函数定义及使用\n\n[java] view plain\n1. public class StaticFans {\n2. //静态函数\n3. public static <T> void StaticMethod(T a){\n4. Log.d(\"harvic\",\"StaticMethod: \"+a.toString());\n5. }\n6. //普通函数\n7. public <T> void OtherMethod(T a){\n8. Log.d(\"harvic\",\"OtherMethod: \"+a.toString());\n9. }\n10. }\n\n[java] view plain\n1. //静态方法\n4.\n5. //常规方法\n6. StaticFans staticFans = new StaticFans();\n7. staticFans.OtherMethod(new Integer(123));//使用方法一\n8. staticFans.<Integer>OtherMethod(new Integer(123));//使用方法二",
null,
"[java] view plain\n\n[java] view plain\n1. StaticFans staticFans = new StaticFans();\n2. staticFans.OtherMethod(new Integer(123));//使用方法一\n3. staticFans.<Integer>OtherMethod(new Integer(123));//使用方法二\n\n[java] view plain\n1. public static <T> List<T> parseArray(String response,Class<T> object){\n2. List<T> modelList = JSON.parseArray(response, object);\n3. return modelList;\n4. }\n\n### 5、其它用法:Class<T>类传递及泛型数组\n\n(1)、使用Class<T>传递泛型类Class对象\n\n[java] view plain\n1. public static List<SuccessModel> parseArray(String response){\n2. List<SuccessModel> modelList = JSON.parseArray(response, SuccessModel.class);\n3. return modelList;\n4. }\n\n[java] view plain\n1. public class SuccessModel {\n2. private boolean success;\n3.\n4. public boolean isSuccess() {\n5. return success;\n6. }\n7.\n8. public void setSuccess(boolean success) {\n9. this.success = success;\n10. }\n11. }\n\n[java] view plain\n1. public static List<SuccessModel> parseArray(String response){\n2. List<SuccessModel> modelList = JSON.parseArray(response, SuccessModel.class);\n3. return modelList;\n4. }\n\n[java] view plain\n1. public static <T> List<T> parseArray(String response,Class<T> object){\n2. List<T> modelList = JSON.parseArray(response, object);\n3. return modelList;\n4. }\n\n[java] view plain\n1. public final class Class<T> implements Serializable {\n2. …………\n3. }\n\n(2)、定义泛型数组\n\n[java] view plain\n1. //定义\n2. public static <T> T[] fun1(T...arg){ // 接收可变参数\n3. return arg ; // 返回泛型数组\n4. }\n5. //使用\n6. public static void main(String args[]){\n7. Integer i[] = fun1(1,2,3,4,5,6) ;\n8. Integer[] result = fun1(i) ;\n9. }\n\n[java] view plain\n1. public static <T> T[] fun1(T...arg){ // 接收可变参数\n2. return arg ; // 返回泛型数组\n3. }\n\n# 下面是我自己实际使用泛型的几个实例。\n\n## 关于泛型类的使用实例\n\nimport lombok.Data; @Datapublic class MultiObject<T> { \t/** * 成功状态 */ private boolean success; /** * 异常 */ private Exception ex; /** * 数据 */ private T obj;\t\tpublic MultiObject() {\t} \t/**\t * 注意:当传入的泛型是Boolean时,就和第三个构造函数冲突了。\t */\tpublic MultiObject(boolean success) {\t\tthis.success = success;\t}\t\tpublic MultiObject(Exception ex) {\t\tthis.success = false;\t\tthis.ex = ex;\t}\t\tpublic MultiObject(T value) {\t\tthis.success = true;\t\tthis.obj = value;\t}}\n\n1,成功与否。对应属性success。\n2,异常信息。对应属性ex。若是操作正常执行,则就不在意这个属性的值。\n3,我们操作的最终目的对象。对应属性obj。\n\n## 关于泛型方法的使用实例\n\n1,一个是泛型表示某一个类型的参数。为的传递某一类的参数对象\n2,另一个则是传递的不是参数,而是代表Class,某一个类。\n\n /** * 将Json字符串信息转换成对应的Java对象 * * @param json json字符串对象 * @param c 对应的类型 */ public static <T> T parseJsonToObj(String json, Class<T> c) { try { JSONObject jsonObject = JSONObject.parseObject(json); return JSON.toJavaObject(jsonObject, c); } catch (Exception e) { LOG.error(e.getMessage()); } return null; }\n\nCollector collectorObj = JSONUtils.parseJsonToObj(collector, Collector.class);Flume flume = JSONUtils.parseJsonToObj(flumeJson, Flume.class);Probe probe = JSONUtils.parseJsonToObj(probeJson, Probe.class);\n\n /** * @param dest 目的集合 * @param source 源集合 * @param <T> 集合参数的类型 */ private static <T> void listAddAllAvoidNPE(List<T> dest, List<T> source) { if (source == null) { return; } dest.addAll(source); } private static <T> void listAddAvoidNull(List<T> dest, T source) { if (source == null) { return; } dest.add(source); }\n\n\tList<ProbeObject> list = Lists.newArrayList();\tlistAddAllAvoidNPE(list, decoder.getProperties());\n\nposted @ 2018-11-08 14:10 星朝 阅读(71761) 评论(8编辑 收藏"
]
| [
null,
"https://img-blog.csdn.net/20151116222626759",
null,
"https://img-blog.csdn.net/20151116222950942",
null,
"https://img-blog.csdn.net/20151117082843103",
null
]
| {"ft_lang_label":"__label__zh","ft_lang_prob":0.95732635,"math_prob":0.8375387,"size":7518,"snap":"2020-34-2020-40","text_gpt3_token_len":5266,"char_repetition_ratio":0.08810221,"word_repetition_ratio":0.0631068,"special_character_ratio":0.1915403,"punctuation_ratio":0.09744214,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96792406,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T09:41:32Z\",\"WARC-Record-ID\":\"<urn:uuid:5f582840-87b3-4339-b7b7-94c5d38682c0>\",\"Content-Length\":\"376372\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7682d426-db1c-4f72-8ce4-bdc963c2f08a>\",\"WARC-Concurrent-To\":\"<urn:uuid:4d6681d3-02ea-4d36-8e33-6d8ad3fd42eb>\",\"WARC-IP-Address\":\"118.31.180.41\",\"WARC-Target-URI\":\"https://www.cnblogs.com/jpfss/p/9928747.html\",\"WARC-Payload-Digest\":\"sha1:W762WVSXB4QBOQFDKMYHNLHX3MDGLZ6V\",\"WARC-Block-Digest\":\"sha1:DOOF4T37IIPRAVJVHPIOIVFZKS5H72OM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400274441.60_warc_CC-MAIN-20200927085848-20200927115848-00240.warc.gz\"}"} |
http://alexminnaar.com/2015/02/14/deep-learning-basics.html | [
"Alex Minnaar\n\n## Deep Learning Basics: Neural Networks, Backpropagation and Stochastic Gradient Descent\n\n14 Feb 2015\n\nIn the last couple of years Deep Learning has received a great deal of press. This press is not without warrant - Deep Learning has produced stat-of-the-art results in many computer vision and speech processing tasks. However, I believe that the press has given people the impression that Deep Learning is some kind of imprenetrable, esoteric field that can only be understood by academics. In this blog post I want to try to erase that impression and provide a practical overview of some of Deep Learning’s basic concepts.\n\nAt its core, Deep Learning is a class of of neural network models. That is, a model with an input layer, an output layer, and an arbitrary number of hidden layers. These layers are made up of neurons or neural units. They are called neurons because they share some similarities with the behaviour of the neurons present in the human brain (though this comparison has drawn a lot of criticism from neuroscientists). For our purposes, we can think of a neuron as a nonlinear function of the weighted sum of its inputs. Since the neuron is really the most basic part of any Deep Learning model it is a good place to start.\n\n## The Single Neuron Model\n\nA neuron is a function that maps an input vector $$\\{x_1,...,x_K\\}$$ to a scalar output $$y$$ via a weight vector $$\\{w_1,...,w_K\\}$$ and a nonlinear function $$f$$.",
null,
"The function $$f$$ takes a weighted sum of the inputs and returns $$y$$.\n\n$y=f(\\sum_{i=0}^Kw_ix_i)=f(\\mathbf{w^Tx})$\n\nOften an additional element is added to the input vector that is always equal to $$1$$ with a corresponding additional weight element which acts as a bias. The function $$f$$ is called the link function which provides the nonlinearity between the input and output. A common choice for this link function is the logistic function which is defined as\n\n$f(u)=\\frac{1}{1+e^{u}}$\n\nWith the appropriate substitutions the final formula for the single neuron model becomes\n\n$y=\\frac{1}{1+e^{\\mathbf{w^Tx}}}$\n\nIf you plot the logistic function,",
null,
"you can see that it is smooth and differentiable and bound between $$0$$ and 1\\$. We shall see that these are two important properties. The derivative of the logistic function is simply\n\n$\\frac{d f(u)}{d u}=f(u)(1-f(u))=f(u)f(-u)$\n\nThis derivative will be used when we learn the weight vector $$\\bf{w}$$ via stochastic gradient descent.\n\nLike any optimization problem, our goal is to minimize an objective function. Traditionally, the objective function measures the difference between the actual output $$t$$ and the predicted output $$f(\\mathbf{w^Tx})$$. In this case we will be using the squared loss function\n\n$E=\\frac{1}{2}(t - y)^2=\\frac{1}{2}(t-f(\\mathbf{w^Tx}))^2$\n\nWe want to find the weights $$\\mathbf{w}$$ such that the above objective is minimized. We do this with stochastic gradient descent (SGD). In SGD we iteratively update our weight parameters in the direction of the gradient of the loss function until we have reached a minimum. Unlike traditional gradient descent, we do not use the entire dataset to compute the gradient at each iteration. Instead, at each iteration we randomly select a single data point from our dataset and move in the direction of the gradient with respect to that data point. Obviously this is only an approximation of the true gradient but it can be proven that we will eventually reach the minimum by following this noisey gradient. There are several advantages to using stochastic gradient descent over traditional gradient descent.\n\n1. Gradient descent requires loading the entire dataset into main memory. If your dataset is large this can be problematic. Stochastic gradient descent only requires one data point at a time (or sometimes a minibatch of data points) which is much less memory intensive.\n2. Most datasets have redundancy in them. Traditional gradient descent requires one full pass over the data until an update is made. Due to redundancy, a meaningful update can often be made without iterating over the entire dataset as with stochastic gradient descent.\n3. As a consequence of the previous point, stochastic gradient descent can often converge faster than traditional gradient descent. It is also guaranteed to find the global minimum if the loss function is convex.\n\nOur objective function $$E$$ is already defined in terms of a single data point so let’s procede to compute its gradient with respect to an aribtrary element of our weight vector $$w_i$$.\n\n\\begin{align} \\frac{\\partial E}{\\partial w_i} &= \\frac{\\partial E}{\\partial y} \\cdot \\frac{\\partial y}{\\partial u} \\cdot\\frac{\\partial u}{\\partial w_i} \\\\ &= (y-t) \\cdot y(1-y) \\cdot x_i \\end{align}\n\nNow we are able to obtain the stochastic gradient descent update equation (in vector notation)\n\n$\\mathbf{w}^{new}=\\mathbf{w}^{old}- \\eta \\cdot (y-t) \\cdot y(1-y) \\cdot \\mathbf{x}$\n\nWhere $$\\eta>0$$ is the step size. As stated previously, $$(\\mathbf{x},y)$$ data points are sequentially fed into this update equation until the weights $$\\mathbf{w}$$ converge to their optimal value. This is how we use stochastic gradient descent to learn the weights for the single neuron model.\n\nWhat we just did is also known as logistic regression and if we had replaced our logistic function with a unit step function we would have made what is known as a perceptron! Now let’s extend this relatively simple model to something a bit more complex…\n\n## The Neural Network\n\nA neural network consists of an input layer, output layer, and hidden layer. Our input layer consists of the input vector $$\\mathbf{x}=\\{x_1,...,x_K\\}$$. The hidden layer consists of a vector of $$N$$ neurons $$\\mathbf{h}=\\{h_1,...,h_N\\}$$. Finally there is an output layer with one neuron for every element of the output vector $$\\mathbf{y}=\\{y_1,...,y_M\\}$$. Every element in the input layer is connected to every neuron in the hidden layer with $$w_{ki}$$ indicating the weight associated with the connection between the $$k^{th}$$ input element and the $$i^{th}$$ hidden neuron. The same connection structure is present between the hidden and output layers with $$w'_{ij}$$ indicating the weight associated with the connection between the $$i^{th}$$ hidden neuron and the $$j^{th}$$ output neuron. This network structure is better illustrated in the below diagram.",
null,
"It is helpful to think of the weight $$w_{ki}$$ as the the $$(k,i)^{th}$$ entry in a $$K \\times N$$ weight matrix $$\\mathbf{W}$$ and similarly weight $$w'_{ij}$$ as the $$(i,j)^{th}$$ entry in a $$N \\times M$$ weight matrix $$\\mathbf{W'}$$. The output of each neuron in the hidden and output layer is computed in the exact same way as before. It is simply the logistic function applied to the weighted sum of the neuron’s inputs. For example, the output of an arbitrary neuron in the hidden layer $$h_i$$ is\n\n$h_i=f(u_i)=f(\\sum^K_{k=1}w_{ki}x_k)$\n\nand similarly for the output of an arbitrary output neuron $$y_j$$ is\n\n$y_j=f(u'_j)=f(\\sum^N_{i=1}w'_{ij}h_i)$\n\nThe objective function is also the same as before except now it is summed over all elements in the output layer.\n\n$E=\\frac{1}{2}\\sum^M_{j=1}(y_j-t_j)^2$\n\nUnlike before, we need to construct update equations for both sets of weights - the input-to-hidden layer weights $$w_{ki}$$ and the hidden-to-output weights $$w'_{ij}$$. In order to do this we need to compute the gradient of our objective function $$E$$ with respect to $$w_{ki}$$ as well as the gradient with respect to $$w'_{ij}$$. We must start with the gradient with respect to $$w'_{ij}$$ (the hidden-to-output weights) and we shall see why later. In order to compute $$\\frac{\\partial E}{\\partial{w'_{ij}}}$$ we must recall our high-school calculus, specifically the chain rule. From the chain rule, we must first take the derivative of $$E$$ with respect to $$y'_j$$. Then we must take the derivative of $$y_j$$ (i.e. the logistic function) with respect to $$w'_{ij}$$ which needs yet another application of the chain rule. We first take the derivative of the logistic function with respect to its input $$u'_j$$, then finally we can take the derivative of this input with respect to $$w'_{ij}$$ and we arrive at our desired value. This process is clearly defined below.\n\nFrom the chain rule,\n\n$\\frac{\\partial E}{\\partial w'_{ij}}=\\frac{\\partial E}{\\partial y_j} \\cdot \\frac{\\partial y_j}{\\partial u'_j} \\cdot \\frac{\\partial u'_j}{\\partial w'_{ij}}$\n\nThe derivative of $$E$$ with respect to $$y_j$$ is simply,\n\n$\\frac{\\partial E}{\\partial y_j}=y_j-t_j$\n\nFrom the last section we saw that the derivative of the logistic function $$f$$ with respect to its input $$u$$ is $$f(u)(1-f(u))$$. If we apply this we get,\n\n$\\frac{\\partial y_j}{\\partial u'_j}=y_j(1-y_j)$\n\nwhere $$y_j=f(u'_j)$$. Next we compute the derivative of $$u'_j=\\sum^N_{i=1}w'_{ij}h_i$$ with respect to a particular $$w'_{ij}$$ which is simply $$h_i$$. So, after making the appropriate subsitutions, we get\n\n$\\frac{\\partial E}{\\partial w'_{ij}}=(y_j-t_j) \\cdot y_j(1-y_j) \\cdot h_i$\n\nWith this gradient we can construct the update equation for $$w'_{ij}$$\n\n$w'^{new}_{ij}=w'^{old}_{ij} - \\eta \\cdot (y_j-t_j) \\cdot y_j(1-y_j) \\cdot h_i$\n\nNow let’s turn our attention to the gradient of the objective function with respect to the input-to-hidden weights $$w_{ki}$$. As we shall see, this gradient has already been partially computed when we computed the previous gradient.\n\nUsing the chain rule, the full gradient is\n\n$\\frac{\\partial E}{\\partial w_{ki}}=\\sum^M_{j=1}(\\frac{\\partial E}{\\partial y_j}\\cdot \\frac{\\partial y_j}{\\partial u'_j} \\cdot \\frac{\\partial u'_j}{\\partial h_i} )\\cdot \\frac{\\partial h_i}{\\partial u_i} \\cdot \\frac{\\partial u_i}{\\partial w_{ki}}$\n\nThe sum is due to the fact that the hidden unit that $$w_{ki}$$ connects to is itself connected to every output unit, thus each of these gradients need to be taken into account as well. We have already computed both $$\\frac{\\partial E}{\\partial y_j}$$ and $$\\frac{\\partial y_j}{\\partial u'_j}$$ which means that\n\n$\\frac{\\partial E}{\\partial y_j}\\cdot \\frac{\\partial y_j}{\\partial u'_j} = (y_j-t_j) \\cdot y_j(1-y_j)$\n\nNow we need to compute the remaining derivatives $$\\frac{\\partial u'_j}{\\partial h_i}$$, $$\\frac{\\partial h_i}{\\partial u_i}$$, and $$\\frac{\\partial u_i}{\\partial w_{ki}}$$. So let’s do just that.\n\n$\\frac{\\partial u'_j}{\\partial h_i}=\\frac{\\partial \\sum^N_{i=1}w'_{ij}h_i}{\\partial h_i}=w'_{ij}$\n\nand, again using the derivative of the logistic function\n\n$\\frac{\\partial h_i}{\\partial u_i}=h_i(1-h_i)$\n\nand finally\n\n$$\\frac{\\partial u_i}{\\partial w_{ki}}=\\frac{\\partial \\sum^K_{k=1}w_{ki}x_k}{\\partial w_{ki}}=x_k$$ƒ\n\nAfter making the appropriate substitutions we arrive at the gradient\n\n$\\frac{\\partial E}{\\partial w_{ki}}=\\sum^M_{j=1}[(y_j-t_j) \\cdot y_j(1-y_j) \\cdot w'_{ij}] \\cdot h_i(1-h_i) \\cdot x_k$\n\nAnd the update equation becomes\n\n$w^{new}_{ki}=w^{old}_{ki} - \\eta \\cdot \\sum^M_{j=1}[(y_j-t_j) \\cdot y_j(1-y_j) \\cdot w'_{ij}] \\cdot h_i(1-h_i) \\cdot x_k$\n\nThis process is known as backpropagation because we begin with the final output error $$y_j-t_j$$ for the output neuron $$j$$ and this error gets propagated backwards throughout the network in order to update the weights.\n\n## Wrapping Everything Up\n\nIn this blog post we started with the simple single neuron model and we learned the model weights by computing the gradient of the objective function and using it in the stochastic gradient descent update equation. Then we moved on to the slightly more complicated neural network model. In this case we computed the required gradients using a procedure known as backpropagation and we again used these gradients in the SGD update equations. True Deep Learning models either contain many more hidden layers or neurons in different configurations but they still adhere to the basic principles described here. Hopefully this post has made Deep Learning seem like a more understandable and less daunting field of machine learning."
]
| [
null,
"http://alexminnaar.com/assets/neuron.png",
null,
"http://alexminnaar.com/assets/logistic.png",
null,
"http://alexminnaar.com/assets/neural_network.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.84297603,"math_prob":0.99936646,"size":11936,"snap":"2022-40-2023-06","text_gpt3_token_len":3077,"char_repetition_ratio":0.16694602,"word_repetition_ratio":0.0328341,"special_character_ratio":0.26918566,"punctuation_ratio":0.062075216,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999881,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T05:55:24Z\",\"WARC-Record-ID\":\"<urn:uuid:f5dbd1ad-39a2-482f-8e36-61374a8b92d3>\",\"Content-Length\":\"16616\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:16613117-eed9-4075-ac29-d487c033bf86>\",\"WARC-Concurrent-To\":\"<urn:uuid:91e29f11-361f-4c05-8896-51f7a89b4158>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"http://alexminnaar.com/2015/02/14/deep-learning-basics.html\",\"WARC-Payload-Digest\":\"sha1:EJWWF7PF3QCERIS5VRCQ7AV5X6X6GOKV\",\"WARC-Block-Digest\":\"sha1:FNNFHEVCCJ376MPTU56RQI4GJSKH35RL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500384.17_warc_CC-MAIN-20230207035749-20230207065749-00015.warc.gz\"}"} |
https://numpy.org.cn/user/c-info/python-as-glue.html | [
"# # 使用Python作为粘合剂\n\nThere is no conversation more boring than the one where everybody agrees.\n\n— Michel de Montaigne\n\nDuct tape is like the force. It has a light side, and a dark side, and it holds the universe together.\n\n— Carl Zwanzig\n\n## # f2py\n\nF2py允许您自动构建一个扩展模块,该模块与Fortran 77/90/95代码中的例程相连。它能够解析Fortran 77/90/95代码并自动为它遇到的子程序生成Python签名,或者你可以通过构造一个接口定义文件(或修改f2py生成的文件)来指导子程序如何与Python接口。 )。\n\n### # 创建基本扩展模块的源\n\nC\nC\nDOUBLE COMPLEX A(*)\nDOUBLE COMPLEX B(*)\nDOUBLE COMPLEX C(*)\nINTEGER N\nDO 20 J = 1, N\nC(J) = A(J)+B(J)\n20 CONTINUE\nEND\n\n\nf2py -m add add.f\n\n\n### # 创建编译的扩展模块\n\nf2py -c -m add add.f\n\n\n>>> import add\nRequired arguments:\na : input rank-1 array('D') with bounds (*)\nb : input rank-1 array('D') with bounds (*)\nc : input rank-1 array('D') with bounds (*)\nn : input int\n\n\n### # 改善基本界面\n\n>>> add.zadd([1,2,3], [1,2], [3,4], 1000)\n\n\nf2py -h add.pyf -m add add.f\n\n\nsubroutine zadd(a,b,c,n) ! in :add:add.f\ndouble complex dimension(*) :: a\ndouble complex dimension(*) :: b\ndouble complex dimension(*) :: c\ninteger :: n\n\n\nsubroutine zadd(a,b,c,n) ! in :add:add.f\ndouble complex dimension(n) :: a\ndouble complex dimension(n) :: b\ndouble complex intent(out),dimension(n) :: c\ninteger intent(hide),depend(a) :: n=len(a)\n\n\nintent指令,intent(out)用于告诉c作为输出变量的f2py,并且应该在传递给底层代码之前由接口创建。intent(hide)指令告诉f2py不允许用户指定变量n,而是从大小中获取它a。depend(a)指令必须告诉f2py n的值取决于输入a(因此在创建变量a之前它不会尝试创建变量n)。\n\nf2py -c add.pyf add.f95\n\n\n>>> import add\nRequired arguments:\na : input rank-1 array('D') with bounds (n)\nb : input rank-1 array('D') with bounds (n)\nReturn objects:\nc : rank-1 array('D') with bounds (n)\n\n\n>>> add.zadd([1,2,3],[4,5,6])\narray([ 5.+0.j, 7.+0.j, 9.+0.j])\n\n\n### # 在Fortran源中插入指令\n\nC\nC\nCF2PY INTENT(OUT) :: C\nCF2PY INTENT(HIDE) :: N\nCF2PY DOUBLE COMPLEX :: A(N)\nCF2PY DOUBLE COMPLEX :: B(N)\nCF2PY DOUBLE COMPLEX :: C(N)\nDOUBLE COMPLEX A(*)\nDOUBLE COMPLEX B(*)\nDOUBLE COMPLEX C(*)\nINTEGER N\nDO 20 J = 1, N\nC(J) = A(J) + B(J)\n20 CONTINUE\nEND\n\n\nf2py -c -m add add.f\n\n\n### # 过滤示例\n\nSUBROUTINE DFILTER2D(A,B,M,N)\nC\nDOUBLE PRECISION A(M,N)\nDOUBLE PRECISION B(M,N)\nINTEGER N, M\nCF2PY INTENT(OUT) :: B\nCF2PY INTENT(HIDE) :: N\nCF2PY INTENT(HIDE) :: M\nDO 20 I = 2,M-1\nDO 40 J=2,N-1\nB(I,J) = A(I,J) +\n$(A(I-1,J)+A(I+1,J) +$ A(I,J-1)+A(I,J+1) )*0.5D0 +\n$(A(I-1,J-1) + A(I-1,J+1) +$ A(I+1,J-1) + A(I+1,J+1))*0.25D0\n40 CONTINUE\n20 CONTINUE\nEND\n\n\nf2py -c -m filter filter.f\n\n\n### # 从Python中调用f2py\n\nf2py程序是用Python编写的,可以在代码中运行,以便在运行时编译Fortran代码,如下所示:\n\nfrom numpy import f2py\n\n\n### # 自动扩展模块生成\n\ndef configuration(parent_package='', top_path=None)\nfrom numpy.distutils.misc_util import Configuration\nconfig = Configuration('f2py_examples',parent_package, top_path)\nreturn config\n\nif __name__ == '__main__':\nfrom numpy.distutils.core import setup\nsetup(**configuration(top_path='').todict())\n\n\npip install .\n\n\n## # 用Cython\n\nCythonopen in new window是Python方言的编译器,它为速度添加(可选)静态类型,并允许将C或C ++代码混合到模块中。它生成C或C ++扩展,可以在Python代码中编译和导入。\n\nfrom Cython.Distutils import build_ext\nfrom distutils.extension import Extension\nfrom distutils.core import setup\nimport numpy\n\nsetup(name='mine', description='Nothing',\next_modules=[Extension('filter', ['filter.pyx'],\ninclude_dirs=[numpy.get_include()])],\ncmdclass = {'build_ext':build_ext})\n\n\n### # Cython中的复杂添加\n\ncimport cython\ncimport numpy as np\nimport numpy as np\n\n# We need to initialize NumPy.\nnp.import_array()\n\n#@cython.boundscheck(False)\ncdef double complex[:] a = in1.ravel()\ncdef double complex[:] b = in2.ravel()\n\nout = np.empty(a.shape, np.complex64)\ncdef double complex[:] c = out.ravel()\n\nfor i in range(c.shape):\nc[i].real = a[i].real + b[i].real\nc[i].imag = a[i].imag + b[i].imag\n\nreturn out\n\n\n### # Cython中的图像过滤器\n\ncimport numpy as np\nimport numpy as np\n\nnp.import_array()\n\ndef filter(img):\ncdef double[:, :] a = np.asarray(img, dtype=np.double)\nout = np.zeros(img.shape, dtype=np.double)\ncdef double[:, ::1] b = out\n\ncdef np.npy_intp i, j\n\nfor i in range(1, a.shape - 1):\nfor j in range(1, a.shape - 1):\nb[i, j] = (a[i, j]\n+ .5 * ( a[i-1, j] + a[i+1, j]\n+ a[i, j-1] + a[i, j+1])\n+ .25 * ( a[i-1, j-1] + a[i-1, j+1]\n+ a[i+1, j-1] + a[i+1, j+1]))\n\nreturn out\n\n\nimport image\nout = image.filter(img)\n\n\n### # Cython结论\n\nCython是几个科学Python库的首选扩展机制,包括Scipy,Pandas,SAGE,scikit-image和scikit-learn,以及XML处理库LXML。语言和编译器维护良好。\n\n1. 在编写自定义算法时,有时在包装现有C库时,需要熟悉C语言。特别是,当使用C内存管理(malloc和朋友)时,很容易引入内存泄漏。但是,只是编译重命名的Python模块.pyx 已经可以加快速度,并且添加一些类型声明可以在某些代码中提供显着的加速。\n2. 很容易在Python和C之间失去一个清晰的分离,这使得重用你的C代码用于其他非Python相关项目变得更加困难。\n3. Cython生成的C代码难以阅读和修改(并且通常编译有令人烦恼但无害的警告)。\n\nCython生成的扩展模块的一大优势是它们易于分发。总之,Cython是一个非常强大的工具,可以粘合C代码或快速生成扩展模块,不应该被忽视。它对于不能或不会编写C或Fortran代码的人特别有用。\n\n## # ctypes\n\nctypesopen in new window 是一个包含在stdlib中的Python扩展模块,它允许您直接从Python调用共享库中的任意函数。这种方法允许您直接从Python接口C代码。这开辟了大量可供Python使用的库。然而,缺点是编码错误很容易导致丑陋的程序崩溃(就像C中可能发生的那样),因为对参数进行的类型或边界检查很少。当数组数据作为指向原始内存位置的指针传入时尤其如此。那么你应该负责子程序不会访问实际数组区域之外的内存。但,\n\n1. 有一个共享的库。\n2. 加载共享库。\n3. 将python对象转换为ctypes理解的参数。\n4. 使用ctypes参数从库中调用函数。\n\n### # 加载共享库\n\n• 必须以特殊方式编译共享库( 例如, 使用-shared带有gcc 的标志)。\n• 在某些平台( 例如 Windows)上,共享库需要一个.def文件,该文件指定要导出的函数。例如,mylib.def文件可能包含:\nLIBRARY mylib.dll\nEXPORTS\ncool_function1\ncool_function2\n\n\nPython distutils中没有标准的方法来以跨平台的方式创建标准共享库(扩展模块是Python理解的“特殊”共享库)。因此,在编写本书时,ctypes的一大缺点是难以以跨平台的方式分发使用ctypes的Python扩展并包含您自己的代码,这些代码应编译为用户系统上的共享库。\n\n### # 加载共享库\n\nlib = ctypes.cdll[<full_path_name>]\n\n\nNumPy提供称为ctypeslib.load_library(名称,路径)的便利功能 。此函数采用共享库的名称(包括任何前缀,如'lib'但不包括扩展名)和共享库所在的路径。它返回一个ctypes库对象,或者OSError如果找不到库则引发一个或者ImportError如果ctypes模块不可用则引发一个。(Windows用户:使用加载的ctypes库对象 load_library总是在假定cdecl调用约定的情况下加载。请参阅下面的ctypes文档ctypes.windll和/或ctypes.oledll 了解在其他调用约定下加载库的方法)。\n\n### # 转换参数\n\nPython int / long,字符串和unicode对象会根据需要自动转换为等效的ctypes参数None对象也会自动转换为NULL指针。必须将所有其他Python对象转换为特定于ctypes的类型。围绕此限制有两种方法允许ctypes与其他对象集成。\n\n1. 不要设置函数对象的argtypes属性,并_as_parameter_为要传入的对象定义 方法。该 _as_parameter_方法必须返回一个Python int,它将直接传递给函数。\n2. 将argtypes属性设置为一个列表,其条目包含具有名为from_param的类方法的对象,该类方法知道如何将对象转换为ctypes可以理解的对象(具有该_as_parameter_属性的int / long,字符串,unicode或对象)。\n\nNumPy使用两种方法,优先选择第二种方法,因为它可以更安全。ndarray的ctypes属性返回一个对象,该对象具有一个_as_parameter_返回整数的属性,该整数表示与之关联的ndarray的地址。因此,可以将此ctypes属性对象直接传递给期望指向ndarray中数据的指针的函数。调用者必须确保ndarray对象具有正确的类型,形状,并且设置了正确的标志,否则如果传入指向不适当数组的数据指针则会导致令人讨厌的崩溃。\n\nndarray的ctypes属性还赋予了额外的属性,这些属性在将有关数组的其他信息传递给ctypes函数时可能很方便。属性数据形状步幅可以提供与数据区域,形状和数组步幅相对应的ctypes兼容类型。data属性返回c_void_p表示指向数据区域的指针。shape和strides属性各自返回一个ctypes整数数组(如果是0-d数组,则返回None表示NULL指针)。数组的基本ctype是与平台上的指针大小相同的ctype整数。还有一些方法 data_as({ctype}),和shape_as()strides_as()。它们将数据作为您选择的ctype对象返回,并使用您选择的基础类型返回shape / strides数组。为方便起见,该ctypeslib模块还包含c_intp一个ctypes整数数据类型,其大小c_void_p与平台上的大小相同 (如果未安装ctypes,则其值为None)。\n\n### # 调用函数\n\nlib = numpy.ctypeslib.load_library('mylib','.')\nfunc1 = lib.cool_function1 # or equivalently\nfunc1 = lib['cool_function1']\n\n\nfunc1.restype = None\n\n\nndpointerdtype = Nonendim = Noneshape = Noneflags = None\n\nNone不检查具有该值的关键字参数。指定关键字会强制在转换为与ctypes兼容的对象时检查ndarray的该方面。dtype关键字可以是任何被理解为数据类型对象的对象。ndim关键字应为整数,shape关键字应为整数或整数序列。flags关键字指定传入的任何数组所需的最小标志。这可以指定为逗号分隔要求的字符串,指示需求位OR'd在一起的整数,或者从flags的flags属性返回的flags对象。具有必要要求的数组。\n\n### # 完整的例子\n\n/* Add arrays of contiguous data */\ntypedef struct {double real; double imag;} cdouble;\ntypedef struct {float real; float imag;} cfloat;\nvoid zadd(cdouble *a, cdouble *b, cdouble *c, long n)\n{\nwhile (n--) {\nc->real = a->real + b->real;\nc->imag = a->imag + b->imag;\na++; b++; c++;\n}\n}\n\n\nvoid cadd(cfloat *a, cfloat *b, cfloat *c, long n)\n{\nwhile (n--) {\nc->real = a->real + b->real;\nc->imag = a->imag + b->imag;\na++; b++; c++;\n}\n}\nvoid dadd(double *a, double *b, double *c, long n)\n{\nwhile (n--) {\n*c++ = *a++ + *b++;\n}\n}\nvoid sadd(float *a, float *b, float *c, long n)\n{\nwhile (n--) {\n*c++ = *a++ + *b++;\n}\n}\n\n\ncode.c文件还包含以下功能dfilter2d\n\n/*\n* Assumes b is contiguous and has strides that are multiples of\n* sizeof(double)\n*/\nvoid\ndfilter2d(double *a, double *b, ssize_t *astrides, ssize_t *dims)\n{\nssize_t i, j, M, N, S0, S1;\nssize_t r, c, rm1, rp1, cp1, cm1;\n\nM = dims; N = dims;\nS0 = astrides/sizeof(double);\nS1 = astrides/sizeof(double);\nfor (i = 1; i < M - 1; i++) {\nr = i*S0;\nrp1 = r + S0;\nrm1 = r - S0;\nfor (j = 1; j < N - 1; j++) {\nc = j*S1;\ncp1 = j + S1;\ncm1 = j - S1;\nb[i*N + j] = a[r + c] +\n(a[rp1 + c] + a[rm1 + c] +\na[r + cp1] + a[r + cm1])*0.5 +\n(a[rp1 + cp1] + a[rp1 + cm1] +\na[rm1 + cp1] + a[rm1 + cp1])*0.25;\n}\n}\n}\n\n\ngcc -o code.so -shared code.c\n\n\n__all__ = ['add', 'filter2d']\n\nimport numpy as np\nimport os\n\n_path = os.path.dirname('__file__')\nfor name in _typedict.keys():\nval = getattr(lib, name)\nval.restype = None\n_type = _typedict[name]\nval.argtypes = [np.ctypeslib.ndpointer(_type,\nflags='aligned, contiguous'),\nnp.ctypeslib.ndpointer(_type,\nflags='aligned, contiguous'),\nnp.ctypeslib.ndpointer(_type,\nflags='aligned, contiguous,'\\\n'writeable'),\nnp.ctypeslib.c_intp]\n\n\nlib.dfilter2d.restype=None\nlib.dfilter2d.argtypes = [np.ctypeslib.ndpointer(float, ndim=2,\nflags='aligned'),\nnp.ctypeslib.ndpointer(float, ndim=2,\nflags='aligned, contiguous,'\\\n'writeable'),\nctypes.POINTER(np.ctypeslib.c_intp),\nctypes.POINTER(np.ctypeslib.c_intp)]\n\n\ndef select(dtype):\nif dtype.char in ['?bBhHf']:\nelif dtype.char in ['F']:\nelif dtype.char in ['DG']:\nelse:\nreturn func, ntype\n\n\ndef add(a, b):\nrequires = ['CONTIGUOUS', 'ALIGNED']\na = np.asanyarray(a)\nfunc, dtype = select(a.dtype)\na = np.require(a, dtype, requires)\nb = np.require(b, dtype, requires)\nc = np.empty_like(a)\nfunc(a,b,c,a.size)\nreturn c\n\n\ndef filter2d(a):\na = np.require(a, float, ['ALIGNED'])\nb = np.zeros_like(a)\nlib.dfilter2d(a, b, a.ctypes.strides, a.ctypes.shape)\nreturn b\n\n\n### # ctypes结论\n\n• 从Python代码中清除C代码的分离\n• 除了Python和C之外,无需学习新的语法\n• 允许重复使用C代码\n• 可以使用简单的Python包装器获取为其他目的编写的共享库中的功能并搜索库。\n• 通过ctypes属性轻松与NumPy集成\n• 使用ndpointer类工厂进行完整的参数检查\n\n• 由于缺乏在distutils中构建共享库的支持,很难分发使用ctypes创建的扩展模块(但我怀疑这会随着时间的推移而改变)。\n• 您必须拥有代码的共享库(没有静态库)。\n• 很少支持C ++代码及其不同的库调用约定。你可能需要一个围绕C ++代码的C包装器来与ctypes一起使用(或者只是使用Boost.Python)。\n\n## # 您可能会觉得有用的其他工具\n\n### # SIP\n\nSIP是另一种用于包装特定于Python的C / C ++库的工具,似乎对C ++有很好的支持。Riverbank Computing开发了SIP,以便为QT库创建Python绑定。必须编写接口文件以生成绑定,但接口文件看起来很像C / C ++头文件。虽然SIP不是一个完整的C ++解析器,但它理解了相当多的C ++语法以及它自己的特殊指令,这些指令允许修改Python绑定的完成方式。它还允许用户定义Python类型和C / C ++结构和类之间的映射。\n\n### # 提升Python\n\nBoost是C ++库的存储库,Boost.Python是其中一个库,它提供了一个简洁的接口,用于将C ++类和函数绑定到Python。Boost.Python方法的神奇之处在于它完全在纯C ++中工作而不引入新语法。许多C ++用户报告称,Boost.Python可以无缝地结合两者的优点。我没有使用过Boost.Python,因为我不是C ++的大用户,并且使用Boost来包装简单的C子例程通常都是过度杀戮。它的主要目的是使Python中的C ++类可用。因此,如果您有一组需要完全集成到Python中的C ++类,请考虑学习并使用Boost.Python。\n\n### # PyFort\n\nPyFort是一个很好的工具,可以将Fortran和类似Fortran的C代码包装到Python中,并支持数值数组。它由着名计算机科学家Paul Dubois编写,是Numeric(现已退休)的第一个维护者。值得一提的是希望有人会更新PyFort以使用NumPy数组,现在支持Fortran或C风格的连续数组。"
]
| [
null
]
| {"ft_lang_label":"__label__zh","ft_lang_prob":0.86248666,"math_prob":0.95853794,"size":19310,"snap":"2023-14-2023-23","text_gpt3_token_len":12074,"char_repetition_ratio":0.08955765,"word_repetition_ratio":0.1010101,"special_character_ratio":0.21196271,"punctuation_ratio":0.18123586,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9859268,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T04:05:51Z\",\"WARC-Record-ID\":\"<urn:uuid:af2cda2b-7d6b-43a8-a4f0-a11484499295>\",\"Content-Length\":\"158480\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:85141848-a138-4aab-b7d6-f1086c9fc857>\",\"WARC-Concurrent-To\":\"<urn:uuid:848e472c-db64-440c-ba83-068b1bf047cc>\",\"WARC-IP-Address\":\"47.243.128.119\",\"WARC-Target-URI\":\"https://numpy.org.cn/user/c-info/python-as-glue.html\",\"WARC-Payload-Digest\":\"sha1:7GKIGURZXQNRQKPTVALTFFMQMOZ3DBO2\",\"WARC-Block-Digest\":\"sha1:YHAZZSANVDAXYTZ4HB7GWFK2YGQ7HSFL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655247.75_warc_CC-MAIN-20230609032325-20230609062325-00284.warc.gz\"}"} |
https://zbmath.org/authors/?q=ai:zariski.oscar | [
"## Zariski, Oscar\n\nCompute Distance To:\n Author ID: zariski.oscar",
null,
"Published as: Zariski, Oscar; Zariski, O.; Zariski, Oskar more...less External Links: MacTutor · MGP · Wikidata · Math-Net.Ru · GND · IdRef Awards: Wolf Prize (1981)\n Documents Indexed: 139 Publications since 1924, including 19 Books Biographic References: 8 Publications Co-Authors: 7 Co-Authors with 17 Joint Publications 79 Co-Co-Authors\nall top 5\n\n### Co-Authors\n\n 116 single-authored 6 Samuel, Pierre 3 Muhly, Harry Townsend 3 Mumford, David Bryant 2 Abhyankar, Shreeram Shankar 2 Artin, Michael 2 Barber, Sherburne F. 2 Cohen, Irvin Sol 2 Kmety, Francois 2 Schilling, Otto Franz Georg 2 Teissier, Bernard 1 Falb, Peter L. 1 Hironaka, Heisuke 1 Lipman, Joseph 1 Mazur, Barry 1 Merle, Michel\nall top 5\n\n### Serials\n\n 42 American Journal of Mathematics 13 Annals of Mathematics. Second Series 12 Bulletin of the American Mathematical Society 8 Transactions of the American Mathematical Society 8 Proceedings of the National Academy of Sciences of the United States of America 3 Atti della Accademia Nazionale dei Lincei, Rendiconti, VI. Serie 3 Ergebnisse der Mathematik und ihrer Grenzgebiete 3 Mathematicians of Our Time 2 Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 2 Bulletin des Sciences Mathématiques. Deuxième Série 2 Illinois Journal of Mathematics 2 Atti della Accademia Nazionale dei Lincei. Serie Ottava. Rendiconti. Classe di Scienze Fisiche, Matematiche e Naturali 2 Graduate Texts in Mathematics 1 Uspekhi Matematicheskikh Nauk [N. S.] 1 Annales de l’Institut Fourier 1 Annali di Matematica Pura ed Applicata. Serie Quarta 1 Memoirs of the American Mathematical Society 1 Comptes Rendus Hebdomadaires des Séances de l’Académie des Sciences, Paris 1 Periodico di Matematiche. IV. Serie 1 Memoirs of the College of Science, University of Kyoto, Series A 1 Rendiconti del Circolo Matematico di Palermo 1 Rendiconti Di Matematica e Delle Sue Applicazioni, V. Serie 1 Lecture Notes in Mathematics 1 Princeton Mathematical Series 1 University Lecture Series 1 Rendiconti del Seminario Matematico delle Facoltá di Scienze della R. Universitá di Roma, II. Serie 1 Memorie della Accademia Nazionale dei Lincei, Classe di Scienze Fisiche, Matematiche e Naturali. 6. Serie 1 Accademia dei Lincei, Rendiconti, V. Serie 1 Revista Matemática Hispano-Americana. II. Seria 1 Classics in Mathematics\nall top 5\n\n### Fields\n\n 28 Algebraic geometry (14-XX) 10 Commutative algebra (13-XX) 3 Field theory and polynomials (12-XX) 2 Several complex variables and analytic spaces (32-XX) 1 General and overarching topics; collections (00-XX) 1 History and biography (01-XX) 1 Number theory (11-XX) 1 Geometry (51-XX) 1 Manifolds and cell complexes (57-XX)\n\n### Citations contained in zbMATH Open\n\n105 Publications have been cited 3,334 times in 2,674 Documents Cited by Year\nCommutative algebra. Vol. II. Zbl 0121.27801\nZariski, Oscar; Samuel, Pierre\n1960\nCommutative algebra. Vol. 1. With the cooperation of I. S. Cohen. Zbl 0081.26501\nZariski, Oscar; Samuel, Pierre\n1958\nCommutative algebra. Vol. II. Reprint of the 1958-1960 Van Nostrand edition. Zbl 0322.13001\nZariski, Oscar; Samuel, Pierre\n1976\nCommutative algebra. Vol. II. Übersetzung aus dem Englischen von E. S. Golod, S. P. Demuškin und A. N. Tyurin. Unter Redaktion von A. I. Uzkov. (Коммутативная алгтебра. Том II.) Zbl 0121.27901\nZariski, O.; Samuel, P.\n1963\nThe theorem of Riemann-Roch for high multiples of an effective divisor on an algebraic surface. Zbl 0124.37001\nZariski, Oscar\n1962\nStudies in equisingularity. I: Equivalent singularities of plane algebroid curves. Zbl 0132.41601\nZariski, O.\n1965\nCommutative Algebra. Vol. 1. With the cooperation of I. S. Cohen. 2nd ed. Zbl 0313.13001\nZariski, Oscar; Samuel, Pierre\n1975\nOn the problem of existence of algebraic functions of two variables possessing a given branch curve. JFM 55.0806.01\nZariski, O.\n1929\nAlgebraic surfaces. With appendices by S.S. Abhyankar, J. Lipman, and D. Mumford. 2nd suplemented ed. Zbl 0219.14020\nZariski, O.\n1971\nSome open questions in the theory of singularities. Zbl 0236.14002\nZariski, Oscar\n1971\nStudies in equisingularity. II: Equisingularity in codimension 1 (and characteristic zero). Zbl 0146.42502\nZariski, O.\n1965\nCommutative algebra. Vol. 1. Übersetzung aus dem Englischen von O.N. Vvedenskii, S.P. Demuskin und A.N. Tjurin. Zbl 0112.02902\nZariski, O.; Samuel, P.\n1963\nThe reduction of the singularities of an algebraic surface. JFM 65.1399.03\nZariski, O.\n1939\nStudies in equisingularity. III: Saturation of local rings and equisingularity. Zbl 0189.21405\nZariski, O.\n1968\nLocal uniformization on algebraic varieties. JFM 66.1327.02\nZariski, O.\n1940\nOn the purity of the branch locus of algebraic functions. Zbl 0087.35703\nZariski, Oscar\n1958\nIntroduction to the problem of minimal models in the theory of algebraic surfaces. Zbl 0093.33904\nZariski, Oscar\n1958\nInterprétations algébrico-géométriques du quatorzième problème de Hilbert. Zbl 0056.39602\nZariski, O.\n1954\nOn the topology of algebroid singularities. JFM 58.0614.02\nZariski, O.\n1932\nThe concept of a simple point of an abstract algebraic variety. Zbl 0031.26101\nZariski, Oscar\n1947\nThe reduction of the singularities of an algebraic surface. Zbl 0021.25303\nZariski, Oscar\n1939\nFoundations of a general theory of birational correspondences. Zbl 0061.33004\nZariski, Oscar\n1943\nPolynomial ideals defined by infinitely near base points. Zbl 0018.20101\nZariski, Oscar\n1938\nLe problème des modules pour les branches planes. Redige par Francois Kmety et Michel Merle. Avec un appendice de Bernard Teissier. Zbl 0317.14004\nZariski, Oscar\n1973\nThe compactness of the Riemann manifold of an abstract field of algebraic functions. Zbl 0063.08390\nZariski, Oscar\n1944\nReduction of the singularities of algebraic three dimensional varieties. Zbl 0063.08361\nZariski, Oscar\n1944\nOn Castelnuovo’s criterion of rationality P$$_a$$=P$$_2$$=0 of an algebraic surface. Zbl 0085.36203\nZariski, Oscar\n1958\nOn the irregularity of cyclic multiple planes. Zbl 0001.40301\nZariski, Oscar\n1931\nLocal uniformization on algebraic varieties. Zbl 0025.21601\nZariski, Oscar\n1940\nAlgebraic surfaces. Zbl 0010.37103\nZariski, O.\n1935\nAlgebraic surfaces. JFM 61.0704.01\nZariski, O.\n1935\nCharacterization of plane algebroid curves whose module of differentials has maxim torsion. Zbl 0144.20201\nZariski, O.\n1966\nDimension-theoretic characterization of maximal irreducible algebraic systems of plane nodal curves of a given order n and with a given number d of nodes. Zbl 0516.14023\nZariski, Oscar\n1982\nOn the topology of algebroid singularities. Zbl 0004.36902\nZariski, Oscar\n1932\nA simplified proof for the resolution of singularities of an algebraic surface. Zbl 0063.08389\nZariski, Oscar\n1942\nThe topological discriminant group of a Riemann surface of genus p. Zbl 0016.32502\nZariski, Oscar\n1937\nPolynomial ideals defined by infinitely near base points. JFM 64.0079.01\nZariski, O.\n1938\nThe problem of minimal models in the theory of algebraic surfaces. Zbl 0085.36202\nZariski, Oscar\n1958\nGeneral theory of saturation and of saturated local rings. II: Saturated local rings of dimension 1. Zbl 0228.13007\nZariski, Oscar\n1971\nLe problème des modules pour les branches planes. Cours donné au Centre de Mathématiques de l’École Polytechnique. Nouvelle éd. revue par l’auteur. Rédigé par François Kmety et Michel Merle. Avec un appendice de Bernard Teissier. Zbl 0592.14010\nZariski, Oscar\n1986\nA theorem on the Poincaré group of an algebraic hypersurface. Zbl 0016.04102\nZariski, Oscar\n1937\nPencils on an algebraic variety and a new proof of a theorem of Bertini. Zbl 0025.21502\nZariski, Oscar\n1941\nTheory and applications of holomorpic functions on algebraic varieties over arbitrary ground fields. Zbl 0045.24001\nZariski, Oscar\n1951\nComplete linear systems on normal varieties and a generalization of a lemma of Enriques-Severi. Zbl 0047.14803\nZariski, Oscar\n1952\nThe theorem of Bertini on variable singular points of a linear system of varieties. Zbl 0061.33101\nZariski, Oscar\n1944\nOn the Poincaré group of rational plane curves. Zbl 0014.32801\nZariski, Oscar\n1936\nAnalytical irreducibility of normal varieties. Zbl 0037.22701\nZariski, Oscar\n1948\nThe topological discriminant group of a Riemann surface of genus $$p$$. JFM 63.0338.02\nZariski, O.\n1937\nAn introduction to the theory of algebraic surfaces. Zbl 0177.49001\nZariski, O.\n1969\nThe moduli problem for plane branches. With an appendix by Bernard Teissier. Transl. from the French by Ben Lichtin. Zbl 1107.14021\nZariski, Oscar\n2006\nOn the linear connection index of the algebraic surfaces $$z^n=f(x,y)$$. JFM 55.0806.02\nZariski, O.\n1929\nContributions to the problem of equisingularity. Zbl 0204.54503\nZariski, O.\n1970\nA new proof of Hilbert’s Nullstellensatz. Zbl 0032.26001\nZariski, Oscar\n1947\nOn the Poincaré group of rational plane curves. JFM 62.0758.02\nZariski, O.\n1936\nA fundamental lemma from the theory of holomorphic functions on an algebraic variety. Zbl 0039.03301\nZariski, Oscar\n1949\nPencils on an algebraic variety and a new proof of a theorem of Bertini. JFM 67.0618.01\nZariski, O.\n1941\nSome results in the arithmetic theory of algebraic varieties. JFM 65.0118.02\nZariski, O.\n1939\nExceptional singularities of an algebroid surface and their reduction. Zbl 0168.18903\nZariski, O.\n1967\nA theorem on the Poincaré group of an algebraic hypersurface. JFM 63.0621.03\nZariski, O.\n1937\nFoundations of a general theory of equisingularity on r-dimensional algebroid and algebraic varieties, of embedding dimension r+1. Zbl 0417.14008\nZariski, Oscar\n1979\nOn a theorem of Eddington. JFM 58.0105.02\nZariski, O.\n1932\nSur la normalité analytique des variétés normales. Zbl 0044.26601\nZariski, Oscar\n1950\nScientific report on the Second Summer Institute, Several Complex Variables. Part III: Algebraic sheaf theory. Zbl 0074.15703\nZariski, Oskar\n1956\nSome results in the arithmetic theory of algebraic varieties. Zbl 0020.39101\nZariski, Oscar\n1939\nOn the irregularity of cyclic multiple planes. JFM 57.0440.03\nZariski, O.\n1931\nGeneral theory of saturation and of saturated local rings. I: Saturation of complete local domains of dimension one having arbitrary coefficient fields (of characteristic zero). Zbl 0226.13013\nZariski, Oscar\n1971\nOn equimultiple subvarieties of algebroid hypersurfaces. Zbl 0304.14008\nZariski, Oscar\n1975\nGeneral theory of saturation and of saturated local rings. III: Saturation in arbitrary dimension and, in particular, saturation of algebroid hypersurfaces. Zbl 0306.13009\nZariski, Oscar\n1975\nAlgebraic varieties over ground fields of characteristic zero. Zbl 0022.30501\nZariski, Oscar\n1940\nA simple analytical proof of a fundamental property of birational transformations. Zbl 0037.16401\nZariski, Oscar\n1949\nAlgebraic surfaces: with appendices by S. S. Abhyankar, J. Lipman and D. Mumford. Reprint of the 2nd suppl. ed. 1971. Zbl 0845.14021\nZariski, Oscar\n1995\nNormal varieties and birational correspondences. Zbl 0063.08388\nZariski, Oscar\n1942\nA fundamental inequality in the theory of extensions of valuations. Zbl 0079.05601\nCohen, I. S.; Zariski, Oscar\n1957\nSull’ impossibilità di risolvere parametricamente per radicali un’equazione algebrica $$f (x,y) = 0$$ di genere $$p > 6$$ a moduli generali. JFM 52.0652.05\nZariski, O.\n1926\nCollected papers. Vol. III: Topology of curves and surfaces, and special topics in the theory of algebraic varieties. Edited and with an introduction by M. Artin and B. Mazur. Zbl 0446.14001\nZariski, Oscar\n1978\nGeneralized weight properties of the resultant of $$n + 1$$ polynomials in $$n$$ indeterminates. JFM 63.0047.01\nZariski, O.\n1937\nApplicazioni geometriche della teoria delle valutazioni. Zbl 0055.38801\nZariski, Oscar\n1954\nReducible exceptional curves of the first kind. JFM 61.0707.01\nBarber, S. F.; Zariski, O.\n1935\nOn the non-existence of curves of order 8 with 16 cusps. JFM 57.0823.03\nZariski, O.\n1931\nOn a theorem of Severi. JFM 54.0698.01\nZariski, O.\n1928\nCollected papers. Vol. I: Foundations of algebraic geometry and resolution of singularities. Edited by H. Hironaka and D. Mumford. Zbl 0234.14001\nZariski, Oscar\n1972\nReducible exceptional curves of the first kind. Zbl 0010.37104\nBarber, S. F.; Zariski, Oscar\n1935\nGeneralized weight properties of the resultant of $$n+1$$ polynomials in $$n$$ indeterminates. Zbl 0016.10001\nZariski, Oscar\n1937\nHilbert’s characteristic function and the arithmetic genus of an algebraic variety. Zbl 0041.08403\nMuhly, H. T.; Zariski, O.\n1950\nGeneralized semi-local rings. Zbl 0063.08392\nZariski, Oscar\n1946\nSplitting of valuations in extensions of local domains. I. Zbl 0064.27204\nAbhyankar, Shreeram; Zariski, Oscar\n1955\nThe connectedness theorem for birational transformations. Zbl 0087.35601\nZariski, Oscar\n1957\nCollected papers. Volume II: Holomorphic functions and linear systems. Ed. by M. Artin and D. Mumford. Zbl 0564.14001\nZariski, Oscar\n1973\nAlgebraic varieties over ground fields of characteristic zero. JFM 66.0792.01\nZariski, O.\n1940\nOn the irregularity of cyclic multiple planes. JFM 57.0830.13\nZariski, O.\n1931\nOn hyperelliptic $$\\varTheta$$-functions with rational characteristics. JFM 54.0410.05\nZariski, O.\n1928\nSopra una classe di equazioni algebriche contenenti linearmente un parametro e risolubili per radicali. JFM 52.0653.01\nZariski, O.\n1926\nOn algebraic equations containing linearly a parameter and solvable by radicals. (Sulle equazioni algebriche contenenti linearmente un parametro e risolubili per radicali.) JFM 50.0048.02\nZariski, O.\n1924\nOn differentials in function fields. Zbl 0100.03402\nZariski, Oscar; Falb, Peter\n1961\nLa risoluzione delle singolarita delle superficie algebriche immerse. I, II. Zbl 0105.14301\nZariski, O.\n1961\nA new operation (’saturation’) on local rings and applications to classification of singularities. Zbl 0154.20805\nZariski, O.\n1966\nThe elusive concept of equisingularity and related questions. Zbl 0422.14006\nZariski, Oscar\n1977\nAddendum to my paper ”Foundations of a general theory of equisingularity on r-dimensional algebroid and algebraic varieties, of embedding dimension r+1”. Zbl 0455.14004\nZariski, Oscar\n1980\nOn the problem of irreducibility of the algebraic system of irreducible plane curves of a given order and having a given number of nodes. Zbl 0597.14022\nZariski, Oscar\n1983\nSome open questions in the theory of singularities. Zbl 0241.14003\nZariski, Oscar\n1972\nThe moduli problem for plane branches. With an appendix by Bernard Teissier. Transl. from the French by Ben Lichtin. Zbl 1107.14021\nZariski, Oscar\n2006\nAlgebraic surfaces: with appendices by S. S. Abhyankar, J. Lipman and D. Mumford. Reprint of the 2nd suppl. ed. 1971. Zbl 0845.14021\nZariski, Oscar\n1995\nLe problème des modules pour les branches planes. Cours donné au Centre de Mathématiques de l’École Polytechnique. Nouvelle éd. revue par l’auteur. Rédigé par François Kmety et Michel Merle. Avec un appendice de Bernard Teissier. Zbl 0592.14010\nZariski, Oscar\n1986\nOn the problem of irreducibility of the algebraic system of irreducible plane curves of a given order and having a given number of nodes. Zbl 0597.14022\nZariski, Oscar\n1983\nDimension-theoretic characterization of maximal irreducible algebraic systems of plane nodal curves of a given order n and with a given number d of nodes. Zbl 0516.14023\nZariski, Oscar\n1982\nAddendum to my paper ”Foundations of a general theory of equisingularity on r-dimensional algebroid and algebraic varieties, of embedding dimension r+1”. Zbl 0455.14004\nZariski, Oscar\n1980\nFoundations of a general theory of equisingularity on r-dimensional algebroid and algebraic varieties, of embedding dimension r+1. Zbl 0417.14008\nZariski, Oscar\n1979\nCollected papers. Vol. III: Topology of curves and surfaces, and special topics in the theory of algebraic varieties. Edited and with an introduction by M. Artin and B. Mazur. Zbl 0446.14001\nZariski, Oscar\n1978\nThe elusive concept of equisingularity and related questions. Zbl 0422.14006\nZariski, Oscar\n1977\nCommutative algebra. Vol. II. Reprint of the 1958-1960 Van Nostrand edition. Zbl 0322.13001\nZariski, Oscar; Samuel, Pierre\n1976\nCommutative Algebra. Vol. 1. With the cooperation of I. S. Cohen. 2nd ed. Zbl 0313.13001\nZariski, Oscar; Samuel, Pierre\n1975\nOn equimultiple subvarieties of algebroid hypersurfaces. Zbl 0304.14008\nZariski, Oscar\n1975\nGeneral theory of saturation and of saturated local rings. III: Saturation in arbitrary dimension and, in particular, saturation of algebroid hypersurfaces. Zbl 0306.13009\nZariski, Oscar\n1975\nLe problème des modules pour les branches planes. Redige par Francois Kmety et Michel Merle. Avec un appendice de Bernard Teissier. Zbl 0317.14004\nZariski, Oscar\n1973\nCollected papers. Volume II: Holomorphic functions and linear systems. Ed. by M. Artin and D. Mumford. Zbl 0564.14001\nZariski, Oscar\n1973\nCollected papers. Vol. I: Foundations of algebraic geometry and resolution of singularities. Edited by H. Hironaka and D. Mumford. Zbl 0234.14001\nZariski, Oscar\n1972\nSome open questions in the theory of singularities. Zbl 0241.14003\nZariski, Oscar\n1972\nAlgebraic surfaces. With appendices by S.S. Abhyankar, J. Lipman, and D. Mumford. 2nd suplemented ed. Zbl 0219.14020\nZariski, O.\n1971\nSome open questions in the theory of singularities. Zbl 0236.14002\nZariski, Oscar\n1971\nGeneral theory of saturation and of saturated local rings. II: Saturated local rings of dimension 1. Zbl 0228.13007\nZariski, Oscar\n1971\nGeneral theory of saturation and of saturated local rings. I: Saturation of complete local domains of dimension one having arbitrary coefficient fields (of characteristic zero). Zbl 0226.13013\nZariski, Oscar\n1971\nContributions to the problem of equisingularity. Zbl 0204.54503\nZariski, O.\n1970\nAn introduction to the theory of algebraic surfaces. Zbl 0177.49001\nZariski, O.\n1969\nStudies in equisingularity. III: Saturation of local rings and equisingularity. Zbl 0189.21405\nZariski, O.\n1968\nExceptional singularities of an algebroid surface and their reduction. Zbl 0168.18903\nZariski, O.\n1967\nCharacterization of plane algebroid curves whose module of differentials has maxim torsion. Zbl 0144.20201\nZariski, O.\n1966\nA new operation (’saturation’) on local rings and applications to classification of singularities. Zbl 0154.20805\nZariski, O.\n1966\nStudies in equisingularity. I: Equivalent singularities of plane algebroid curves. Zbl 0132.41601\nZariski, O.\n1965\nStudies in equisingularity. II: Equisingularity in codimension 1 (and characteristic zero). Zbl 0146.42502\nZariski, O.\n1965\nCommutative algebra. Vol. II. Übersetzung aus dem Englischen von E. S. Golod, S. P. Demuškin und A. N. Tyurin. Unter Redaktion von A. I. Uzkov. (Коммутативная алгтебра. Том II.) Zbl 0121.27901\nZariski, O.; Samuel, P.\n1963\nCommutative algebra. Vol. 1. Übersetzung aus dem Englischen von O.N. Vvedenskii, S.P. Demuskin und A.N. Tjurin. Zbl 0112.02902\nZariski, O.; Samuel, P.\n1963\nThe theorem of Riemann-Roch for high multiples of an effective divisor on an algebraic surface. Zbl 0124.37001\nZariski, Oscar\n1962\nOn differentials in function fields. Zbl 0100.03402\nZariski, Oscar; Falb, Peter\n1961\nLa risoluzione delle singolarita delle superficie algebriche immerse. I, II. Zbl 0105.14301\nZariski, O.\n1961\nCommutative algebra. Vol. II. Zbl 0121.27801\nZariski, Oscar; Samuel, Pierre\n1960\nCommutative algebra. Vol. 1. With the cooperation of I. S. Cohen. Zbl 0081.26501\nZariski, Oscar; Samuel, Pierre\n1958\nOn the purity of the branch locus of algebraic functions. Zbl 0087.35703\nZariski, Oscar\n1958\nIntroduction to the problem of minimal models in the theory of algebraic surfaces. Zbl 0093.33904\nZariski, Oscar\n1958\nOn Castelnuovo’s criterion of rationality P$$_a$$=P$$_2$$=0 of an algebraic surface. Zbl 0085.36203\nZariski, Oscar\n1958\nThe problem of minimal models in the theory of algebraic surfaces. Zbl 0085.36202\nZariski, Oscar\n1958\nA fundamental inequality in the theory of extensions of valuations. Zbl 0079.05601\nCohen, I. S.; Zariski, Oscar\n1957\nThe connectedness theorem for birational transformations. Zbl 0087.35601\nZariski, Oscar\n1957\nScientific report on the Second Summer Institute, Several Complex Variables. Part III: Algebraic sheaf theory. Zbl 0074.15703\nZariski, Oskar\n1956\nSplitting of valuations in extensions of local domains. I. Zbl 0064.27204\nAbhyankar, Shreeram; Zariski, Oscar\n1955\nInterprétations algébrico-géométriques du quatorzième problème de Hilbert. Zbl 0056.39602\nZariski, O.\n1954\nApplicazioni geometriche della teoria delle valutazioni. Zbl 0055.38801\nZariski, Oscar\n1954\nLe problème de la réduction des singularités d’une variété algébrique. Zbl 0055.38802\nZariski, O.\n1954\nComplete linear systems on normal varieties and a generalization of a lemma of Enriques-Severi. Zbl 0047.14803\nZariski, Oscar\n1952\nThe fundamental ideas of abstract algebraic geometry. Zbl 0049.22701\nZariski, Oscar\n1952\nTheory and applications of holomorpic functions on algebraic varieties over arbitrary ground fields. Zbl 0045.24001\nZariski, Oscar\n1951\nSur la normalité analytique des variétés normales. Zbl 0044.26601\nZariski, Oscar\n1950\nHilbert’s characteristic function and the arithmetic genus of an algebraic variety. Zbl 0041.08403\nMuhly, H. T.; Zariski, O.\n1950\nA fundamental lemma from the theory of holomorphic functions on an algebraic variety. Zbl 0039.03301\nZariski, Oscar\n1949\nA simple analytical proof of a fundamental property of birational transformations. Zbl 0037.16401\nZariski, Oscar\n1949\nAnalytical irreducibility of normal varieties. Zbl 0037.22701\nZariski, Oscar\n1948\nThe concept of a simple point of an abstract algebraic variety. Zbl 0031.26101\nZariski, Oscar\n1947\nA new proof of Hilbert’s Nullstellensatz. Zbl 0032.26001\nZariski, Oscar\n1947\nGeneralized semi-local rings. Zbl 0063.08392\nZariski, Oscar\n1946\nThe compactness of the Riemann manifold of an abstract field of algebraic functions. Zbl 0063.08390\nZariski, Oscar\n1944\nReduction of the singularities of algebraic three dimensional varieties. Zbl 0063.08361\nZariski, Oscar\n1944\nThe theorem of Bertini on variable singular points of a linear system of varieties. Zbl 0061.33101\nZariski, Oscar\n1944\nFoundations of a general theory of birational correspondences. Zbl 0061.33004\nZariski, Oscar\n1943\nA simplified proof for the resolution of singularities of an algebraic surface. Zbl 0063.08389\nZariski, Oscar\n1942\nNormal varieties and birational correspondences. Zbl 0063.08388\nZariski, Oscar\n1942\nPencils on an algebraic variety and a new proof of a theorem of Bertini. Zbl 0025.21502\nZariski, Oscar\n1941\nPencils on an algebraic variety and a new proof of a theorem of Bertini. JFM 67.0618.01\nZariski, O.\n1941\nLocal uniformization on algebraic varieties. JFM 66.1327.02\nZariski, O.\n1940\nLocal uniformization on algebraic varieties. Zbl 0025.21601\nZariski, Oscar\n1940\nAlgebraic varieties over ground fields of characteristic zero. Zbl 0022.30501\nZariski, Oscar\n1940\nAlgebraic varieties over ground fields of characteristic zero. JFM 66.0792.01\nZariski, O.\n1940\nThe reduction of the singularities of an algebraic surface. JFM 65.1399.03\nZariski, O.\n1939\nThe reduction of the singularities of an algebraic surface. Zbl 0021.25303\nZariski, Oscar\n1939\nSome results in the arithmetic theory of algebraic varieties. JFM 65.0118.02\nZariski, O.\n1939\nSome results in the arithmetic theory of algebraic varieties. Zbl 0020.39101\nZariski, Oscar\n1939\nThe resolution of singularities of an algebraic curve. Zbl 0020.16001\nMuhly, H. T.; Zariski, O.\n1939\nPolynomial ideals defined by infinitely near base points. Zbl 0018.20101\nZariski, Oscar\n1938\nPolynomial ideals defined by infinitely near base points. JFM 64.0079.01\nZariski, O.\n1938\nThe topological discriminant group of a Riemann surface of genus p. Zbl 0016.32502\nZariski, Oscar\n1937\nA theorem on the Poincaré group of an algebraic hypersurface. Zbl 0016.04102\nZariski, Oscar\n1937\nThe topological discriminant group of a Riemann surface of genus $$p$$. JFM 63.0338.02\nZariski, O.\n1937\nA theorem on the Poincaré group of an algebraic hypersurface. JFM 63.0621.03\nZariski, O.\n1937\nGeneralized weight properties of the resultant of $$n + 1$$ polynomials in $$n$$ indeterminates. JFM 63.0047.01\nZariski, O.\n1937\nGeneralized weight properties of the resultant of $$n+1$$ polynomials in $$n$$ indeterminates. Zbl 0016.10001\nZariski, Oscar\n1937\nOn the Poincaré group of rational plane curves. Zbl 0014.32801\nZariski, Oscar\n1936\nOn the Poincaré group of rational plane curves. JFM 62.0758.02\nZariski, O.\n1936\nA topological proof of the Riemann-Roch theorem on an algebraic curve. Zbl 0013.07602\nZariski, Oscar\n1936\nAlgebraic surfaces. Zbl 0010.37103\nZariski, O.\n1935\nAlgebraic surfaces. JFM 61.0704.01\nZariski, O.\n1935\nReducible exceptional curves of the first kind. JFM 61.0707.01\nBarber, S. F.; Zariski, O.\n1935\nReducible exceptional curves of the first kind. Zbl 0010.37104\nBarber, S. F.; Zariski, Oscar\n1935\nOn the topology of algebroid singularities. JFM 58.0614.02\nZariski, O.\n1932\nOn the topology of algebroid singularities. Zbl 0004.36902\nZariski, Oscar\n1932\nOn a theorem of Eddington. JFM 58.0105.02\nZariski, O.\n1932\nOn the irregularity of cyclic multiple planes. Zbl 0001.40301\nZariski, Oscar\n1931\nOn the irregularity of cyclic multiple planes. JFM 57.0440.03\nZariski, O.\n1931\nOn the non-existence of curves of order 8 with 16 cusps. JFM 57.0823.03\nZariski, O.\n1931\nOn the irregularity of cyclic multiple planes. JFM 57.0830.13\nZariski, O.\n1931\nOn the non-existence of curves of order 8 with 16 cusps. Zbl 0001.22604\nZariski, Oscar\n1931\nOn the problem of existence of algebraic functions of two variables possessing a given branch curve. JFM 55.0806.01\nZariski, O.\n1929\nOn the linear connection index of the algebraic surfaces $$z^n=f(x,y)$$. JFM 55.0806.02\nZariski, O.\n1929\n...and 5 more Documents\nall top 5\n\n### Cited by 2,252 Authors\n\n 55 Heinzer, William J. 32 Cutkosky, Steven Dale 29 Abhyankar, Shreeram Shankar 29 Ratliff, Louis J. jun. 26 Gilmer, Robert W. jun. 22 Artal Bartolo, Enrique 18 Kuhlmann, Franz-Viktor 17 Olberding, Bruce M. 16 Dobbs, David Earl 16 Rush, David E. 14 Eyral, Christophe 14 Fontana, Marco 14 Nobile, Augusto 13 Galindo Pastor, Carlos 13 Mordeson, John N. 13 Oka, Mutsuo 13 Warner, Seth 12 Cogolludo Agustín, José Ignacio 11 Sampaio, José Edson 11 Spivakovsky, Mark 11 Tokunaga, Hiro-o 10 Lê Dûng Tráng 10 Parusiński, Adam 10 Piltant, Olivier 10 Spirito, Dario 10 Teicher, Mina 10 Teissier, Bernard 9 Campillo, Antonio 9 Ciliberto, Ciro 9 Finocchiaro, Carmelo Antonio 9 Gonçalves, Daciberg Lima 9 Greco, Silvio 9 Guaschi, John 9 Hernandes, Marcelo Escudeiro 9 Kang, Ming-Chang 9 Verma, Jugal Kishore 9 Villamayor Uriburu, Orlando Eugenio 9 Yau, Stephen Shing-Toung 8 Degtyarev, Alex 8 García Barroso, Evelia Rosa 8 González Pérez, Pedro Daniel 8 Gurjar, Rajendra Vasant 8 Marchionna, Ermanno 8 Monserrat, Francisco 8 Reguera, Ana-José 8 Shustin, Eugenii Isaakovich 8 Vasconcelos, Wolmer V. 7 Boucksom, Sébastien 7 Brown, William C. 7 Chistov, Alexandre L. 7 Favre, Charles 7 Greuel, Gert-Martin 7 Jonsson, Mattias 7 Kim, Mee-Kyoung 7 Kiyek, Karl-Heinz 7 Küronya, Alex 7 Libgober, Anatoly S. 7 Mott, Joe Leonard 7 O’Carroll, Liam 7 Ohm, Jack 7 Popescu-Pampu, Patrick 7 Vogel, Wolfgang 6 Becker, Joseph A. 6 Debremaeker, Raymond 6 Guerville-Ballé, Benoît 6 Harbourne, Brian 6 Hauser, Herwig 6 Hefez, Abramo 6 Hochster, Melvin 6 Kaliman, Shulim I. 6 Kleiman, Steven Lawrence 6 Kuhlmann, Norbert 6 Kulikov, Viktor Stepanovich 6 Kuroda, Shigeru 6 Loper, Kenneth Alan 6 Luengo, Ignacio 6 Ngô Viêt Trung 6 Noh, Sunsook 6 Nuño-Ballesteros, Juan José 6 Pfister, Gerhard 6 Risler, Jean-Jacques 6 Schicho, Josef 6 Van Tuyl, Adam 6 Zaidenberg, Mikhail G. 6 Zariski, Oscar 5 Alling, Norman Larrabee 5 Amram, Meirav 5 Baldassarri Ghezzo, Santuzza 5 Bouchiba, Samir 5 Butts, Hubert S. 5 Cossart, Vincent 5 Daigle, Daniel 5 Delgado, Felix 5 Eakin, Paul M. jun. 5 Granja, Angel 5 Grifo, Eloísa 5 Guo, Kunyu 5 Herrmann, Manfred 5 Herzog, Jürgen 5 Huneke, Craig L. ...and 2,152 more Authors\nall top 5\n\n### Cited in 291 Serials\n\n 236 Journal of Algebra 183 Transactions of the American Mathematical Society 154 Proceedings of the American Mathematical Society 147 Mathematische Annalen 139 Journal of Pure and Applied Algebra 80 Communications in Algebra 80 Mathematische Zeitschrift 77 Inventiones Mathematicae 63 Compositio Mathematica 54 Manuscripta Mathematica 50 Archiv der Mathematik 43 Advances in Mathematics 40 Annales de l’Institut Fourier 37 Annali di Matematica Pura ed Applicata. Serie Quarta 37 Rendiconti del Seminario Matematico della Università di Padova 35 Bulletin de la Société Mathématique de France 32 Journal of Symbolic Computation 27 Bulletin of the American Mathematical Society 24 Duke Mathematical Journal 23 Proceedings of the Japan Academy. Series A 22 Mathematical Proceedings of the Cambridge Philosophical Society 21 Publications Mathématiques 20 Israel Journal of Mathematics 20 Linear Algebra and its Applications 19 Mathematical Notes 19 Bulletin of the American Mathematical Society. New Series 18 Nagoya Mathematical Journal 18 Topology and its Applications 17 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 17 Journal of Algebraic Geometry 16 International Journal of Mathematics 16 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 16 Journal of Mathematical Sciences (New York) 15 Journal of Algebra and its Applications 14 Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 14 Algebra and Logic 14 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 13 Rocky Mountain Journal of Mathematics 13 Monatshefte für Mathematik 13 Comptes Rendus. Mathématique. Académie des Sciences, Paris 12 Journal of Number Theory 12 Rendiconti del Circolo Matemàtico di Palermo. Serie II 12 Rendiconti del Seminario Matemàtico e Fisico di Milano 11 Journal für die Reine und Angewandte Mathematik 11 Michigan Mathematical Journal 11 Bulletin de la Société Mathématique de France. Supplément. Mémoires 10 Transformation Groups 10 Journal of Commutative Algebra 9 Fuzzy Sets and Systems 9 Journal of the Mathematical Society of Japan 9 Results in Mathematics 9 Tôhoku Mathematical Journal. Second Series 9 Annali della Scuola Normale Superiore di Pisa. Scienze Fisiche e Matematiche. III. Ser 8 Czechoslovak Mathematical Journal 8 Functional Analysis and its Applications 8 Siberian Mathematical Journal 8 Journal of Knot Theory and its Ramifications 8 European Journal of Mathematics 7 Arkiv för Matematik 7 Acta Mathematica 7 Geometriae Dedicata 7 Journal of Soviet Mathematics 7 Memoirs of the American Mathematical Society 7 Revista Matemática Iberoamericana 7 Applicable Algebra in Engineering, Communication and Computing 7 Finite Fields and their Applications 7 Revista Matemática Complutense 7 Proceedings of the Japan Academy 6 Astronomische Nachrichten 6 Bulletin of the Australian Mathematical Society 6 Journal of Mathematical Analysis and Applications 6 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV 6 Kodai Mathematical Journal 6 Mathematische Nachrichten 6 Proceedings of the Edinburgh Mathematical Society. Series II 6 Publications of the Research Institute for Mathematical Sciences, Kyoto University 6 Computer Aided Geometric Design 6 Geometry & Topology 6 Journal of Singularities 5 Archive for History of Exact Sciences 5 Discrete Mathematics 5 Mathematics of Computation 5 Beiträge zur Algebra und Geometrie 5 Algebra Universalis 5 Mathematica Slovaca 5 Osaka Journal of Mathematics 5 Journal of the American Mathematical Society 5 International Journal of Algebra and Computation 5 Designs, Codes and Cryptography 5 Expositiones Mathematicae 5 St. Petersburg Mathematical Journal 5 Selecta Mathematica. New Series 5 Annals of Mathematics. Second Series 5 Algebraic & Geometric Topology 5 Bulletin of the Brazilian Mathematical Society. New Series 4 Communications in Mathematical Physics 4 Acta Mathematica Vietnamica 4 Collectanea Mathematica 4 Journal of Differential Equations 4 Mathematika ...and 191 more Serials\nall top 5\n\n### Cited in 55 Fields\n\n 1,212 Algebraic geometry (14-XX) 926 Commutative algebra (13-XX) 362 Several complex variables and analytic spaces (32-XX) 216 Field theory and polynomials (12-XX) 149 Number theory (11-XX) 149 Group theory and generalizations (20-XX) 143 Associative rings and algebras (16-XX) 107 Manifolds and cell complexes (57-XX) 49 Computer science (68-XX) 42 Linear and multilinear algebra; matrix theory (15-XX) 42 Nonassociative rings and algebras (17-XX) 41 Mathematical logic and foundations (03-XX) 37 Global analysis, analysis on manifolds (58-XX) 34 Functional analysis (46-XX) 32 Algebraic topology (55-XX) 30 Dynamical systems and ergodic theory (37-XX) 27 Category theory; homological algebra (18-XX) 27 Functions of a complex variable (30-XX) 26 Combinatorics (05-XX) 26 Convex and discrete geometry (52-XX) 26 Differential geometry (53-XX) 22 Systems theory; control (93-XX) 21 Topological groups, Lie groups (22-XX) 21 Information and communication theory, circuits (94-XX) 18 History and biography (01-XX) 18 Numerical analysis (65-XX) 17 Order, lattices, ordered algebraic structures (06-XX) 17 Ordinary differential equations (34-XX) 16 General algebraic systems (08-XX) 13 $$K$$-theory (19-XX) 13 Real functions (26-XX) 13 General topology (54-XX) 13 Quantum theory (81-XX) 12 Approximations and expansions (41-XX) 12 Geometry (51-XX) 11 Operator theory (47-XX) 9 Special functions (33-XX) 8 Partial differential equations (35-XX) 7 Difference and functional equations (39-XX) 5 Relativity and gravitational theory (83-XX) 4 Measure and integration (28-XX) 4 Integral transforms, operational calculus (44-XX) 4 Mechanics of particles and systems (70-XX) 3 Potential theory (31-XX) 3 Harmonic analysis on Euclidean spaces (42-XX) 3 Abstract harmonic analysis (43-XX) 3 Calculus of variations and optimal control; optimization (49-XX) 2 Statistics (62-XX) 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 1 General and overarching topics; collections (00-XX) 1 Sequences, series, summability (40-XX) 1 Mechanics of deformable solids (74-XX) 1 Operations research, mathematical programming (90-XX) 1 Biology and other natural sciences (92-XX) 1 Mathematics education (97-XX)\n\n### Wikidata Timeline\n\nThe data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata."
]
| [
null,
"https://zbmath.org/static/feed-icon-14x14.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5836077,"math_prob":0.7562529,"size":34921,"snap":"2022-40-2023-06","text_gpt3_token_len":11044,"char_repetition_ratio":0.20852306,"word_repetition_ratio":0.5131034,"special_character_ratio":0.2956101,"punctuation_ratio":0.18743502,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9555474,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-24T17:00:47Z\",\"WARC-Record-ID\":\"<urn:uuid:9482df51-3f55-47a6-a98d-49f717d47468>\",\"Content-Length\":\"401352\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f9ca4624-e419-4d8f-ae9e-69bb9c2e976d>\",\"WARC-Concurrent-To\":\"<urn:uuid:14f9dfa5-deda-4a24-9fc1-772cfba0f7bd>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/authors/?q=ai:zariski.oscar\",\"WARC-Payload-Digest\":\"sha1:4P2XQR6GPQP76NTOAOZZWWLWONSO7QUS\",\"WARC-Block-Digest\":\"sha1:AHXUOFZ72PPZ22P3KDWQNND5RHBYRZ45\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030331677.90_warc_CC-MAIN-20220924151538-20220924181538-00494.warc.gz\"}"} |
https://www.magiwise.com/addition-and-subtraction/ | [
"Select Page",
null,
"## Plus and Minus app",
null,
"## Learn the fundamental of addition and subtraction with numbers.\n\nThe app includes excellent instructions alternated with 59 interactive exercises to practice addition and subtraction with numbers up to 20.\nOur “Plus & Minus”-app is ideal to support any maths method, in which you start adding and subtracting numbers.\n\nTopics which are covered in this app, are:\n\n• The line with numbers.\n• Addition and subtraction to the number 5.\n• Addition and subtraction with 5, 10, 15 and 20.\n• Addition and subtraction to the number 20.\n• To split numbers.\n• Calculating with a bus and train.\n• Adding and subtracting with dozens.",
null,
"### The line with numbers\n\nA number line is a line that displays all the integers in order from low to high. First, you practise with up to 5 then finally up to 20. To understand the concept of the number line, you first get short exercises, where you have to add each time the correct number omitted on the line.",
null,
"### To split numbers\n\nYou learn to divide numbers with short exercises. You have to break the number into two other values, which together forms the first number. In this way, you will learn the concept of adding and subtracting of numbers.",
null,
"### Splitting by using a train as an example\n\nAs soon as you understand the concept of dividing numbers, we will count with numbers above five. You will work with a train as an example, where you have to place cubes on the wagon. In this way, you will learn that 4 + 2 is actually 5 + 1.",
null,
"### Practice addition and subtraction with passengers at the bus stop\n\nAfter the train, you get the sums with buses. For example: ‘ A bus drives away with twenty passengers. One passenger leaves the bus at the next stop. How many passengers are on the bus? ‘",
null,
""
]
| [
null,
"https://www.magiwise.com/wp-content/uploads/2018/12/AppIconE006-nl.svg",
null,
"https://i0.wp.com/www.magiwise.com/wp-content/uploads/2019/05/E006-P-en-07.png",
null,
"https://i0.wp.com/www.magiwise.com/wp-content/uploads/2019/05/E006-en-01.png",
null,
"https://i0.wp.com/www.magiwise.com/wp-content/uploads/2019/05/E006-en-02.png",
null,
"https://i0.wp.com/www.magiwise.com/wp-content/uploads/2019/05/E006-en-04.png",
null,
"https://i0.wp.com/www.magiwise.com/wp-content/uploads/2019/05/E006-04-nl.png",
null,
"https://i0.wp.com/www.magiwise.com/wp-content/uploads/2018/11/logo.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9361361,"math_prob":0.8856221,"size":1729,"snap":"2023-40-2023-50","text_gpt3_token_len":384,"char_repetition_ratio":0.17855072,"word_repetition_ratio":0.032051284,"special_character_ratio":0.22498554,"punctuation_ratio":0.110787176,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99600554,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,5,null,2,null,2,null,2,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T17:51:48Z\",\"WARC-Record-ID\":\"<urn:uuid:89aa2be8-e8e9-4a21-8038-263b92734309>\",\"Content-Length\":\"230847\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fa819163-f2b5-4ff7-9cbc-1016083d35be>\",\"WARC-Concurrent-To\":\"<urn:uuid:8a36bd44-a044-4d1a-b8ac-09bc8fb5a3f4>\",\"WARC-IP-Address\":\"141.138.168.124\",\"WARC-Target-URI\":\"https://www.magiwise.com/addition-and-subtraction/\",\"WARC-Payload-Digest\":\"sha1:7IDWFIO4FOK4OW46HNEDY3A3MYIDUPA2\",\"WARC-Block-Digest\":\"sha1:OKSOTMKYN42ZC3FMTQ55E4R6BRRRW4ZE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510520.98_warc_CC-MAIN-20230929154432-20230929184432-00191.warc.gz\"}"} |
https://www.onlinemathlearning.com/percentage-p5.html | [
"",
null,
"# Percentage\n\nVideos, worksheets, games and activities to help students learn about percentage in Singapore Math.\nIntroduction to Percentage\nFind out what a percentage is using examples.\nPercentages and Fractions\nHow are percentages and fractions related? How to convert percentages to fractions and fractions to percentages?\n\nPercentage and Decimal\nDetailed example to explain the relation between percentage and decimal, and converting percentage to decimal and decimal to percentage.\nPercentage (1): Practice Exercise\nConversion of fractions to percentages\n\nPercentage (1): More Practical Exercise Question 3\nExample: 65% of the children prefer cycling to walking. Two-fifths of those who prefer cycling are girls. What percentage of the children are girls who prefer walking to cycling?\n\nTry the free Mathway calculator and problem solver below to practice various math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations.",
null,
"",
null,
""
]
| [
null,
"https://www.onlinemathlearning.com/objects/default_image.gif",
null,
"https://www.onlinemathlearning.com/objects/default_image.gif",
null,
"https://www.onlinemathlearning.com/objects/default_image.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8631142,"math_prob":0.8566696,"size":985,"snap":"2021-04-2021-17","text_gpt3_token_len":184,"char_repetition_ratio":0.19877675,"word_repetition_ratio":0.0,"special_character_ratio":0.17563452,"punctuation_ratio":0.103658535,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99241376,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-25T05:04:24Z\",\"WARC-Record-ID\":\"<urn:uuid:51cd146f-3660-4fa9-9733-582a4d4111ba>\",\"Content-Length\":\"46671\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2fd69c9f-e24b-4194-a27f-0de506ee6711>\",\"WARC-Concurrent-To\":\"<urn:uuid:83dfa1dd-3f84-49e4-8dda-a189197f023b>\",\"WARC-IP-Address\":\"173.247.219.45\",\"WARC-Target-URI\":\"https://www.onlinemathlearning.com/percentage-p5.html\",\"WARC-Payload-Digest\":\"sha1:3KJ4KK634KWUF6MTQ5KUBSVHE77DFLPE\",\"WARC-Block-Digest\":\"sha1:ZTYSOLNJGPNBLECNPXWDKEGTSN4NAZFQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703564029.59_warc_CC-MAIN-20210125030118-20210125060118-00605.warc.gz\"}"} |
https://www.getsetcoupon.com/npv-discount-rate-table/ | [
"Filter Type:\nFilter Time:\n\n## PRESENT VALUE TABLE\n\nActived: Thursday Feb 13, 2020\n\n› See more: Promo codes\n\n## Leases Discount rates - KPMG\n\nActived: Friday Feb 14, 2020\n\n› See more: Discount codes\n\nOffer Details: The discount rate affects the amount of the lessee’s lease liabilities – and . a host of key financial ratios. The new standard brings forward definitions of discount rates from the current leases standard. But applying these old definitions in the new world of on-balance sheet lease accounting will be tough, especially for lessees. They now need to . determine discount rates for most See more ...\n\n## NPV calculation - Illinois Institute of Technology\n\nActived: Saturday Feb 15, 2020\n\n› See more: Discount codes\n\nOffer Details: NPV Calculation – basic concept PV(Present Value): PV is the current worth of a future sum of money or stream of cash flows given a specified rate of return. Future cash flows are discounted at the discount rate, and the higher the discount rate, the lower the present value of the future cash flows. See more ...\n\n## Difference Between NPV and IRR (with Comparison Chart\n\nActived: Thursday Feb 13, 2020\n\n› See more: Discount codes\n\nOffer Details: The aggregate of all present value of the cash flows of an asset, immaterial of positive or negative is known as Net Present Value. Internal Rate of Return is the discount rate at which NPV = 0. The calculation of NPV is made in absolute terms as compared to IRR which is computed in percentage terms. See more ...\n\n## NPV Calculator - calculate Net Present Value\n\nActived: Wednesday Feb 12, 2020\n\n› See more: Discount codes\n\nOffer Details: If you wonder how to calculate the Net Present Value (NPV) by yourself or using an Excel spreadsheet, all you need is the formula: where r is the discount rate and t is the number of cash flow periods, C 0 is the initial investment while C t is the return during period t. See more ...\n\n## Business Smarts - Sample Module for Corporate Financial\n\nActived: Saturday Feb 15, 2020\n\n› See more: Discount codes\n\nOffer Details: 2) A project with a 3 year life and a cost of $28,000 generates revenues of$8,000 in year 1, $12,000 in year 2, and$17,000 in year 3. If the discount rate is 4%, what is the NPV of the project? See more ...\n\n## Discount Factor Calculator - miniwebtool.com\n\nActived: Wednesday Feb 12, 2020\n\n› See more: Discount codes\n\nOffer Details: Discount Rate: % Number of Compounding Periods: About Discount Factor Calculator . The Discount Factor Calculator is used to calculate the discount factor, which is the factor by which a future cash flow must be multiplied in order to obtain the present value. Discount Factor Calculation Formula. The discount factor is calculated in the following way, where P(T) is the discount factor, r the See more ...\n\n## Valuing Pharmaceutical Assets: When to Use NPV vs rNPV\n\nActived: Friday Feb 14, 2020\n\n› See more: Discount codes\n\nOffer Details: The NPV approach requires the use of different discount rates in an attempt to approximate the evolving probability of technical and regulatory success. Each new NPV calculation and discount rate can only provide insight about the net present value and risk at a single point in time. For example, the NPV calculation with a 33.4% discount rate See more ...\n\n## Calculate the Net Present Value - NPV | PrepLounge.com\n\nActived: Friday Feb 14, 2020\n\n› See more: Discount codes\n\nOffer Details: By increasing the discount rate, the NPV of future earnings will shrink. Discount rates for quite secure cash-streams vary between 1% and 3%, but for most companies, you use a discount rate between 4% - 10% and for a speculative start-up investment, the applied interest rate could reach up to 40%. See more ...\n\n## Mid Period Discounting with the NPV Function\n\nActived: Saturday Feb 15, 2020\n\n› See more: Discount codes\n\nOffer Details: taking the present value of each cash flow then adding them up. If you use the NPV including the initial cash flow you will understate the true NPV. Getting the IRR for the half year assumptions is a little trickier. The easiest way is to use the NPV formula above with the half year adjustment, make the discount rate (.1 in example) a variable See more ...\n\n## NPV (net present value) - Valuation - Moneyterms\n\nActived: Friday Feb 14, 2020\n\n› See more: Promo codes\n\nOffer Details: A net present value (NPV) includes all cash flows including initial cash flows such as the cost of purchasing an asset, whereas a present value does not. The simple present value is useful where the negative cash flow is an initial one-off, as when buying a security (see DCF valuation for more detail) See more ...\n\n## How to Evaluate Two Projects by Evaluating the Net Present\n\nActived: Saturday Feb 15, 2020\n\n› See more: Discount codes\n\nOffer Details: Does the Net Present Value of Future Cash Flows Increase or Decrease as the Discount Rate Increases? Share on Facebook The net present value method has become one of the most popular tools for evaluating capital projects because it reduces each project to a single figure: the total estimated value of the project, expressed in today's dollars. See more ...\n\n## NPV Profile | Excel with Excel Master\n\nActived: Wednesday Feb 12, 2020\n\n› See more: Promo codes\n\nOffer Details: An NPV Profile or Net Present Value Profile is a graph that looks like this:. The horizontal axis shows various values of r or the cost of capital and the vertical axis shows the Net Present Values (NPV) at those values of r. The point at which the line or curve crosses the horizontal axis is the estimate of the Internal Rate of Return or IRR.. To prepare an NPV Profile we need to have set up See more ...\n\n## A Refresher on Net Present Value\n\nActived: Tuesday Feb 11, 2020\n\n› See more: Promo codes\n\nOffer Details: “Net present value is the present value of the cash flows at the required rate of return of your project compared to your initial investment,” says Knight. In practical terms, it’s a method See more ...\n\n## Calculate NPV with a Series of Future Cash Flows - dummies\n\nActived: Friday Feb 14, 2020\n\n› See more: Promo codes\n\nOffer Details: Compute the net present value of a series of annual net cash flows. To determine the present value of these cash flows, use time value of money computations with the established interest rate to convert each year’s net cash flow from its future value back to its present value. Then add these present values together. See more ...\n\n## Excel formula: NPV formula for net present value | Exceljet\n\nActived: Saturday Feb 15, 2020\n\n› See more: Promo codes\n\nOffer Details: How this formula works. Net Present Value (NPV) is the present value of expected future cash flows minus the initial cost of investment. The NPV function in Excel only calculates the present value of uneven cashflows, so the initial cost must be handled explicitly. See more ...\n\n## Net Present Value Calculator\n\nActived: Saturday Feb 15, 2020\n\n› See more: Discount codes\n\nOffer Details: Calculator Use. Calculate the net present value (NPV) of a series of future cash flows.More specifically, you can calculate the present value of uneven cash flows (or even cash flows). See Present Value Cash Flows Calculator for related formulas and calculations.. Interest Rate (discount rate per period) This is your expected rate of return on the cash flows for the length of one period. See more ...\n\n## Let's Talk About Net Present Value and - Bloomberg.com\n\nActived: Friday Feb 14, 2020\n\n› See more: Discount codes\n\nOffer Details: hyperbolic discounting,” the idea being that the discount rate in your personal present-value calculation might start out low but rises sharply after a year or two before settling down again in See more ...\n\n## Appendix: Present Value Tables - GitHub Pages\n\nActived: Thursday Feb 13, 2020\n\n› See more: Promo codes\n\n## NPV Profile | Definition | Example\n\nActived: Friday Feb 14, 2020\n\n› See more: Discount codes\n\nOffer Details: NPV profile of a project or investment is a graph of the project’s net present value corresponding to different values of discount rates. The NPV values are plotted on the Y-axis and the WACC is plotted on the X-axis. The NPV profile shows how NPV changes in response to changing cost of capital. See more ..."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8926355,"math_prob":0.89759004,"size":24110,"snap":"2020-10-2020-16","text_gpt3_token_len":5917,"char_repetition_ratio":0.2325977,"word_repetition_ratio":0.15635179,"special_character_ratio":0.24807134,"punctuation_ratio":0.16087408,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96016127,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-18T08:14:32Z\",\"WARC-Record-ID\":\"<urn:uuid:98d5918e-6aff-46e1-bb6d-85ce5109bf16>\",\"Content-Length\":\"85294\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2fa2857b-8e38-437c-82da-8e5207d1a199>\",\"WARC-Concurrent-To\":\"<urn:uuid:21a88506-8253-4e21-83d5-06b54af5d302>\",\"WARC-IP-Address\":\"104.27.143.51\",\"WARC-Target-URI\":\"https://www.getsetcoupon.com/npv-discount-rate-table/\",\"WARC-Payload-Digest\":\"sha1:22PHGVA42OI3GHUF7M2SF6CRWFGLJ4M5\",\"WARC-Block-Digest\":\"sha1:FJA3QKQMM7ZUPK5Q2QT55Z3ZQPTTITU5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875143635.54_warc_CC-MAIN-20200218055414-20200218085414-00078.warc.gz\"}"} |
https://nanohub.org/courses/SOE | [
"## nanoHUB-U: The Science, Art, and Practice of Analyzing Experimental Data and Designing Experiments\n\nBrought to you by:\n\n### nanoHUB-U\n\nLecture 1: Collecting and Plotting Data\n\n• Origin of data, Field Acceleration vs. Statistical Inference\n• Nonparametric information\n• Preparing data for projection: Hazen formula\n• Preparing data for projection: Kaplan formula\n\nLecture 2: Physical vs Empirical Distribution\n\n• Physical vs. empirical distribution\n• Properties of classical distribution function\n• Moment-based fitting of data\n\nLecture 3: Model Selection/Goodness of Fit\n\n• The problem of matching data with theoretical distribution\n• Parameter extractions: Moments, linear regression, maximum likelihood\n• Goodness of fit: Residual, Pearson, Cox, Akika\n\nLecture 4: Scaling Theory of Design of Experiments\n\n• Buckingham PI Theorem\n• An Illustrative Example\n• Recall the scaling theory of HCI, NBTI, and TDDB\n\nLecture 5: Design of Experiments\n\n• Single factor and full factorial method\n• Orthogonal vector analysis: Taguchi/Fisher model\n• Correlation in dependent parameters"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.69992816,"math_prob":0.6880696,"size":1065,"snap":"2022-40-2023-06","text_gpt3_token_len":250,"char_repetition_ratio":0.11875589,"word_repetition_ratio":0.013422819,"special_character_ratio":0.18967137,"punctuation_ratio":0.14457831,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96508515,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-28T03:16:33Z\",\"WARC-Record-ID\":\"<urn:uuid:08e63c0c-96fe-47c5-a189-eaf294bd6e96>\",\"Content-Length\":\"37327\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:27d1e600-931e-4a15-a897-6a28246fb94c>\",\"WARC-Concurrent-To\":\"<urn:uuid:7f45abb0-0759-48dc-bb3a-5e58138f8e81>\",\"WARC-IP-Address\":\"132.249.202.76\",\"WARC-Target-URI\":\"https://nanohub.org/courses/SOE\",\"WARC-Payload-Digest\":\"sha1:YT277TAOMQDNDOYOAR3BZS2LPVLYRCX5\",\"WARC-Block-Digest\":\"sha1:HBOAQWB5F263YFKTCNWVJ5JC7PLNSQX7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335059.43_warc_CC-MAIN-20220928020513-20220928050513-00650.warc.gz\"}"} |
https://chem.libretexts.org/Ancillary_Materials/Exemplars_and_Case_Studies/Case_Studies/Heat_and_Chemical_Resistant_Silicone_Rubber/Silicones_6._(CH3)2SiCl2/Silicones_6a._Thermodynamics_and_the_Preparation_of_Silicon | [
"# Silicones 6a. Thermodynamics and the Preparation of Silicon\n\n•",
null,
"• Contributed by ChemCases\n• Laurence Peterson (Project PI) at Kennesaw State University\n\n## Heat and Chemical Resistant Silicone Rubber\n\nEugene Rochow knew that the elemental silicon was not easy to make in a pure state. He needed pure silicon to make silicones. We need even purer silicon for computer chips and other devices. Consider thermodynamics. The reaction of sand with carbon is the source of silicon. In a high temperature electric arc, we observe:\n\n$\\ce{2C +SiO2 -> 2CO + Si}$\n\nIntuitively this scheme seems rather unattractive - two of nature's seemingly most stable materials in a chemical reaction to reduce sand and oxidize carbon. Intuitively this scheme seems attractive, too. We learned in thermodynamics that the universe seems to want to go to a disordered state of what we called high entropy. Gases, like CO, represented ideal products of chemical reactions. Gases disperse, gases can have a huge number of configurations, gases are highly disordered, gases have high entropy.\n\nBut we learned in thermodynamics that if a process had high positive enthalpy - was not spontaneous at room temperature, we could only make the process go forward if the entropy was positive and we raised the temperature. It is the Free Energy, G, that must be negative if we want a process to proceed.\n\nHere is the thermodynamic data on the four reactants and products, along with data for CO2.\n\nC SiO2 CO Si CO2\nStandard Enthalpy kJ/mol 0 -911 -110 0 -394\nStandard Free Energy kJ/mol 0 -856 -137 0 -394\nStandard Entropy J/Kmol 6 42 198 19 214\n\nFor 298 degrees Kelvin:\n\n$\\ce{2C +SiO2 -> 2CO + Si}$\n\nΔH0 = 2(-110) - (-911) = 691kJ/mol - highly unfavored\n\nΔS0 = [2(198) + (19)] - (6 + 42) = 367 J/mol.K = 0.367 kJ/mol.K - highly favored\n\nbut ΔG0 is what counts for spontaneity.\n\nΔG0 = 2(-137) - (-856) = 586kJ/mol highly unfavored at 25 degrees Celsius.\n\nWell, what temperature would be required to, let's say, reach equilibrium in this system? At equilibrium, G = 0. So let's calculate a theoretical temperature for an equilibrium :\n\nΔG = ΔH - TΔS\n\n0 = 691 - T(.367)\n\nT = 1850 degrees Kelvin\n\nThe electric arc furnace is what's required for this process. And the process will only go to equilibrium. Thus the silicon is certain to be contaminated with the two other solid materials, sand and carbon. Additional processing is required to make pure silicon metal - processes that take advantage of the low intermolecular forces between nonpolar SiCl4 molecules:\n\n$\\ce{Si + 2Cl2 -> SiCl4}$\n\nSilicon tetrachloride is a liquid with a unique boiling point. It can be purified easily by boiling it from the solid silicon and gaseous chlorine. The pure silicon tetrachloride is then converted back to now pure silicon for electronic devices.\n\nExercise $$\\PageIndex{1}$$\n\nWe might expect that an alternate reaction for the preparation of silicon would be:\n\n$\\ce{C + SiO2 -> Si + CO2}$\n\nFrom the thermodynamic data above, determine ΔG for this potential reaction. Compare your data with the CO producing reaction.\n\n1. At what temperature might this reaction be expected to occur?\n2. Which process is favored from the ΔG calculations?\n3. By comparing the temperatures at equilibrium of the CO and CO2 producing reactions, which reaction do you predict DOES occur more readily.\n4. Which process gives the higher system entropy?\n\n5. How does the entropy affect the preferred chemical pathway?"
]
| [
null,
"https://chem.libretexts.org/@api/deki/files/126375/Peterson.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8869693,"math_prob":0.9440636,"size":3371,"snap":"2021-43-2021-49","text_gpt3_token_len":848,"char_repetition_ratio":0.114939116,"word_repetition_ratio":0.0070921984,"special_character_ratio":0.2622367,"punctuation_ratio":0.10243902,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.970167,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-18T22:38:42Z\",\"WARC-Record-ID\":\"<urn:uuid:cb4f0e7d-f5d0-4e85-bcfe-272ad3196a51>\",\"Content-Length\":\"100306\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:66f63486-c294-4dc7-a895-5fb70f81e46e>\",\"WARC-Concurrent-To\":\"<urn:uuid:84f285e0-e779-43d0-87f3-344dfed3a195>\",\"WARC-IP-Address\":\"99.86.230.108\",\"WARC-Target-URI\":\"https://chem.libretexts.org/Ancillary_Materials/Exemplars_and_Case_Studies/Case_Studies/Heat_and_Chemical_Resistant_Silicone_Rubber/Silicones_6._(CH3)2SiCl2/Silicones_6a._Thermodynamics_and_the_Preparation_of_Silicon\",\"WARC-Payload-Digest\":\"sha1:GHPYRCZMNASVLFKYE52SVFOUSN4J7V3W\",\"WARC-Block-Digest\":\"sha1:3IM3L7J7KU2DTUVHFEGABFJBPR4LXIIB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585215.14_warc_CC-MAIN-20211018221501-20211019011501-00704.warc.gz\"}"} |
https://kupdf.net/download/instruction-templates-of-microprocessor-8086_597a3717dc0d608303043371_pdf | [
"# Instruction Templates of Microprocessor 8086\n\n#### Description\n\n8086 Instruction Template Need for Instruction Template 8085 has 246 opcodes. The opcodes can be printed on an A4 size paper. 8086 has about 13000 opcodes. A book of about 60 pages is needed for printing the opcodes. Concept of Template In 8085, MOV r1, r2 (ex. MOV A, B) has the following template. 0\n\n1\n\n3-bit r1 code\n\n3-bit Register code 000 001 010 011 100 101 110 111\n\n3-bit r2 code\n\nRegister B C D E H L M A\n\nEx. 1: Code for\n\nMOV A, 01 11 1 7\n\nB is 000 = 78H 8\n\nEx.2: Code for\n\nMOV M, D is 01 11 0 010 = 72H 7 2\n\nUsing the template for MOV r1, r2 we can generate opcodes of 26 = 64 opcodes. 8086 Template for data transfer between REG and R/M 1 0 0 0\n\n1\n\n0 D\n\nW\n\nMOD 2 bits\n\nREG 3 bits\n\nR/M 3 bits\n\nREG = A register of 8086 (8-bit or 16-bits) (except Segment registers, IP, and Flags registers) Thus REG = AL/ BL/ CL/ DL/ AH/ BH/ CH/ DH/ AX/ BX/ CX/ DX/ SI/ DI/ BP/ SP R/ M = Register (as defined above) or Memory contents (8-bits or 16-bits) W = 1 means Word operation W = 0 means Byte operation\n\nD = 1 means REG is Destination register D = 0 means REG is source register MOD = 00 means R/M specifies Memory with no displacement MOD = 01 means R/M specifies Memory with 8-bit displacement MOD = 10 means R/M specifies Memory with 16-bit displacement MOD = 11 means R/M specifies a Register 3-bit Register code 000 001 010 011 100 101 110 111\n\nRegister name When W = 1 When W = 0 AX AL CX CL DX DL BX BL SP AH BP CH SI DH DI BH\n\nAid to remember:\n\nALl Children Drink Bournvita (AL, CL, DL, BL) SPecial Beverages SIamese DrInk (SP, BP, SI, DI)\n\nCase of MOD = 11 Example: Code for MOV AX, BX treated as ‘Move from BX to destination register AX’\n\n1 0 0 0\n\n1\n\n0\n\nD 1\n\n8\n\nW 1 Word operation\n\nMOD 11\n\nB\n\nREG 00 0 AX is destination C\n\nR/M 011 BX\n\n= 8B C3H\n\n3\n\nExample: Alternative code for MOV AX, BX treating it as ‘Move from source register BX to register AX’\n\n1 0 0 0\n\n8\n\n1\n\nD 0 0\n\nW 1 Word operation 9\n\nMOD 11\n\nD\n\nREG 01 1 BX is source\n\nR/M 000 AX\n\n= 89 D8H\n\n8\n\nThere are 2 possible opcodes for MOV AX, BX as we can choose either AX or BX as REG.\n\nExample: Code for MOV AL, BH treated as ‘Move from BL to destination register AL’\n\n1 0 0 0\n\n1\n\nD 0 1\n\n8\n\nW 0 Byte operation\n\nMOD 11\n\nA\n\nREG 00 0 AL is destination\n\nR/M 111 BH\n\nC\n\n= 8A C7H\n\n7\n\nExample: Alternative code for MOV AL, BH treating it as ‘Move from source register BH to register AL’\n\n1 0 0 0\n\n8\n\n1\n\nD 0 0\n\nW 0 Byte operation\n\nMOD 11\n\n8\n\nREG 11 1 BH is source\n\nF\n\nR/M 000 AL\n\n= 88 F8H\n\n8\n\nThere are 2 possible opcodes for MOV AL, BH as we can choose either AL or BH as REG. Case of MOD = 00, 01 or 10 R/M\n\n000 001 010 011 100 101 110 111\n\nMOD = 00 No Displacement [SI+BX] [DI+BX] [SI+BP] [DI+BP] [SI] [DI] [BP] Direct Addressing [BX]\n\nMOD = 01 8-bit signed displacement d8 [SI+BX+d8] [DI+BX+d8] [SI+BP+d8] [DI+BP+d8] [SI+d8] [DI+d8] [BP+d8]\n\nMOD = 10 16-bit signed displacement d16 [SI+BX+d16] [DI+BX+d16] [SI+BP+d16] [DI+BP+d16] [SI+d16] [DI+d16] [BP+d16]\n\n[BX+d8]\n\n[BX+d16]\n\nThe table shows 24 memory addressing modes i.e. 24 different ways of accessing data stored in memory. Aid to remember: SubInspector DIxit is a BoXer ( [SI+BX] and [DI]+[BX] ) SubInspector DIxit knows to control BP ( [SI+BP] and [DI]+[BP] ) He says’ SImple DIet DIRECTs a BoXer' ( [SI], [DI], Direct addressing, [BX] )\n\nEx: Code for MOV CL, [SI]\n\n1 0 0 0\n\n1\n\nD 1\n\n0\n\n8\n\nW 0 Byte operation\n\nMOD REG R/M 00 00 1 100 No CL is [SI] Disp. destination 0 C\n\nA\n\n= 8A 0CH\n\nNote that there is a unique opcode for MOV CL, [SI] as CL only can be REG. Ex: Code for MOV 46H[BP], DX D 1 0 0 0 1 0 0\n\n8\n\nW 1 Word operation\n\nMOD REG R/M d8 01 01 0 110 46H 8-bit DX is [BP+d8] Disp. source 5 6\n\n9\n\n= 89 56 46H\n\nNote that there is a unique opcode for MOV 46H[BP], DX as DX only can be REG. Ex: Code for MOV 0F246H[BP], DX\n\n1\n\n0\n\n0\n\n0\n\n1\n\nD 0 0\n\n8\n\nW 1 Word operation 9\n\nMOD 10 16-bit Disp. 9\n\nREG 01 0 DX is source\n\nR/M 110 [BP+d16]\n\nd16 F2 46H\n\n6\n\n= 89 96 F2 46H Stored as 89 96 46 F2H in Little Endian\n\nNote that there is a unique opcode for MOV 0F246H[BP], DX as DX only can be REG.\n\nEx: Code for MOV [BP], DX\n\n1 0 0 0 1 0\n\n8\n\nD 0\n\nW 1 Word operation 9\n\nMOD REG R/M d8 01 01 0 110 00H 8-bit DX is [BP+d8] Disp. source 5 6\n\nNote that MOV [BP], DX is treated as MOV 00H[BP], DX before coding.\n\n= 89 56 00H\n\nEx: Code for MOV BX, DS:1234H\n\n1 0 0 0 1 0\n\nD\n\nW\n\nMOD\n\nREG\n\nR/M\n\n1\n\n1\n\n00\n\n01 1\n\n110\n\nWord operation 8\n\nB\n\nNo BX is Disp. Dest. 1"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7320869,"math_prob":0.96504664,"size":4527,"snap":"2023-14-2023-23","text_gpt3_token_len":1759,"char_repetition_ratio":0.124474905,"word_repetition_ratio":0.16109422,"special_character_ratio":0.40048596,"punctuation_ratio":0.07637017,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96987325,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-23T02:43:24Z\",\"WARC-Record-ID\":\"<urn:uuid:568136aa-b927-4182-8a75-4f6e7ee994c1>\",\"Content-Length\":\"29386\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:72a7322f-2a71-4f55-9684-19836c15a469>\",\"WARC-Concurrent-To\":\"<urn:uuid:a0ac8621-2274-45c5-afee-9be5bb626638>\",\"WARC-IP-Address\":\"172.67.149.121\",\"WARC-Target-URI\":\"https://kupdf.net/download/instruction-templates-of-microprocessor-8086_597a3717dc0d608303043371_pdf\",\"WARC-Payload-Digest\":\"sha1:L7RQGCNUW3H3QC2AZWYFDEMKKBPYJ7XQ\",\"WARC-Block-Digest\":\"sha1:RA7V376JYPPAHIMHKCEXZWCL4SIBUYIC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296944606.5_warc_CC-MAIN-20230323003026-20230323033026-00098.warc.gz\"}"} |
http://medicalopensource.net/mcs/wifi-trh-arduino.html | [
"Lawrence Technological University\nCollege of Arts and Science\nDepartment of Mathematics and Computer Science\n\n## WiFi and the Arduino\n\n### by John M. Miller M.D.\n\nThe experiment on this page may not be a good \"starter\" Arduino Uno project. There were two main problem areas:\n\n1. Many other serial devices are easy to use with the Arduino Wire library. The Sensirion devices are non standard and have no working Arduino style library that I could find. The interface I made was patterned after the Sensirion data sheets and non-Arduino sample code.\n2. Using the WiFi Web server with the small Uno variable space (which is also shared with the stack) worked better when I stored most of the HTML strings in the flash memory, program space. This brings up the issue of C pointers, which many beginners would prefer to avoid.\n\nThis page is an experiment in measuring temperature and humidity using a digital sensor with the Arduino platform. The results are monitored remotely on a home WiFi network. The parts:\n\n1. Arduino Uno\n2. Sensirion's SHT10 Digital Humidity Sensor assembled as Emartee's Digital Temperature & Humidity Sensor module, Product ID 42049\n3. Emartee's TLG10UA03 Wifi module and Arduino shield kit, Product ID 42121\n4. Disk drive power cord from a defunct PC power supply.\n\nAssembled, the remote sensor looks like this",
null,
"Before using our hardware setup, we have to intialize the flash memory on the WiFi module so it will connect to the password protected network we have picked for our experiment. In general the Emartee document: WiFi_test_v2.pdf (available on the Emartee site through their link to Google Docs) was followed. Except after using the Arduino IDE Serial Monitor to send the escape sequence of \"+++\" (in order to force the WiFi module to exit \"Auto Work Mode\" and accept AT configuration commands) we continue configuration still using the Serial Monitor because there was no Window's machine handy to run the supplied configuration program. Here is a session, verifying all the settings that were changed manually with AT commands, using\n\n• The Arduino IDE Serial Monitor,\n• The USB-TTL module, jumper set to use 3.3 volts,\n• The supplied 4-wire cable,\n• The USB-TTL Driver PL-2303 Mac OS X Universal Binary Driver v1.4.0 (DMG file format), Prolific Edition For Mac OS X 10.7 Lion and 10.6 Snow Leopard (32-bit and 64-bit kernel) and\n• The AT commands detailed in pages 75 to 99 of the Emartee document: User Manual-03_ENGLISH.pdf (available on the Emartee site through their link to Google Docs.)\n```(Set for \"no line ending\")\n(Sent +++ with no line ending) to exit into AT command mode.\n+OK\n(Set for \"cr line ending\")\n(Sent at+ with cr line ending) to test command mode.\n+OK\n(Sent at+e with cr line ending) to begin echo of commands sent.\n+OK\n\nat+ssid\n+OK=\"2WIRE148\"\n\nat+encry\n+OK=3 3 seems faster than 6 at joining.\n\nat+key\n+OK=1,0,\"0123456789\"\n\nat+wscan\n+OK=c0830a255691,0,1,1,\"2WIRE596\",86\n00304409384a,0,4,1,\"Greenview Observatory3\",58\n00235140dba1,0,5,1,\"2WIRE674\",82\n00173f9288c1,0,10,0,\"Belkin_G_Plus_MIMO_180170\",84\n640f28108449,0,10,1,\"2WIRE148\",29\n\nat+qver\n+OK=H0.00.00.0000,F1.02.01@ 17:23:31 Dec 16 2010\n\nat+wjoin\n+OK=640f28108449,0,10,1,\"2WIRE148\",37\n\nat+lkstt\n+OK=1,\"192.168.1.66\",\"255.255.255.0\",\"192.168.1.254\",\"192.168.1.254\"\n\nat+nip\n+OK=0\n\nat+uart\n+OK=115200,0,0,0\n\nat+webs\n+OK=1,80\n\nfinally: at+pmtf records changes in flash memory.\nat+z resets module so will come back up in auto work mode.\n```\n\nThe rest of the initialization was done via the WiFi interface exactly the same as in the Emartee module test procedure -- including setting the port to 8090.\n\nTesting the server code I found that without using a `Content-Length:` header the browser would wait about 5 minutes for the reply to end under HTTP 1.1. Here is the code running on the Arduino: (Uploading using the Arduino IDE produces an error unless the jumper plugs that connect the WiFi module to the Arduino Tx and Rx pins are removed temporarily.)\n\n```// wifi_trh\n// March 8, 2012\n// Libraries:\n#include <avr/pgmspace.h>\n#include <PString.h>\n// Notes:\n/* Uploading this program with the Arduino IDE, when the Wifi Shield\nis attached to the Arduino, requires removing the 2 jumper plugs\non the shield that connect the Wifi module's and Arduino's Tx and Rx\npins.\n*/\n/* Temperature and Humidity are measured with digital results using\nEmartee's Digital Temperature & Humidity Sensor module, Product ID: 42049\n*/\n/* Read temperature and humidity values from an SHT10 sensor.\nCoefficients for adjusting for temperature and non-linearity\nare from Sensirion SHT1x datasheet Version 4.3 May 2010.\nUsed coefficients for 3.5 volts as nearest to VDD of 3.3 volts.\n*/\n/* SHT10 module connections:\nFunction SHT10 SHT Module Wire Arduino `\nGround 1 3 Black Gnd\nData 2 4 Red D10\nClock 3 1 Yellow D11\nVDD +3.3V 4 2 Black/Red +3.3v\n*/\n/* Results are served with Emartee's TLG10UA03 WiFi module which\nwas purchased, with a nice shield that makes the wiring easier\nand a USB converter that makes initialization easier, as a kit\n-- Product ID: 42121\n*/\n/* With a little JavaScript, the time-stamp for the served measurements\nis left to the client Web browser.\n*/\n// #defines for SHT10:\n// Pins:\n#define DATA_PIN 10\n#define CLOCK_PIN 11\n// Commands:\n#define TEMPERATURE 3\n#define HUMIDITY 5\n#define WRITE_STATUS 6\n#define SOFT_RESET 0x1e\n// Delays in microseconds:\n#define PLATEAU 40\n#define HOLD 20\n#define MAX_MEASUREMENT_WAIT 4000\n#define MAX_RETRIES 1\n// Constant strings in flash/program memory:\n\"<title>Temperature and Relative Humidity \"\n\"from Arduino Uno WiFi Server</title>\\n\";\nprog_char html1_sr[] PROGMEM = \"<meta name=\\\"send receive\\\" content=\\\"\";\nprog_char html1_se[] PROGMEM = \"<meta name=\\\"serial errors\\\" content=\\\"\";\nprog_char html1_rth[] PROGMEM = \"<meta name=\\\"raw data t,h\\\" content=\\\"\";\nprog_char html1_ms[] PROGMEM = \"<meta name=\\\"wait time\\\" content=\\\"\";\n\"<h1>Squawk's Weather Station</h1>\\n\";\nprog_char html3[] PROGMEM = \"<h3><script type=\\\"text/javascript\\\">\\n\"\n\"var d = new Date();\\n\"\n\"document.write(d.toString());\\n\"\n\"</script></h3>\\n\";\nprog_char html3_t[] PROGMEM = \"<p>Temperature: \";\nprog_char html3_h[] PROGMEM = \"<p>Relative Humidity: \";\nprog_char html3_na[] PROGMEM = \"<p>Temperature and Humidity not available. \"\nprog_char html4[] PROGMEM = \"</body>\\n</html>\\n\";\n// Other constant strings:\nconst char meta_end[] = \"\\\" />\\n\";\n// Globals:\nunsigned long started;\nunsigned long elapsed;\nint raw_temperature;\nint raw_humidity;\nfloat temp_C;\nfloat temp_F;\nfloat humidity;\nint retries;\nchar buffer;\nchar buffer_sr;\nchar buffer_se;\nchar buffer_rth;\nchar buffer_ms;\nchar buffer_t;\nchar buffer_h;\n// Debugging traces for reporting in META tags:\nPString serial_errors(buffer_se,sizeof(buffer_se));\nPString raw_th(buffer_rth,sizeof(buffer_rth));\nPString millisec(buffer_ms,sizeof(buffer_ms));\n// The actual reply temperature and relative humidity.\nPString t(buffer_t,sizeof(buffer_t));\nPString h(buffer_h,sizeof(buffer_h));\n// Conversion coefficients from SHT1x datasheet Version 4.3 May 2010\nconst float D1 = -39.7; // for 14 Bit @ 3.5V\nconst float D2 = 0.01; // for 14 Bit DEGC\nconst float C1 = -2.0468; // for 12 Bit\nconst float C2 = 0.0367; // for 12 Bit\nconst float C3 = -0.0000015955; // for 12 Bit\nconst float T1 = 0.01; // for 14 Bit\nconst float T2 = 0.00008; // for 14 Bit\n// Subroutines:\nvoid start_transmission()\n{\n// ____ _____\n// CLOCK_PIN ____| |____| |____\n// ______ _______\n// DATA_PIN |__________|\npinMode(DATA_PIN,OUTPUT);\ndigitalWrite(DATA_PIN,HIGH);\ndigitalWrite(CLOCK_PIN,LOW);\ndelayMicroseconds(PLATEAU);\ndigitalWrite(CLOCK_PIN,HIGH);\ndelayMicroseconds(HOLD);\ndigitalWrite(DATA_PIN,LOW);\ndelayMicroseconds(HOLD);\ndigitalWrite(CLOCK_PIN,LOW);\ndelayMicroseconds(PLATEAU);\ndigitalWrite(CLOCK_PIN,HIGH);\ndelayMicroseconds(HOLD);\ndigitalWrite(DATA_PIN,HIGH);\ndelayMicroseconds(HOLD);\ndigitalWrite(CLOCK_PIN,LOW);\ndelayMicroseconds(PLATEAU);\n}\nvoid serial_reset()\n// This reset has not been tested!\n{\npinMode(DATA_PIN,OUTPUT);\ndigitalWrite(DATA_PIN,HIGH);\ndigitalWrite(CLOCK_PIN,LOW);\nfor (int i = 0;i < 9;i++) {\ndelayMicroseconds(HOLD);\ndigitalWrite(CLOCK_PIN,HIGH);\ndelayMicroseconds(HOLD);\ndigitalWrite(CLOCK_PIN,LOW);\n}\nretries++;\n}\nvoid bit_out(int d)\n{\ndigitalWrite(DATA_PIN,d);\ndelayMicroseconds(HOLD);\ndigitalWrite(CLOCK_PIN,HIGH);\ndelayMicroseconds(PLATEAU);\ndigitalWrite(CLOCK_PIN,LOW);\ndelayMicroseconds(HOLD);\ndigitalWrite(DATA_PIN,LOW);\n}\nint bit_in()\n{\ndelayMicroseconds(HOLD);\ndigitalWrite(CLOCK_PIN,HIGH);\ndelayMicroseconds(HOLD);\ndelayMicroseconds(HOLD);\ndigitalWrite(CLOCK_PIN,LOW);\ndelayMicroseconds(HOLD);\nreturn d;\n}\n{\npinMode(DATA_PIN,INPUT); // Assume was in DATA_PIN output mode.\ndigitalWrite(DATA_PIN,HIGH); // Pullup DATA_PIN.\nreturn bit_in();\n}\nvoid send_ack(boolean ack)\n{\npinMode(DATA_PIN,OUTPUT); // Assume was in DATA_PIN input mode.\n// ____ __\n// DATA_PIN |________| Case where ack is true.\n// ____\n// CLOCK_PIN ______| |____\ndelayMicroseconds(PLATEAU);\nbit_out(ack ? LOW : HIGH);\npinMode(DATA_PIN,INPUT); // Restore to DATA_PIN input mode.\ndigitalWrite(DATA_PIN,HIGH); // Restore pullup.\n}\n{\nint b,i;\nfor(b = 0,i = 0x80;i;i >>= 1) { if ( bit_in() == HIGH) b |= i; }\nsend_ack(ack);\nreturn b;\n}\nint send_byte(int b)\n{\nfor (int i = 0x80;i > 0;i >>= 1) { bit_out( (i & b) ? HIGH : LOW); }\nreturn ack;\n}\nint get_measurement(int measurement)\n{\nint m = -1;\nwhile (retries <= MAX_RETRIES) {\nstart_transmission();\nif (send_byte(measurement) == HIGH) {\nserial_errors.print(\"A\");\nserial_errors.print(measurement);\nserial_reset();\ncontinue;\n}\n// Wait during measurement.\ndelayMicroseconds(PLATEAU);\nstarted = millis();\nwhile ((elapsed = millis() - started) < MAX_MEASUREMENT_WAIT) {\nmillisec.print(elapsed);\nmillisec.print(\"|\");\nbreak;\n}\n}\nif (elapsed >= MAX_MEASUREMENT_WAIT) {\nmillisec.print(elapsed);\nserial_errors.print(\"T\");\nserial_errors.print(measurement);\nserial_reset();\ncontinue;\n}\nm = m * 256 + read_byte(false); // Skip CRC by sending HIGH acknowledgment.\nbreak;\n}\nreturn m;\n}\n\nvoid setup()\n{\nstrlen_P(html1_sr) +\nstrlen_P(html1_se) +\nstrlen_P(html1_rth) +\nstrlen_P(html1_ms) +\nstrlen_P(html2) +\nstrlen_P(html3) +\nstrlen_P(html4) +\nstrlen(meta_end) * 4;\nSerial.begin(115200);\npinMode(CLOCK_PIN,OUTPUT);\ndigitalWrite(CLOCK_PIN,LOW);\npinMode(DATA_PIN,INPUT);\ndigitalWrite(DATA_PIN,HIGH);\n}\nvoid loop()\n{\nboolean blankLine = true;\n// Initialize debugging traces and the result:\nserial_errors.begin();\nraw_th.begin();\nmillisec.begin();\nt.begin();\nh.begin();\nretries = 0;\nwhile(1){\nif (Serial.available()) {\nif (c == '\\n' && blankLine) { // End of an HTTP request.\n// Read the current values from the SHT10.\nraw_temperature = get_measurement(TEMPERATURE);\nraw_humidity = get_measurement(HUMIDITY);\nraw_th.print(raw_temperature);\nraw_th.print(\",\");\nraw_th.print(raw_humidity);\n// Report the measurements only if both are good.\nif (raw_temperature >= 0 && raw_humidity >= 0) {\ntemp_C = (raw_temperature * D2) + D1;\ntemp_F = temp_C * 9.0 / 5.0 + 32.0;\nfloat adj_humidity = C1 + C2 * raw_humidity +\nC3 * raw_humidity * raw_humidity;\nhumidity = (temp_C - 25.0 ) * (T1 + T2 * raw_humidity) + adj_humidity;\nt.print(temp_C,1);\nt.print(\" °C, \");\nt.print(temp_F,1);\nt.print(\" °F\\n\");\nh.print(humidity,1);\nh.print(\"%\\n\");\nserial_errors.length() +\nraw_th.length() +\nmillisec.length() +\nstrlen_P(html3_t) + t.length() +\nstrlen_P(html3_h) + h.length();\n} else {\nserial_errors.length() +\nraw_th.length() +\nmillisec.length() +\nstrlen_P(html3_na);\n}\nSerial.print(\"HTTP/1.1 200 OK\\r\\n\");\nSerial.print(\"Content-Type: text/html\\r\\n\");\nSerial.print(\"Content-Length: \");\nSerial.print(\"\\n\\n\");\n// Send body of the reply:\nstrcpy_P(buffer,html1);\nSerial.print(buffer);\nstrcpy_P(buffer,html1_sr);\nSerial.print(buffer);\nSerial.print(meta_end);\nstrcpy_P(buffer,html1_se);\nSerial.print(buffer);\nSerial.print(serial_errors);\nSerial.print(meta_end);\nstrcpy_P(buffer,html1_rth);\nSerial.print(buffer);\nSerial.print(raw_th);\nSerial.print(meta_end);\nstrcpy_P(buffer,html1_ms);\nSerial.print(buffer);\nSerial.print(millisec);\nSerial.print(meta_end);\nstrcpy_P(buffer,html2);\nSerial.print(buffer);\nstrcpy_P(buffer,html3);\nSerial.print(buffer);\nif (raw_temperature >= 0 && raw_humidity >= 0) {\nstrcpy_P(buffer,html3_t);\nSerial.print(buffer);\nSerial.print(t);\nstrcpy_P(buffer,html3_h);\nSerial.print(buffer);\nSerial.print(h);\n} else {\nstrcpy_P(buffer,html3_na);\nSerial.print(buffer);\n}\nstrcpy_P(buffer,html4);\nSerial.print(buffer);\nbreak;\n} else if (c == '\\n') { // && !blankLine\nblankLine = true; // So start a new line.\n} else if (c != '\\r') {\nblankLine = false; // Line is not blank.\n}\n}\n}\n}\n```\n\nAs seen from a client Web Browser:",
null,
"Viewing the source code of that reply:\n\n```<html>\n<title>Temperature and Relative Humidity from Arduino Uno WiFi Server</title>\n<meta name=\"serial errors\" content=\"\" />\n<meta name=\"raw data t,h\" content=\"6255,1101\" />\n<meta name=\"wait time\" content=\"221|63|\" />"
]
| [
null,
"http://medicalopensource.net/mcs/wifi-trh.jpg",
null,
"http://medicalopensource.net/mcs/wifi-reply.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6030546,"math_prob":0.82799727,"size":13873,"snap":"2020-10-2020-16","text_gpt3_token_len":3749,"char_repetition_ratio":0.13620304,"word_repetition_ratio":0.024581006,"special_character_ratio":0.31514454,"punctuation_ratio":0.2110849,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95279473,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-20T01:51:51Z\",\"WARC-Record-ID\":\"<urn:uuid:f4ed9299-6270-4c7b-8a44-c5b560836ddc>\",\"Content-Length\":\"20243\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dcbd6f14-cd9a-40f0-b607-eb6e9ff82967>\",\"WARC-Concurrent-To\":\"<urn:uuid:0188d88f-8578-4353-9fc7-1192e2a25995>\",\"WARC-IP-Address\":\"192.34.56.95\",\"WARC-Target-URI\":\"http://medicalopensource.net/mcs/wifi-trh-arduino.html\",\"WARC-Payload-Digest\":\"sha1:AD4EFRXC2LPKZNLUYXRMJ6EPPIFVABIP\",\"WARC-Block-Digest\":\"sha1:HCFJ7WSHN4UAVLG3QEUTDZXISB7XKUIF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875144498.68_warc_CC-MAIN-20200220005045-20200220035045-00015.warc.gz\"}"} |
https://www.codespeedy.com/model-evaluation-metrics-in-regression-models-with-python/ | [
"# Model Evaluation Metrics in Regression Models with Python\n\nIn this tutorial, we are going to see some evaluation metrics used for evaluating Regression models. Whenever a Machine Learning model is being constructed it should be evaluated such that the efficiency of the model is determined, It helps us to find a good model for our prediction by evaluating the model. In such a note, we are going to see some Evaluation metrics for Regression models like Logistic, Linear regression, and SVC regression.\n\n### Evaluation metrics – Introduction\n\nGenerally, we use a common term called the accuracy to evaluate our model which compares the output predicted by the machine and the original data available. Consider the below formula for accuracy,\n\nAccuracy=(Total no. of correct predictions /Total no. of data used for testing)*100\n\nThis gives the rough idea of evaluation metrics but it is not the correct strategy to evaluate the model. We have some defined metrics especially for Regression models which we will see below.\n\n### Regression Models Evaluation metrics\n\nThe SkLearn package in python provides various models and important tools for machine learning model development. Where it provides some regression model evaluation metrics in the form of functions that are callable from the sklearn package.\n\n• Max_error\n• Mean Absolute Error\n• Mean Squared Error\n• Median Squared Error\n• R Squared\n\nAbove are the available metrics provided from sklearn we will see them in detail with implementation,\n\n1. Max_error\nIt calculates the maximum error present between the original data and predicted data,\nWhere it compares and finds out data that has the maximum difference and produces the output. Consider the code segment below which illustrates the max_error function from the\n\n```from sklearn.metrics import max_error\noriginal_data = [8, 4, 7, 1]\npredicted_data = [4, 2, 7, 1]\nmax_error(original_data,predicted_data)```\n```Output:\n4```\n\nFrom the above code, the original data is compared with predicted data, where the maximum difference occurred between data 8 and 4 so the output is the difference between them (i.e 4).\nThe best output possible here is 0.\n\n2. Mean Absolute Error\nIt is given by the formula below,",
null,
"Where the difference between data is taken and the average of it is found out and returned as output. The implementation of it is shown in the below code segment.\n\n```from sklearn.metrics import mean_absolute_error\noriginal_data = [3, 5, 2, 7]\npredicted_data = [2, 0, 2, 8]\nmean_absolute_error(y_true, y_pred)```\n```Output:\n1.75```\n\nLet us do some calculations here, the difference between these data is 1,5,0,1 (i.e 1+5+0+1) which gives you 7. Then the average is taken where n=4, so 7/4 gives you (1.75).\nThe best score here would be 0.\n\n3. Mean Squared Error\nIt is as similar to the above metric wherein Mean Squared Error we will be calculating the square of the difference between the predicted and the original data. The formula is given below,",
null,
"The difference value is calculated and it is squared and means is obtained as the result. Let us see an implementation of it,\n\n```from sklearn.metrics import mean_squared_error\noriginal_data = [3, 5, 2, 7]\npredicted_data = [2, 0, 2, 8]\nmean_squared_error(original_data,predicted_data)```\n\nThe same inputs similar to above mean absolute error is given to this mean squared error, where the difference in the data is ( 1 square+5 square+0 square+1 square) = 27 and mean is (27/4) which gives the output.\n\n```Output:\n6.75```\n\nThe ideal output is 0 and this suits to identify a very large error in the prediction compared to the mean absolute error.\n\n4. Median Absolute Error\nThis finds the median value of the absolute difference between the original and the predicted data. It is famous for its consistency towards robust to outliers. It helps us to know about the outliers present in the dataset.\n\n```from sklearn.metrics import median_absolute_error\noriginal_data = [3, 5, 2, 7]\npredicted_data = [3, 1, 2, 5]\nmedian_absolute_error(original_data,predicted_data)```\n```Output:\n1.0```\n\nLet formulate it! , the output of the above code segment is the median(0,4,0,2) that is obviously 1. The best value is 0.\n\n5. R Squared\nThis is the most important evaluation metric in the regression evaluation where it gives us an understanding of how well the data get fit towards the regression line. This helps us to find the relationship between the independent variable towards the dependent variable.\n\n```from sklearn.metrics import r2_score\noriginal_data = [8, 5, 1, 6]\npredicted_data= [7, 8, 2, 3]\nr2_score(original_data,predicted_data)```\n```Output:\n0.23076923076923073```\n\nIt is calculated by the below formula,",
null,
"where the SSRes is the sum of the square of the difference between the actual value and the predicted value.SSTotal is the sum of the square of the difference between the actual value and the mean of the actual value.\n\nThese are various Regression evaluation metrics available, Hope this tutorial helps!!!\n\n### One response to “Model Evaluation Metrics in Regression Models with Python”\n\n1. Naveen Kumar says:\n\nSuper It Was Very Useful .Right blog I Was Searching For…..!"
]
| [
null,
"https://codespeedy.com/wp-content/uploads/2019/12/Mean-Absolute-Error-equation.png",
null,
"https://codespeedy.com/wp-content/uploads/2019/12/Mean-Squared-Error-Equation.png",
null,
"https://codespeedy.com/wp-content/uploads/2019/12/R-Squared-Equation.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8455886,"math_prob":0.9886897,"size":4999,"snap":"2020-34-2020-40","text_gpt3_token_len":1125,"char_repetition_ratio":0.14734735,"word_repetition_ratio":0.061712846,"special_character_ratio":0.2284457,"punctuation_ratio":0.12673056,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994358,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-29T08:56:57Z\",\"WARC-Record-ID\":\"<urn:uuid:a33338d7-c9b4-4bf3-b262-b2dabca1be97>\",\"Content-Length\":\"39342\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:715772b8-1bde-47d1-b446-8d365369e17d>\",\"WARC-Concurrent-To\":\"<urn:uuid:e7ea0277-f856-4f71-acd5-46725d406499>\",\"WARC-IP-Address\":\"194.1.147.77\",\"WARC-Target-URI\":\"https://www.codespeedy.com/model-evaluation-metrics-in-regression-models-with-python/\",\"WARC-Payload-Digest\":\"sha1:ELYS3QBYEXCD4AXB3RZ4TOU6DZXZL4DB\",\"WARC-Block-Digest\":\"sha1:C2JNL2FKB3WNGN7NMTORRQ3HVEHAK3ZK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401632671.79_warc_CC-MAIN-20200929060555-20200929090555-00426.warc.gz\"}"} |
https://www.physicsoverflow.org/tag/hamiltonian-formalism | [
"# Recent questions tagged hamiltonian-formalism",
null,
"The Hamiltonian formalism is a formalism in Classical Mechanics. Besides Lagrangian Mechanics, it is an effective way of reformulating classical mechanics in a simple way. Very useful in Quantum Mechanics, specifically the Heisenberg and Schrodinger formulations. Unlike Lagrangian Mechanics, this formalism relies on a \"Hamiltonian\" instead of a Lagrangian, which differs from the Lagrangian through a Legendre transformation.\n\n# The Hamiltonian\n\nThe Hamiltonian can be interpreted as an “energy input”, as opposed to a Lagrangian, which is the \"energy output\". The Euclidean Hamiltonian, which is used in Classical Mechanics is given by:\n\n$$H = \\frac{{{p^2}}}{{2m}} + U$$\n\nThe Euclidean Lagrangian, on the other hand, has a minus instead of a plus. Notice that\n\n$$L + H = p\\frac{{{\\text{d}}x}}{{{\\text{d}}t}}$$\n\nThis shows that the two are related by a Legendre transformation.\n\n# The Poisson Bracket relationships and the Dynamic Hamiltonian Relationships\n\nThe Poisson Bracket relations are algebraic relationships between phase space variables, and without the presence of any dynamical Lagrangian or Hamiltonian. Thus, the Poisson Bracket relations would obviously (to someone with a basic knowledge of Lagrangian Mechanics) be :\n\n$$\\begin{gathered} \\{ {{p_i},{x_j}} \\} = {\\delta _{ij}} \\\\ \\{ {{p_i},{p_j}} \\} = 0 \\\\ \\{ {{x_i},{x_j}} \\} = 0 \\\\ \\end{gathered}$$\n\nThe Dynamical Relationships, however, are obviously changed. It is clear that the new relationshipjs are that:\n\n$$\\begin{gathered} \\frac{{\\partial H}}{{\\partial {\\mathbf{x}}}} = - \\frac{{{\\text{d}}{\\mathbf{p}}}}{{{\\text{d}}t}} \\\\ \\frac{{\\partial H}}{{\\partial {\\mathbf{p}}}} = \\frac{{{\\text{d}}{\\mathbf{x}}}}{{{\\text{d}}t}} \\\\ \\end{gathered}$$\n\nCompare this to the dynamical Lagrangian Relations:\n\n$$\\begin{gathered} \\frac{{\\partial L}}{{\\partial {\\mathbf{x}}}} = \\frac{{{\\text{d}}{\\mathbf{p}}}}{{{\\text{d}}t}} \\\\ \\frac{{\\partial L}}{{\\partial {\\mathbf{p}}}} = \\frac{{{\\text{d}}{\\mathbf{x}}}}{{{\\text{d}}t}} \\\\ \\end{gathered}$$\n\nThe central equation of Hamiltonian Mechanics is the Hamilton Equation:\n\n$$\\frac{{{\\text{d}}A}}{{{\\text{d}}t}} = \\{A,H \\}$$\n+ 1 like - 0 dislike\n+ 0 like - 0 dislike\n+ 0 like - 0 dislike\n+ 1 like - 0 dislike\n+ 2 like - 0 dislike\n+ 5 like - 0 dislike\n+ 1 like - 0 dislike\n+ 1 like - 0 dislike\n+ 3 like - 0 dislike\n+ 6 like - 0 dislike\n+ 1 like - 0 dislike\n+ 1 like - 0 dislike\n+ 5 like - 0 dislike\n+ 3 like - 0 dislike"
]
| [
null,
"https://www.physicsoverflow.org/qa-theme/OverSnow/images/rss.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.802869,"math_prob":0.99927694,"size":2102,"snap":"2021-04-2021-17","text_gpt3_token_len":601,"char_repetition_ratio":0.15538608,"word_repetition_ratio":0.007662835,"special_character_ratio":0.30066603,"punctuation_ratio":0.09309309,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99924994,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-21T10:40:25Z\",\"WARC-Record-ID\":\"<urn:uuid:8e06b6ce-2ca0-405f-a5b2-e09547bf1251>\",\"Content-Length\":\"138003\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bac1c432-c192-4e60-9d15-6c513e54eb54>\",\"WARC-Concurrent-To\":\"<urn:uuid:04dd87d6-3f49-4ec6-aff1-00176880c258>\",\"WARC-IP-Address\":\"129.70.43.86\",\"WARC-Target-URI\":\"https://www.physicsoverflow.org/tag/hamiltonian-formalism\",\"WARC-Payload-Digest\":\"sha1:BYYXKMZJBVD7AN4Y74TGSA4RKN2243TZ\",\"WARC-Block-Digest\":\"sha1:D2EOCLF7LKJAAMCT7ZTXAZ3WSPTXGKIR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703524743.61_warc_CC-MAIN-20210121101406-20210121131406-00629.warc.gz\"}"} |
https://cs.stackexchange.com/questions/2814/sums-of-landau-terms-revisited | [
"# Sums of Landau terms revisited\n\nI asked a (seed) question about sums of Landau terms before, trying to gauge the dangers of abusing asymptotics notation in arithmetics, with mixed success.\n\nNow, over here our recurrence guru JeffE does essentially this:\n\n$\\qquad \\displaystyle \\sum_{i=1}^n \\Theta\\left(\\frac{1}{i}\\right) = \\Theta(H_n)$\n\nWhile the end result is correct, I think this is wrong. Why? If we add in all the existence of constants implied (only the upper bound), we have\n\n$\\qquad \\displaystyle \\sum_{i=1}^n c_i \\cdot \\frac{1}{i} \\leq c \\cdot H_n$.\n\nNow how do we compute $c$ from $c_1, \\dots, c_n$? The answer is, I believe, that we can not: $c$ has to bound for all $n$ but we get more $c_i$ as $n$ grows. We don't know anything about them; $c_i$ may very well depend on $i$, so we can not assume a bound: a finite $c$ may not exist.\n\nIn addition, there is this subtle issue of which variable goes to infinity on the left-hand side -- $i$ or $n$? Both? If $n$ (for the sake of compatibility), what is the meaning of $\\Theta(1/i)$, knowing that $1 \\leq i \\leq n$? Does it not only mean $\\Theta(1)$? If so, we can't bound the sum better than $\\Theta(n)$.\n\nSo, where does that leave us? It it a blatant mistake? A subtle one? Or is it just the usual abuse of notation and we should not look at $=$ signs like this one out of context? Can we formulate a (rigorously) correct rule to evalutate (certain) sums of Landau terms?\n\nI think that the main question is: what is $i$? If we consider it constant (as it is inside the scope of the sum) we can easily build counterexamples. If it is not constant, I have no idea how to read it.\n\n• This question on math.SE is a good read about arithmetics with Landau terms in general. – Raphael Jul 18 '12 at 15:36\n• From the link you gave, the equality can be seen to be either a subset relation or an \"is in\" relation (i.e. $\\in$). For $\\Theta$ you're just saying it's bounded above and below by a constant. Why not choose $c = \\min(c_1, c_2, \\cdots, c_n)$ and $C = \\max(c_1, c_2, \\cdots, c_n)$? – user834 Jul 18 '12 at 17:33\n• Hang on there, Bucky. I didn't write any summation with a Theta in it. I wrote a recurrence with a Theta in it. Do you really interpret the recurrence \"$t(n) = \\Theta(1/n) + t(n-1)$\" as something other than \"There is a function $f \\in \\Theta_{x\\to \\infty}(x\\mapsto 1/x)$ such that $t(n) = f(n) + t(n-1)$\"? – JeffE Jul 19 '12 at 16:06\n• @Raphael No, the recurrence is not mathematically the same as the sum, for precisely the reason you describe! The recurrence has exactly one Theta term in it, which unambiguously refers to a single function. – JeffE Jul 20 '12 at 13:41\n• That's not very intuitive — I strongly disagree, but I think it's a matter of taste and experience. – JeffE Jul 20 '12 at 17:01\n\nLooks right to me in the following convention:\n\n$S_n = \\sum_{k=1}^{n} \\Theta(1/k)$ is convenient notation for\n\nThere is an $f(x) \\in \\Theta(1/x)$ (as $x \\to \\infty$) such that\n\n$S_n = \\sum_{k=1}^{n} f(k)$.\n\nThus the $c_i$ (or with the notation in this answer $c_k$) you get, are not really dependent on $k$.\n\nUnder this interpretation, it is indeed true that $S_n = \\Theta(H_n)$.\n\nIn fact, in Jeff's answer, he shows that $T(k+1) = f(k) + T(k)$ where $f \\in \\Theta(1/k)$, so it is consistent with the above interpretation.\n\nThe confusion seems to be arising from mentally \"unrolling\" the $\\sum$ and presuming different functions for each occurrence of $\\Theta$...\n\n• Jup, but every $\\Theta$ can have its own function, and constant. So this convention does only work with context, that is if we know that the Landau terms stem from a somewhat \"uniform\" (in $k$ and $n$) definition of the summands. – Raphael Jul 19 '12 at 11:40\n• @Raphael: It seems meaningless to unroll and then allow different $f_i$: the constants will then depend on the variable! and it becomes an incorrect usage of $\\Theta$, assuming the $\\Theta$ variable is $i$ (or $k$ in above answer). Even if we assume the variable is $n$, it is still looks meaningless to me. – Aryabhata Jul 19 '12 at 15:06\n• In principle, every $\\Theta$ can have its own constant, but in the particular context you describe, it's clear that every $\\Theta$ does not have its own constant. – JeffE Jul 19 '12 at 15:53\n• @JeffE: Right. We can have multiple $\\Theta$ with their own constants, as long as the constants are really constant :-) – Aryabhata Jul 19 '12 at 15:58\n• @JeffE So why don't you just write what you mean but prefer something ambiguous/wrong? Note that my updated answer now proposes a way to do so. I'd appreciate comments on that; downvotes with no reason don't help me understand why people seem to reject my point. – Raphael Jul 20 '12 at 12:10\n\nI think I nailed the problem down. In essence: using Landau terms decouples the variable of the summand function from the sum's running variable. We still (want to) read them as identical, though, therefore the confusion.\n\nTo develop it formally, what does\n\n$\\qquad \\displaystyle S_n \\in \\sum_{i=1}^n \\Theta(f(i)) \\qquad \\qquad\\qquad (1)$\n\nreally mean? Now I assume that these $\\Theta$ let $i$ -- not $n$ -- to infinity; if we let $n \\to \\infty$, every such sum evaluates to $\\Theta(n)$ (if the summands are independent of $n$ and therefore constant) which is clearly wrong. Here is a first giveaway that we to crude things: $i$ is bound (and constant) inside the sum, but we still let it go to infinity?\n\nTranslating $(1)$ (for the upper bound, the lower bound works similarly), we get\n\n$\\qquad \\displaystyle \\exists f_1, \\dots, f_n \\in \\Theta(f).\\ S_n \\leq \\sum_{i=1}^n f_i(i)$\n\nNow it is clear that the sum-$i$ and parameter-$i$ are decoupled: we can easily define the $f_i$ so that they use $i$ as a constant. In the example from the question, we can define $f_i(j) = i \\cdot \\frac{1}{j} \\in \\Theta(1/j)$ and have\n\n$\\qquad \\displaystyle \\sum_{i=0}^n f_i(i) \"=\" \\sum_{i=0}^n \\Theta(1/j) = \\sum_{i=0}^n \\Theta(1/i)$\n\nbut the original sum clearly does not evaluate to something in $\\Theta(H_n) = \\Theta(\\log n)$. Now exchanging $j$ for $i$ -- which is only a renaming -- in the $\\Theta$ may feel strange because $i$ is not independent of $n$ resp. the sum, but if we object to that now, we should never have used $i$ inside the $\\Theta$ in the first place (as that holds the same strangeness).\n\nNote that we did not even exploit that the $f_i$ may also depend on $n$.\n\nTo conclude, the proposed identity is bogus. We can, of course, agree on conventions on how to read such sums as abbreviation of rigorous calculation. However, such conventions will be incompatible with the definition of Landau terms (together with the normal abuse of them), impossible to understand correctly without context and misleading (for beginners) at least -- but that is ultimately a matter of taste (and ruthlessness?).\n\nIt occurred to me that we can also write exactly what we mean and still make use of the convenience of Landau terms. We know that all summands come from one common function, implying that the asymptotic bounds use the same constants. This is lost when we put the $\\Theta$ into the sum. So let us not put it in there and write\n\n$\\qquad \\displaystyle \\sum_{i=1}^n \\frac{2i - 1}{i(i+1)} \\in \\Theta\\left(\\sum_{i=1}^n \\frac{1}{i}\\right) = \\Theta(H_n)$\n\ninstead. Putting the $\\Theta$ outside of the sum results in\n\n• a mathematically correct statement and\n• a simple term inside the $\\Theta$ we can easily deal with (which is what we want here, right?).\n\nSo it seems to me that this is both a correct and a useful way of writing the matter down, and should therefore be preferred over using Landau symbols inside the sum when we mean them outside of it.\n\n• Consider $\\sum_i^n i$. I can define $f_i(n)=i$ (using $i$ as a constant), therefore $\\sum_i^n i = \\sum_i^n O(1) = O(n)$ by your reasoning, right? But this sum is $O(n^2)$. – Xodarap Jul 21 '12 at 19:03\n• @Xodarap: By my reasoning, collapsing the sum like this does not work, because coupling the inner $\\Theta$s (which are not coupled to $i$ nor $n$) to $n$ does change the meaning. – Raphael Jul 21 '12 at 19:24\n• I'm not coupling them to $n$, I'm just using the fact that $\\sum_i^n k = nk$. (And I suppose also the fact that $nO(f)=O(nf)$.) – Xodarap Jul 22 '12 at 1:53\n• @Xodarap: But you don't have one $f$, but one $f_i$ per summand. If the underlying functions $f_i$ use $i$ (as a constant factor), you have to expand that, and the sum ends up being correct. So, clearly, by my reasoning the summing rule you propose does not work as you write. – Raphael Jul 22 '12 at 8:51\n• If I have a sequence $5,1,3,2,\\dots$, each of these are $O(1)$ (provided they don't increase as the series progresses). Would you say that adding $n$ of them will generate a sum $O(n)$? What's the difference if instead of being constants I describe them as constant functions $f_1(x)=5,f_2(x)=1,\\dots$? – Xodarap Jul 22 '12 at 14:03\n\nIf each $c_i$ is a constant, then there is some $c_{max}$ such that $\\forall c_i: c_i\\leq c_{max}$. So clearly $$\\sum c_i f(i) \\leq \\sum c_{max} f(i) = c_{max} \\sum f(i) = O\\left(\\sum f(i)\\right)$$ Same idea for little o.\n\nI think the problem here is that $1/i\\not=\\Theta(1)$. It's $o(1/n)$ (since there is no $\\epsilon$ such that $\\forall i: 1/i>\\epsilon$), so the overall sum will be $no(1/n)=o(1)$. And each term is $O(1)$, meaning the overall sum is $O(n)$. So no tight bounds can be found from this method.\n\n1. Is bounding $\\sum_i^n f(i)$ by doing the little o of each term and the big o of each term then multiplying by $n$ acceptable? (Answer: Yes)\n2. Is there a better method? (Answer: Not that I know of.)\n\nHopefully someone else can answer #2 more clearly.\n\n$\\sum_i^n \\Theta(f(n)) = \\Theta(nf(n))$ ?\n\nTo which the answer is yes. In this case though, each term is not $\\Theta$ of anything, so that approach falls apart.\n\nEDIT 2: You say \"consider $c_i=i$, then there is no $c_{max}$\". Unequivocally true. If you say that $c_i$ is a non-constant function of $i$, then it is, by definition, non-constant.\n\nNote that if you define it this way, then $c_i i$ is not $\\Theta(i)$, it's $\\Theta(i^2)$. Indeed, if you define \"constant\" to mean \"any function of $i$\", then any two functions of $i$ differ by a \"constant\"!\n\nPerhaps this is an easier way to think of it: we have the sequence $1,\\frac{1}{2},\\dots,\\frac{1}{n}$. What's the smallest term in this sequence? Well, it will depend on $n$. So we can't consider the terms as constant.\n\n(Computer scientists are often more familiar with big-O, so it might be more intuitive to ask if $1,\\dots,n$ has a constant largest term.)\n\nTo provide your proof: let $f(i_{min})$ be the smallest value of $f(i)$ in the range $1,\\dots,n$. Then $$\\sum_i^n f(i) \\geq \\sum_i^n f(i_{min}) = n f(i_{min}) = n o(f(n))$$\n\nAn analogous proof can be made for the upper bound.\n\nLastly, you write that $H_n = o(n)$ and as proof give that $H_n = \\Theta(\\log n)$. This is in fact a counter-proof: if $H_n$ is \"bigger\" than $n$, then it can't be \"smaller\" than $\\log n$, which is what's required for it to be $\\Theta(\\log n)$. So it can't be $o(n)$.\n\n• 1) \"..then there is some $c_{max}$ such that...\" -- no, there is not. Consider $(c_i)_{i \\in \\mathbb{N}}$ with $c_i = i$. 2) \"I don't think $H_n=o(n)$\" -- $H_n \\in \\Theta(\\ln n)$ 3) $1/i\\not=\\Theta(1)$. It's $o(1/n)$ -- That is wrong. As $1/i \\geq 1/n$, $1/i \\in \\Omega(1/n)$. 4) \"(Answer: Yes)\" -- as long as I don't see a formal proof of that fact, I don't believe it. Besides, \"multiplying by $n$\" is not what happened in the exhibited case. – Raphael Jul 18 '12 at 21:32\n• I think you are missing the point. Your proof does not work because we may not have the same $f$ in every summand, and not even the same for the same summand but different $n$. I think I nailed it down; I will compose an answer shortly. – Raphael Jul 19 '12 at 11:38\n• I still don't understand what you're saying, so I'm glad you figured it out :-) – Xodarap Jul 19 '12 at 17:50"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9125597,"math_prob":0.9987468,"size":1598,"snap":"2019-51-2020-05","text_gpt3_token_len":461,"char_repetition_ratio":0.08908407,"word_repetition_ratio":0.0,"special_character_ratio":0.28410512,"punctuation_ratio":0.11815562,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999701,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T14:47:50Z\",\"WARC-Record-ID\":\"<urn:uuid:4660ebe3-af8d-4088-8f27-a42cad2b750e>\",\"Content-Length\":\"179843\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a6201ce4-84b1-4ef9-9ea3-7509ebb33f9b>\",\"WARC-Concurrent-To\":\"<urn:uuid:f731eb64-8d46-4121-98c3-390cd968f3c2>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/2814/sums-of-landau-terms-revisited\",\"WARC-Payload-Digest\":\"sha1:5H26EGDZKYOACKO4SZKFZ3NEUS3NEGK2\",\"WARC-Block-Digest\":\"sha1:XLJ4J6AD6ZCT7FDPNB755XIVTMWOLL2Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540543850.90_warc_CC-MAIN-20191212130009-20191212154009-00281.warc.gz\"}"} |
https://in.mathworks.com/matlabcentral/cody/problems/3-find-the-sum-of-all-the-numbers-of-the-input-vector/solutions/1596431 | [
"Cody\n\n# Problem 3. Find the sum of all the numbers of the input vector\n\nSolution 1596431\n\nSubmitted on 31 Jul 2018 by Athi\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1 Pass\nx = 1; y_correct = 1; assert(isequal(vecsum(x),y_correct))\n\n2 Pass\nx = [1 2 3 5]; y_correct = 11; assert(isequal(vecsum(x),y_correct))\n\n3 Pass\nx = [1 2 3 5]; y_correct = 11; assert(isequal(vecsum(x),y_correct))\n\n4 Pass\nx = 1:100; y_correct = 5050; assert(isequal(vecsum(x),y_correct))\n\n### Community Treasure Hunt\n\nFind the treasures in MATLAB Central and discover how the community can help you!\n\nStart Hunting!"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.51876277,"math_prob":0.9958308,"size":607,"snap":"2020-45-2020-50","text_gpt3_token_len":200,"char_repetition_ratio":0.16749585,"word_repetition_ratio":0.17021276,"special_character_ratio":0.35914332,"punctuation_ratio":0.12820514,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9970146,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T02:25:26Z\",\"WARC-Record-ID\":\"<urn:uuid:daf2b3a4-7d5e-448c-b58f-35c003384fb2>\",\"Content-Length\":\"80659\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8fb286c8-8158-4ed1-bcba-a952e628f81d>\",\"WARC-Concurrent-To\":\"<urn:uuid:aacec150-d56d-4fe9-a927-1f281fcd6520>\",\"WARC-IP-Address\":\"184.25.198.13\",\"WARC-Target-URI\":\"https://in.mathworks.com/matlabcentral/cody/problems/3-find-the-sum-of-all-the-numbers-of-the-input-vector/solutions/1596431\",\"WARC-Payload-Digest\":\"sha1:WQZNGEYVSDACAHO7O5GZC6A4GHAD5XPJ\",\"WARC-Block-Digest\":\"sha1:GLDLKA3NHDEMQ2PRRTK54ML3RW6GHSJX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107906872.85_warc_CC-MAIN-20201030003928-20201030033928-00317.warc.gz\"}"} |
https://ru.scribd.com/document/249352659/Unit-Planscribd | [
"Вы находитесь на странице: 1из 31\n\n# UNIT PLAN\n\nI. The Setting\nMy unit, entitled 3-D shapes and applications, is intended for a 10th grade\ngeometry class. My class is a heterogeneous mix of 26 students, 16 of which that are\nmales and 10 that are females. Even though this is a standard level geometry class,\nmost of the students are already showing honors level potential. The unit will take\napproximately 9 days; 7 for lessons, 1 for review and 1 for the exam. Each\ninstructional lesson is 50 minutes.\nII. Overview and Rationale for the Unit Overall, this unit should not only provide students the knowledge to work with three\ndimensional shapes, but also be able to apply some of their properties to real life\napplications. Being able to work in three dimensions is the main goal of this unit.\nOften, as most are one dimensional in life, its hard to progress through and enjoy\nthe true beauty of all three dimensions. This unit will also be helpful throughout the\nremainder of the course because we will be making references to 3-D figures I\nregards to proofs. With the visualization out of the way, we can focus our attention\nsolely on the proofs.\nCommon Core standards:\nG.MG.1. Use geometric shapes, their measures, and their properties to\ndescribe objects (e.g., modeling a tree trunk or a human torso as a cylinder).\nG.MG. 2. Apply concepts of density based on area and volume in modeling\nsituations (e.g., persons per square mile, BTUs per cubic foot).\nNCTM standards:\nAnalyze characteristics and properties of two- and\nthree-dimensional geometric shapes and develop\nanalyze properties and determine attributes of two- and three-dimensional\nobjects;\nexplore relationships (including congruence and similarity) among classes of\ntwo- and three- dimensional geometric objects, make and test conjectures\nabout them, and solve problems involving them;\nWhile in mathematics, the common rational for teaching is to prepare them for the next\nlevel of mathematics. In my unit however, I aim to make connections between mathematical\nconcepts and real life. This unit is especially good for my target audience because it takes a new\nlook into the world of three dimensions and the perceptions, or lack thereof, a person can have.\nSince we all live in a three dimensional world, it is hard not to find a relationship between the\n\nmathematics I present and what they will experience in everyday life. Therefore, my rational for\nthis unit is to open their eyes to new perspectives of the world they live in.\nIV. Content Outline\n1. Graphing in three dimensions\n2. Properties of 3-D shapes\na. Cubes\nb. Prisms\nc. Pyramids\nd. Cylinders\ne. Cones\nf. Spheres\n3. Finding surface area\n4. Finding Volume\n5. Cavalieris Principle & Cross sections\n\n(Attatched)\n\n## VII. Assessment procedures\n\nFor each objective that I present, I always give multiple means of assessment to\nensure the objectives are met. Some of the common assessments I used in my lesson\nplan are as follows:\na) Graphic organizers: Used throughout the unit to make comparisons among\nthe different 3-D shapes\nb) Exploration worksheet: With Common Core taking over the mathematics\ncurriculum, it is essential to implement more hands-on learning. With these\nexploration worksheets, I can assess to see if the students are understanding the\nmaterial.\nc) Exit tickets: For all of my exit tickets, I not only assess some of the formulas\nand applications that I teacher, but I also want to make important references to\nactivities that led them to the knowledge they obtained that day\nd) Homework: Homework is presented every day except the day before the\nreview in the hopes that students will get to practice the new material they learned\n\nDay 1\nIntroduction: 2-D shapes in a 3-D world\nDay 1 Objectives:\n1. Students will be able to utilize their knowledge of 2-D shapes in order to graph\nsolids in three dimensions\n2. Students will be able to explain the third dimension in order to discuss\nperceptions of everyday life\nAssessments:\n1. During the course of the lesson, students will be required to graph 3-D objects in\nboth two and three dimensions.\n2. During the exploration activity, students will be required to practice plotting\ncoordinates in three dimensions on a worksheet. The plots will resemble 3-D solids\nthat we will cover in the next couple days.\n3. At the end of class, I will provide an exit ticket that will assess the students\nknowledge of graphing in three dimensions as well as graphing everyday 3-D\nitems in two dimensions.\n4. For homework, the students will create a question sheet for one of their\nclassmates to do next class. While it will once again cover graphing in three\ndimensions, it will be up to each student to play teacher and also provide an answer\nkey to their problem set.\n\nActivities\n1. 3-D shapes in two dimensions: In the first activity, I will have brought in\nrandom household items for the students to observe. Students will be put into\ngroups of four and each given a single item. While they will not be taking any\nmeasurements in this class period, they will be required to draw a perspective of\none angle of the item. In other words, students will be making a sketch (on graph\npaper), from looking at either the back, front, left side or right side of the item.\nAfter each student has chosen an object and drawn a side, we will discuss the\nperceptions of items. More specifically, I may question how only 1 angle and one\nviewpoint can alter ones perception of a solid. You look up in the sky and see a\nfour sided 2-D figure. What shape is it? While most may answer rectangle or\nsquare, the reality is that it is most likely a cube or prism. Unfortunately, with our\nperception on reality, even though we live in a 3 dimensional world, we tend to see\n\nthings in one dimension. Hopefully this activity will open their eyes to a new world\nof mathematics.\n2. Creating the third dimension: In the second activity, not long after the first,\nstudents will take their graphs of one perception and try and manipulate the graphs\ndrawn by the other group members to re-create the original 3-D figure. While this\nactivity will initially seem easy, it is extremely difficult to create a 3-D drawing\nfrom only seeing a few images of the original figure. However, once the students\ncan visualize this image and re-create it, (the best they can) they will start to notice\nsomething they hadnt before when graphing in two dimensions; depth. In order to\nre-create these 3-D objects, there needs to be a third dimension present. I will ask\nthe students the type of orientation needed to create such an abstract image. This\nwill lead us into the next activity in regards to plotting points in three dimensions.\n3. Graphing in three dimensions: For the final activity, I will be guiding the\nstudents in exploring the nature of three dimensions. To do so, I will be showing\nthe students how to plot points on a 3-D graph, and then sending them on their own\nto find points for the 3-D objects we plotted the activity before. Then using a\nregular sheet of graph paper, they will plot the coordinates of the objects and create\nan outline for them. This activity will precede the classwork in the hopes that they\nbuild a deeper understanding of graphing in three dimensions.\nKey Content Outline:\nI. 2-D shapes in a 3-D world\nII. The third dimension\nIII. Graphing in three dimensions\nThis lesson was intended to open the eyes of my students to the true world of three\ndimensions. Even though they have lived in this 3-D world for about 15 years\nalready, they still have a lot to understand in regards to perception. Hopefully,\nespecially for future mathematics, my students understand that it isnt always\nwhat it seems.\nResources:\nNo technology resources\nWill need handouts and graph paper\n\nDay 2\nCubes, Prisms and Pyramids\nDay 2 Objectives:\n1. Students will be able to implement their knowledge of 2-D shapes in order to\ngraph cubes, prisms and pyramids\n2. Students will be able to compare/contrast cubes, prism and solids\n3. Students will be able to construct 3-D mobiles for a cube, prism and pyramid in\norder to further analyze their properties\nAssessments:\n1. During the lesson, students will be filling out graphic organizers to compare the\nthree different types of 3-D solids\n2. Classwork practice will be given for graphing cubes, prisms and pyramids in\nthree dimensions\n3. Students will construct mobiles of cubes, prisms and pyramids\n4. At the end of class, students will be given an exit ticket that will assess their\nknowledge, not only of the continuation of graphing in three dimensions, but more\nimportantly ,the properties of each that make them distinct from each other.\n5. The final assessment will be a homework assignment in which students will\ngraph particular cubes, prism and pyramids. Once again, major comparison\nquestions will also be given to ensure maximum understanding of the different 3-D\nshapes. Students will also be required to start bringing in models of cubes, prisms\nand pyramids from home.\nActivities:\n1. Graphing cubes, prisms and pyramids: The class session will be split up into 6\nsections, each of which covering a 3-D object and either its properties or graphing\nstrategies. For this particular lesson, the graphic organizer will help to drive the\nproperties section of this lesson. The graphing portions will be done by an activity\nin which student will explore, using properties of 2-D figures, graphing cubes,\nprisms and pyramids in three dimensions. A common question I could ask in the\nclasswork is why the coordinates are what they are. More specifically, a cube for\nexample, why are the distances between any two points, when graphing in one\ndimension, the same. If the students were focused during the lecture, the answer is\nsimple. Each measurement is the same because one face of a cube is a square. The\nsame sort of questions could be asked when graphing the prism and pyramid as\nwell.\n\n## 2. Creation of mobiles: The next activity I would lead my class in is the\n\nconstruction of 3-D mobiles. In order to do so, my students will need an\nunderstanding of the properties of cubes, prisms and pyramids. With this\nknowledge, the students will use paper, scissors and pencils to draw two\ndimensional outlines for each three dimensional shape. Then, the students will fold\neach side according to the directions and tape them together to create their figure.\nThis solid will not only be a great tool to remember the properties I future class\nperiods, but I also intend to you use them in the next lesson when we discover\nsurface area.\nKey Content Outline:\nI. Properties of a cube\nII. Graphing a cube\nIII. Properties of a prism\nIV. Graphing a prism\nV. Properties of a pyramid\nVI. Graphing a Pyramid\nThis lesson is intended to introduce the properties of cubes, prisms, and pyramids\nin the hopes of once again having them make connections to the real world. This\nlesson is especially crucial because of the applications we will be covering in the\nnext couple days. It will be extremely hard to find surface area and volume of\nsolids that students have no understanding of.\nResources:\nNo technology resources\nWe will need scissors, tape, paper and handouts\n\n## Day 3: Surface area (Major lesson component)\n\nClass Description\n\nUnit Title\n\nLesson\nTopic\n\n3-D shapes\nSurface\nand\nArea\napplications\n\nType of\nLesson\nDevelopmental\nlesson\n\nMD State Curriculum\nStandard/ MD Common Core\nState Standard\nApply geometric concepts in modeling\nsituations\n1. Use geometric shapes, their measures, and their\nproperties to describe objects (e.g., modeling a tree\ntrunk or a human torso as a cylinder).\n2. Apply concepts of density based on area and volume\nin modeling situations (e.g., persons per square mile,\nBTUs per cubic foot).\n3. Apply geometric methods to solve design problems\n(e.g., designing an object or structure to satisfy physical\nconstraints or minimize cost; working with typographic\ngrid systems based on ratios).\n\nJudges Prior Knowledge (How do you know students are ready to learn the\ncontent in this lesson?)\nI will be discussing two major ideas in this lesson; manipulation of 3-D\nshapes and surface area. To understand the manipulation of 3-D shapes, the\nstudents will need a vast knowledge of the creation of 3-D shapes using 2-D\nshapes. This requirement is fulfilled in my introduction lesson to this unit\non day 1 (This lesson is about day 4 or 5). I will also be including a\nsegment in the drill. To teach the second idea, surface area, students will\nneed to have had previous instruction on the area of 2-D shapes. This type of\nmaterial is usually covered all throughout elementary and middle school. Just\nin case though, I have included a review in my drill as well.\n\nLesson Objective(s):\nObjective 1 SWBAT utilize their knowledge of area to discover\nsurface area of three different 3-D shapes\nObjective 2 SWBAT participate in a hands on activity to find\nthe surface area of common household items\nAssessment(s):\nAssessment for Objective 1 Drill and Opening worksheet\n\n## Is this a formative or summative assessment? Formative\n\nWould you characterize this assessment as a traditional or\nWhy did you select this assessment strategy to measure student\nlearning? It is a review of things they should be confident\nAssessment for Objective 2 Activity completion/ worksheet\nIs this a formative or summative assessment? Formative\nWould you characterize this assessment as a traditional or\nperformance assessment? Performance\nWhy did you select this assessment strategy to measure student\nlearning? The activity allows for student discovery of\nmathematical relationships.\nMaterials Needed for Lesson\nObject from home ( previously instructed)\nModels of 3-D shapes (previously constructed)\nCalculator\nRuler\n\n## Incorporation of Technology (if appropriate)\n\n*If you are using a website, type in the website citation.\nCalculator: To assist the student in calculations\n\nLesson Development\nTeacher\n\nStudents\n\nTime\n\nDrill/Motivational\nActivity\n\nWorksheet\n\n8 minutes\n\nTransition\n\nActivity 1\n\nKey Questions\n\nWhat assumption\nthe area of a two\ndimensional figure\nmore than one two\ndimensional shape?\nWhat seems to be\nthe main\ndifference between\na cube and prism?\nHow does this\naffect the way we\nfind the surface\narea of each?\n\nSo everyone seemed\nto notice that the\nlast figure on the\ndrill was actually a\ncube. Correct? So\neven though I was\nof a 2-D figure, you\nwere actually\nfinding the area of\na cube. We call the\narea of a 3-D shape,\nsurface area\nLecture: Comparing\nthe area of 2-D\nconstructions with\n3-D shapes\n(Worksheet)\nAnticipated\nResponses?\nIt is the sum of\neach two dimensional\nfigure\n\n## The prism is bigger\n\nso the surface area\nis bigger. Since all\nthe sides of the\ncube are the same\nthe surface area is\n6bh. For a prism,\nthe equation would\nbe\nlw+lw+wh+wh+lh+lh.\nCorrection: The\nprism may appear to\nbe bigger in most\ncases but not\nalways. Coincidently\n\n10 minutes\n\nWhat is unique\nsurface area of a\npyramid\n\nTransition\n\n## though, the surface\n\narea of a prism will\nbe bigger than that\nof a cube most of\nthe time. But, do\nnot confuse visual\nrepresentation with\nmathematical\nconcepts. In\nstudents to see and\nbe able to recognize\nthe equation for the\nsurface area of a\nprism: 2(lw+lh+hw)\nIt requires finding\nthe area of\ntriangles and\nsquares.\nMake note: the sides\nof the squares will\nbe the same as the\nbases of each\ntriangle in the\npyramid.\n\nActivity 2\n\n## Now that we know\n\nhow to find the\nsurface area of\ncubes, pyramids and\nprisms, lets go\nKey Questions\nmaterials from home\nDid the formulas for\nsurface area differ for and get into the\ngroups I have\neach object? (i.e was\nassigned on the\nthe way you found\nboard. Hopefully\nsurface area for a\neveryone in the\nshoebox the same way\ngroup has a\nyou found the surface\ndifferent example of\narea of a soda box?)\neach of the 3-D\nfigures we worked\nDid you find any areas\nwhich were close to the with today. While\nyou are doing that,\narea of maybe a\nI will pass around\ndifferent 3-D figure?\n\n25 minutes\n\nmeans?\n\nrulers\n\nTransition\n\n## Finding the Surface\n\narea of household\nitems\nAnticipated\nResponses?\nfound were\ndifferent, but the\nway in which we\nfound them was the\nsame.\n\nSummary/Closure/Revisit\nObjective\nNo, none of the\nvalues were even\nclose. I dont know\nwhat that would mean\nSafety Valve\nthough.\nmean that if you\nopened up the 3-D\nshapes, you would\nfind that the sum of\nthe areas would be\nthe same.\ngentlemen that wraps\nup our lesson on the\nsurface area of\nPyramids, cubes and\nprisms. Can anyone\ntell me what 3-D\nshapes I missed?\nCones, Cylinders and\nSpheres What two\ndimensional shapes\ndo they utilize?\n\n5-7 minutes\n\nCircles. So next\ntime, we will look\nat those shapes in\nparticular and do a\nsimilar activity\nwith more objects\nfrom home.\nExit ticket\n\n## Start to discuss and\n\ndiscover cylinders\nand cones\nReflection on assessment Assume that after you have taught\nthis lesson and assessed student learning you find that students\ndid not meet the objective(s). How would you plan future\ninstruction on this lessons content and skills to ensure\nstudent mastery and application?\nUnfortunately, in regards to Common core standards, the 3-D\nshapes unit is very limited for time and in some cases gets\nthrown all into one week. If I find that students arent meeting\nmy objectives, over the course of two weeks, I would probably\nmake less time for discovery learning and more time for lecture\nbased work. I do not like lecture at all, and in most cases it\nis ineffective, but with the vast number of activities I have\nplanned, the only way students shouldnt be accomplishing my\nobjectives is if they stray too far away from the main objective\nwhen exploring or by the time they finish the activity, the\nobjective is unclear. Regardless though, more lecture seems to\nbe the best solution here.\n\nDay 4\nVolume\n\nDay 4 Objectives:\n1. Students will be able to interpret the theoretical volume in order to calculate the\nvolume of cubes, pyramids and prisms.\n2. Students will be able to participate in a hands on activity in order to calculate\nthe volume of household items that represent cubes, prisms and pyramids\nAssessments:\n1. Fill in the blank lecture notes will handed out at the beginning of class\n2. Before the exploration, students will also be given a worksheet to practice\nfinding the volume of cubes, prisms and pyramids.\n3. During the exploration, students will need to fill out a chart that will showcase\ntheir knowledge of volume by application.\n4. For an exit ticket, students will be required to calculate the volume of cubes,\nprisms and pyramids. However, the questions will be based on real world\napplications. For example, a question might read you want to fill your shoebox up\nwith water to act as a fishbowl. Unfortunately though, your fish requires living in a\nbowl with volume 24inches3. Too much water is ok, but too little will kill him. If\nthe dimensions of the shoebox are 2inches x 5 inches x 4inches, what can we\nconclude about the lifespan of the fish? Should he die anytime soon?\n5. For homework, students will be finding more household items and finding the\nvolume of them as well.\nActivities:\n1. Volume of household items: For this lesson, due to the length of it, there will\nonly be one exploration activity. In this activity, students will be splitting up into\ngroups and measuring the dimensions of common household items that represent\ncubes, prisms and pyramids. With these dimensions, students will calculate the\nvolume of each item and discuss their findings with a partner. Two of each kind of\nsolid should be measured to compare the results. Even though this activity seems\nto be very short and simple, many questions arise when discussing the volume of\nany three dimensional object. The first of which is based similarity and\nproportionality within solids. For example, a good question might ask that prism\ncan clearly hold more volume than the others. What dimensions make this\npossible? Why do you think this is? Also, I may ask the relationship in volumes if\nthe sides were all proportional. In other words, if one prism had dimensions\n\n4x10x8 and the other had 2x5x4, how would one volume compare to the other?\nFinally, in my explanation section, I want to make sure that I express the great deal\nof ease for finding the volume of these shapes in particular. Similar to the way\nsurface area can be hard, volume can easily be just as hard when dealing with more\ncomplex solids.\nKey Content Outline:\nI. What is volume? (Units included)\nII. What is the volume of cube?\nIII. What is the volume of a rectangular prism?\nIV. What is the volume of a pyramid?\nThis lesson is intended to once again give a deeper meaning to an easy concept in\nthree dimensions. Even though volume has most likely been seen in a more applied\nnature to real world applications, I doubt they had the same understanding of\nvolume going into todays lesson. This lesson will also prove to be helpful when\nwe discuss the volume of cones, cylinders and spheres as well. With less time\nhaving to cover the idea of volume, we can take more time for exploration and\napplication\nResources:\nWe will need handouts, household items, rulers and charts\n\nDay 5\nCones and Cylinders\n\nDay 5 Objectives:\n1. Students will be able to relate the applications of circles in order to construct\ncylinders and cones\n2. Students will be able to interpret the height of a cone and cylinder as a\nquadrilateral in order to further understand their surface area\n3. Students will participate in a hands on activity in order to calculate the surface\narea and volume of common household items that represent cones and cylinders\nAssessments:\n1. Students will first be provided a graphic organizer that will be used to\ndistinguish between cones and cylinders\n2. Students will also be given a classwork assignment in which they will graph\ncones and cylinders in three dimensions as well as calculate their volume and\nsurface area\n3. Students will also be given a chart, similar to the ones used for cubes, prisms\nand pyramids. This will be used as a guide for finding the surface area and volume\nof common household items that represent cones and cylinders\n4. An exit ticket will also be given that will assess students knowledge of graphing\ncones and cylinders in three dimensions. In addition, it will look to apply the\nformulas discovered in the exploration.\n5. Finally, a homework assignment will be given that will include application\nproblems of finding both the surface area and volume of cubes, prisms, pyramids,\ncones and cylinders.\n\nActivities:\n1. Constructing a Cylinder: For my first activity, I will have students analyze the\nproperties of a cylinder by manipulating a roll of paper towels. This will be a very\ninnovative activity because it represents the true height of a cylinder. One of the\nbiggest discrepancies in geometry classes is the idea that no one seems to know\nwhere the formula for surface area of a cylinder comes from. With the paper\ntowels, I can literally unravel a single towel and show it to my class. I will ask\nthem first what new shape I have created by opening up the cylinder. Next, I will\nask them what the base and the height are for the new quadrilateral I created.\nWhile the height is simply h, the base can be represented by a variation of a circle.\n\n## More specifically, the base is represented by circumference of the circle at the\n\nbase. This will help lead us into the surface area exploration. Unfortunately, the\nsame type of manipulation cannot occur for the cone.\n2. Surface area/ volume: The surface area and volume activity will be the exact\nsame activity we had done 2 days earlier, except now the students are measuring\ncylinders and cones. Some of the items we will have are ice cream cones, cone\ncups, funnels, tops of soda bottles, soup cans, soda cans, cylindrical boxes, coffee\ncans and cups. Since the formulas have already been given, the actual process of\nfinding all the surface areas and volumes wont be difficult, but it will provide the\nstudents with practice and more real life application.\nKey Content Outline:\nI. Properties of a circle in relation to three dimensions\nII. Properties of a Cylinder\nIII. Properties of a Cone\nIV. Surface area of a Cylinder\nV. Surface area of a Cone\nVI. Volume of a Cylinder\nVII. Volume of a Cone\nThis lesson is intended to provide another dimension to our students knowledge of\nthree dimensional shapes. While cubes and pyramids were harder to find in real\nlife applications, cylinders and cones are found almost anywhere. Now with this\nknowledge, we can cover the last three dimensional shape that utilizes a circle; a\nsphere. This lesson provided some excellent strategies that will be taken over\nwhen we talk about spheres as well.\nResources:\nNo technology resources\nWe will need handouts, a paper towel roll, household items, rulers and charts\n\n## Day 6: Spheres (Major lesson component)\n\nClass Description\nThis is a 10th grade Geometry class. It is simulated by 8\nstudents, 4 of which are male and the other four, female. Though\nthe classroom is quite large and the number of students is quite\nsmall in comparison, I will have them sitting in groups of four\nin the front of the classroom. Each table will have 2 males and\n2 females. Since Jessie and Lydia are my higher level students\n(in mathematics), I will put them at separate tables.\nUnit Title\n\nLesson\nTopic\n\n3-D shapes\nSpheres\nand\napplications\n\nType of\nLesson\nDevelopmental\nLesson\n\nMD State Curriculum\nStandard/ MD Common Core\nState Standard\nApply geometric concepts in modeling\nsituations\n1. Use geometric shapes, their measures, and their\nproperties to describe objects (e.g., modeling a tree\ntrunk or a human torso as a cylinder).\n2. Apply concepts of density based on area and volume\nin modeling situations (e.g., persons per square mile,\nBTUs per cubic foot).\n\n## Judges Prior Knowledge (How do you know students are ready to\n\nlearn the content in this lesson?)\nIn order to understand the significance of finding the surface\narea and volume of a sphere, students must have a vast\nunderstanding of how to find the volume and surface area of\ncubes, pyramids and rectangular prisms. With time constraints, I\nhave elected to only review cubes and rectangular prisms.\nPyramids take a little longer to compute. Also, even though this\nwill be reviewed in lessons before this one, a mastery of\nworking with 2-D shapes and finding their area is crucial to\nunderstanding the shift into 3- dimensions.\n\nLesson Objective(s):\nObjective 1 Students will be able to analyze the major\ncharacteristics of a sphere in order calculate its volume in\nreal world applications.\nObjective 2 Students will manipulate an orange in order to\n\n## develop the surface area of a sphere.\n\nAssessment(s):\nAssessment for Objective 1 PowerPoint Question\nThe distance from the center of Spalding NBA basketball to any\npoint on the ball itself is 3inches. How much air should go\ninto the basketball to fill it up?\nIs this a formative or summative assessment? Formative\nWould you characterize this assessment as a traditional or\nperformance assessment? Performance Assessment\nWhy did you select this assessment strategy to measure student\nlearning? It is a direct way to test the knowledge the students\nwere just given with a real world application.\nAssessment for Objective 2 Orange discovery worksheet\nIs this a formative or summative assessment? Formative\nWould you characterize this assessment as a traditional or\nperformance assessment? Performance Assessment\nWhy did you select this assessment strategy to measure student\nlearning? This hands on activity requires a step by step process\nof discovering the surface area of a sphere. The worksheet\nchronologically takes the students through the thought process\nthey should have when completing this activity. It will be a\ngreat way to make sure they are on track.\n\n## Materials Needed for Lesson\n\nOranges\nPowerPoint slides\nNapkins\n\nNo IEP students in this particular lesson\nIncorporation of Technology (if appropriate)\n*If you are using a website, type in the website citation.\nPowerPoint slides as a guide\n\nTeacher\nKey Questions\nSo a football is a\nsphere right? Can\nanyone give me some\nexamples of spheres?\nDrill/Motivational\nActivity\n\nKey Questions\nWhat makes a cube\ndifferent from a\nrectangular prism? Why\nis its surface area and\nvolume so easy to find?\n\n## Does anyone know why\n\nthe surface area is\nsquared and the volume\nis cubed? Shouldnt\nthey both be cubed\nsince we are in 3dimennsions?\nTransition\n\nLesson Development\nStudents\nAnticipated\nResponses?\nNo!! Balls:\nsoccerball,\ntennisball,\nbaseball etc.\n\nTime\n5 minutes\n\n## See Drill sheet\n\nAnticipated\nResponses?\nSince a cube is\nthe volume and\nsurface area are\ncalculated from\nonly one\nmeasurement. A\nrectangular prism\ncan have up to\nthree different\nsets of dimensions\nto work with.\nNo, surface area is\nsquared because\nonly two things are\nmultiplied at a\ntime. Volume is\ncubed because it is\nbased on\nmultiplication of\n\n10 minutes\n\nthree dimensions.\nActivity 1\n\nKey Questions\n\nSo now that\neveryone seems to\nfeel comfortable\nworking with the\neasier 3-D shapes,\nlets move into a\nmore complex figure\nin spheres.\n\n## What do we call r, the\n\ndistance from the\ncenter of the sphere to Lecture:\nany point on the\nInformation on\nsphere?\nspheres\nHow many faces does a\nsphere have?\n\nAnticipated\nResponses?\n\n## Can anyone show me a\n\nAre there any others?\n\nNone!!\nDoes anyone know or\nremember the volume of\na sphere?\n\n## Students can draw\n\nany line from the\ncenter of the\nsphere to any point\nWhy do you think we use on the sphere\npi and r in our\nitself. Therefore\nequation for volume?\nthere are infinite\nlines that students\ncan give me.\nNo.\nTeacher response:\nAssessment 1: The\nThe volume of a\ndistance from the\nsphere, while\ncenter of Spalding NBA\ndifficult and very\nbasketball to any point arbitrary is 4/3\non the ball itself is\nr3.\n3inches. How much air\nshould go into the\nBecause its based\non a circle.\nup?\nTeacher response:\n\nThats good to\nSurface area is measure note. The volume\nin squares. Why again?\nhas a lot to do\nwith the\naccumulation of\ncircles. When we\nget to surface\narea, we will talk\nTransition\nthat.\n36. There should\nbe no difficulty\nseeing as though it\nis simply a direct\nimplementation of\nthe equation.\n\nActivity 2\n\nKey Questions\nWhat is the area of\neach great circle you\ncreated?\n\nBecause we are\nmultiplying two\nthings at a time.\nEven though it is\nin 3-dimmensions,\nit is based on the\nnumber of items\nbeing multiplies.\n\n## The surface area\n\nof a sphere is not\nonly difficult\nHow many great circles\nbecause it does not\ndid you fill with your\norange peel?\npatterns as the\nother shapes, but\nWhat is the surface\nin most cases,\narea of a sphere being\nstudents will\nrepresented by in this\nforget the equation\nactivity?\nfor surface area\nmore often than\nSo, if you were able to that of volume.\nfill four great\nHopefully this next\ncircles, each of which\nactivity will spark\nhaving area r2, then\nyou interest and\ngive you a\nthe surface area of a\nmemorable\n\n10 minutes\n\nsphere?\n\nexperience.\n\nTransition\nDiscovering the\nsurface area of a\nsphere\nAnticipated\nResponses?\n\nKey Questions\n\nr2\n\n## Before todays lesson,\n\ndid you have any idea\nwhat the surface area\nof a sphere is?\n\n## When you leave class\n\nthough, would you be\nable to explain to\nsomeone what the\nsurface area of a\nsphere means to you?\n\n## The surface area of\n\na sphere is being\nrepresented by the\nnumber of filled\ncircles.\n4r2\n\nSummary/Closure/Revisit So to wrap up\nObjective\ntodays lesson lets\nimportance of\nvisualization and\nwhy it helps to\nunderstand topics\nAnticipated\nResponses?\nSafety Valve\n\nNo, in fact, I\ndont even remember\nThe volume has\nalways been\n\nembedded in my\nbrain. However,\nwith todays\nactivity, I not\nonly feel confident\nusing the equation,\nI feel equally as\ncomfortable\nexplaining it to\nsomeone else.\nSee Exit Ticket w/\nIn todays lesson,\nwe completed both\nobjectives relating\nto the surface area\nand volume of a\nsphere. The\nimportant\ninformation to take\naway is the usage\nof the formulas and\nsome basic\nspheres. These will\ncome back to haunt\nyou later, so do\nnot forget the\nformulas!\nWith a 25 minute\nlesson, yet alone a\nmath lesson, it is\nreally hard to plan\nfor a safety valve.\nHowever, with a bit\nof extra time, I\ncould provide more\nreal world examples\nto drive the\nconcept home for\nthe students.\nReflection on assessment Assume that after you have taught\nthis lesson and assessed student learning you find that students\n\ndid not meet the objective(s). How would you plan future\ninstruction on this lessons content and skills to ensure\nstudent mastery and application?\nIf I were to be informed, through assessment, that my lesson did\nnot coincide with the objectives, regardless of my time frame, I\nneed to find more time to discuss it. Seeing as though my unit\nis 8 days with the initial assumption of 7, I may have that\nextra day to go a little deeper in class. I would however need\nto address this idea ASAP because once this unit is over, it\nwill be very hard to come back to it. We absolutely cannot skip\nit because of its importance. Also, if I wanted to keep the\norange activity, because it worked so well, I can possibly cut\ndown on some lecture and give the students more hands on\nopportunity. That is essentially what Common Core wants in the\nlong run anyway. To ensure students have mastered this concept,\nI will make sure that surface area and volume are hit the\nhardest.\n\nDay 7\nCavalieris Principle\n\nDay 7 Objectives:\n1. Students will be able to manipulate 3-D figures in order to interpret the cross\nsections that they create.\n2. Students will be able to interpret cross sections in order to calculate the volume\nof 3-D shapes using Cavalieris principle\nAssessments:\n1. To introduce the idea of cross sections, I will be leading the students in an\nactivity. A worksheet will be provided to guide the students in their findings.\n2. A practice classwork will also be given for cross sections to ensure students can\nfind cross sections of all 3-D shapes we have discussed thus far (cubes, prisms,\n3. Another exploration worksheet will be given in their discovery of Cavalieris\nprinciple. Even though the activity will cover only spheres, the students will use\nthese worksheets to make predictions of the effects on other 3-D solids.\n4. After the lesson, I will provide an exit ticket that will assess students\nknowledge of cross sections and ensure they are able to apply Cavalieris principle\nto any 3-D solid.\n5. For homework, the students will begin studying for their review the next class.\nIn their studying, they will look over the material taught today and hopefully\npractice it.\n\nActivities:\n1. Cross sections: Before the activity starts, students already have a basic idea of\nwhat cross sections are. Unfortunately though, it is very hard to visualize a cross\nsection of a 3-D solid without actually cutting that solid open yourself. My activity\naims to fill this gap in my students understanding. Using Play-Doh, I will have my\nstudents mold a cone and a cylinder (to the best of their abilities). Using a piece of\nfloss, they will cut their Play-Doh as directed. Their first attempt will be to cut\neach solid horizontally, thus creating similar solids. What the students should\nnotice is that not only are you creating a smaller version of the original solid, but\nyou are also creating 2-D faces from where you cut. These should be circles. This\nidea should also take students back to one of our original lessons where we\nmanipulated three dimensional shapes to graph in two dimensions. Afterwards, I\n\nwill have the students put their 3-D solids back together ( easy with Play-Doh) and\ntry cutting from a different angle. It will be essential to note that this cut will not be\nparallel to the base. With this cut, students will also note that the 2-D shape they\nhave now created is an ellipse, not a circle. The students will continue this process\nfor cubes, prisms and pyramids as well. Then we will look at spheres. For spheres,\nwhich differ from all other 3-D solids, its cross sections are always a circle. This\nwill also be an important idea for the lesson.\n2. Cavalieris principle: The sphere will prove to be a great last example for cross\nsections. As most classes do, I will be introducing Cavalieris principle through the\nmanipulation of spheres. One of the greatest proofs in 3-D geometry involves the\ndiscovery of the volume of a sphere. In this activity, that will be very similar to the\nlast, students will again be finding cross sections, but comparing it to cross sections\nof other 3-D shapes. Cavalieris principle is as follows: If two solids of equal\nheight are cut by a horizontal parallel equidistant from each of the bases, those\nobjects will have the same volume. With that said students can simply make two\nsolids with Ply-Doh, cut both of them equidistant from the bases using floss and\nthen measure the volume of each new solid. They will be the same.\n\n## Key Content Outline:\n\nI. Cross sections\n- Horizontal\n- Diagonal\nII. Cavalieris principle\nThis lesson is intended to relate the students to something they have most likely\nplayed with before; Play-Doh. In one instance, they are doing something fun and\nexciting, but they are also doing mathematics. This material is necessary for the\nstudy session and exam.\nResources:\nNo technology resources\nWe will need Play-Do, handouts and floss\n\nDay 8\nReview\n\nDay 8 Objectives:\n1. Students will be able to utilize their knowledge of 3-D shapes and applications\nin order to compete in a game of Jeopardy.\nAssessments:\n1. A game of Jeopardy will be given to assess students knowledge of cubes,\nprisms, pyramids, cylinders, cones and spheres and their respective applications\n(I.e surface area, volume, dimension in graphing, Cavalieris principle, etc.)\nActivities\n1. Students will compete in a game of Jeopardy in class. As in the common known\ngame show Jeopardy, questions will be given for different point values. The five\ncategories that will be questioned are 2-D shapes in a 3-D world, Cylinders,\ncones and spheres, Cubes, prism and pyramids, Surface Area, and Volume.\nIn regards to the point system, each category will be broken up into point values of\n200,400,600,800 and 1000 respectively. The question topics will be as follows\n2-D shapes Cylinders,\nCubes,\nSurface area Volume\nin a 3-D\ncones and\nprisms and\nworld\nspheres\npyramids\n200\nCube in 2-D Properties\nProperties\nSurface area Def. of\nof a cone\nof a\nof a sphere\nCavalieris\npyramid\nprinciple\n400\nLooking at\nRepresents A jack in\nShortcut to\nVolume of a\n2-D images funnel\nthe box is\nfinding the\nrectangle\nand deciding\nan example surface area and cube\nwhich 3-D\nof this\nof a pyramid\nimages they\nrepresent\n600\nExplain the Properties\nProperties\nApplication Cavalieris\nthird\nof a cylinder of a prism\nto prisms\nprinciple\ndimension\nwas\noriginally\nproven for a\n\n800\n\n1000\n\nGraph a\nrectangular\nprism with\n1x2x3 as the\ndimensions\nGraph a\nsphere with\n\nCross\nCross\nsection is an section of\noval\ncubes and\nrectangles\n\nApplication\nto cylinders\n\nCross\nsection is a\ncircle\nregardless\nof how you\ncut it\n\nApplication\nto cones\n\nCross\nsection of a\npyramid\n\nsphere\nApplication\nfor cones\n\nIf the height\nof a cone is\n4 inches and\nthe height of\na cylinder is\n4 inches, if I\ncreate a\nparallel, 3\ninches from\nthe base, and\nI know the\nvolume of\nthe cylinder\nis 12, what\nis the\nvolume of\nthe cone?\n\n## Key Content Outline:\n\n1. Cubes, Cylinders, Cones, Pyramids, Prisms and Spheres\n2. Surface area\n3. Volume\n4. 2-D and 3-D graphing\n5. Cavaliers principle\nThe intent of this lesson was to prepare my students for the exam next class. In\naddition though, it served as a fun and innovative way to help them remember all\nthe material.\nResources:\n\n## We will need the interactive PowerPoint that utilizes Jeopardy\n\nDay 9\nExam\n\nDay 8 Objectives:\n1. Students will be able to apply their knowledge of 3-D shapes and their\napplications in order to complete an exam\nAssessments:\n1. Students will be given their unit exam today. A few sample questions from each\ntopic are as follows. There are also many forms of questions on todays exam.\nThey include multiple choice, fill in the blank, calculation, matching, graphing and\nshort responses\n2-D objects in a 3-D world\n-(Fill in the blank)If I were to look at a cube from any angle, what two\ndimensional shape should I see?\n-(Graphing)Graph a rectangular prism on the grid provided. Its dimensions are\n2x3x4.\n- (Multiple choice) Which 3-D solids can you manipulate so that at least one side\nis a circle?\na. Sphere\nb. Cone\nc. Cylinder\nd. a and b\ne. a,b and c\nCubes, Pyramids, Prisms, Cones, Spheres and Cylinders\n-(Multiple choice) Which 3-D solid could also be another?\na. Rectangular prism\nb. Cube\nc. Cone\nd. Sphere\ne. None of the above\n- (Short response)What makes pyramids different than both cubes and prisms,\neven though we put them all in the same category?\n- (Matching)Match the following to its appropriate 2-D shape( there can be more\nthan one for each)\n\n1.Prism\n2.Cube\n3.Sphere\n4. Cylinder\n5.Cone\n6. Pyramid\n\na. Square\nb. Rectange\nc. Circle\nd. Triangle\n\nSurface area\n-(Short response)Surface area for cones, cylinders and sphere include a pi. Why?\n- (Fill in the blank)Based on the orange activity, we discovered that our orange\npeel filled _______ circles. The area of each circle was ______.This meant that our\nsurface area was going to be ___________\n- (calculation)Find the surface area of a rectangular prism with dimensions 4x8x3\nVolume\n- (Calculation) The volume of a standard MLB baseball is about 32/ 3 inches3.\nUsing the formula for the volume of a sphere, what is the radius of a standard MLB\nbaseball?\n- (Multiple choice) Whos theorem says that if we have two three dimensional\nsolids that have the same height, and we draw two parallels equidistant from the\nbases, they will have the same volume?\na. Pythagoras\nb. Euclid\nc. Cavalieri\nd. Einstein\n- (Fill in the blank) How much water can I put in a box if my boxes dimensions\nare 5x8x10_______\nActivities\n1. Students will take an exam that will cover 3-D shapes, surface area, volume,\ngraphing and Cavalieris principle.\nKey Content Outline:\n1. Cubes, Cylinders, Cones, Pyramids, Prisms and Spheres\n2. Surface area\n3. Volume\n4. 2-D and 3-D graphing\n5. Cavaliers principle\n\nThe intent of this lesson is to test the knowledge of my students in their ability to\napply formulas of 3-D solids to real world applications\nResources:\nNo technology resources\nWe only need the test itself"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9070546,"math_prob":0.7381651,"size":29137,"snap":"2021-04-2021-17","text_gpt3_token_len":6500,"char_repetition_ratio":0.15587135,"word_repetition_ratio":0.048083466,"special_character_ratio":0.19020489,"punctuation_ratio":0.1224605,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9564898,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-16T13:14:57Z\",\"WARC-Record-ID\":\"<urn:uuid:2b2ae300-4395-4efb-bbad-42e5cc2e6431>\",\"Content-Length\":\"565242\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:71d48654-7054-4280-b8b0-0e8f5f8114d3>\",\"WARC-Concurrent-To\":\"<urn:uuid:0780a3d8-e7bc-49d5-8c13-39a127967d84>\",\"WARC-IP-Address\":\"199.232.66.152\",\"WARC-Target-URI\":\"https://ru.scribd.com/document/249352659/Unit-Planscribd\",\"WARC-Payload-Digest\":\"sha1:KOXXO67TOPBFLOQHNOZ7YW2S4YY3CZQ6\",\"WARC-Block-Digest\":\"sha1:ABMWSCDJMHFJ6BYC4P6LR52YW4JWQ4GH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703506640.22_warc_CC-MAIN-20210116104719-20210116134719-00081.warc.gz\"}"} |
https://afternoons-delight.com/qa/quick-answer-what-is-the-principle-of-efficiency.html | [
"",
null,
"# Quick Answer: What Is The Principle Of Efficiency?\n\n## What is the formula for efficiency?\n\nEfficiency is often measured as the ratio of useful output to total input, which can be expressed with the mathematical formula r=P/C, where P is the amount of useful output (“product”) produced per the amount C (“cost”) of resources consumed..\n\n## What is an example of allocative efficiency?\n\nAllocative efficiency means that the particular mix of goods a society produces represents the combination that society most desires. For example, often a society with a younger population has a preference for production of education, over production of health care.\n\n## What are the two types of efficiency?\n\nProductive efficiency and allocative efficiency are two concepts achieved in the long run in a perfectly competitive market. In fact, these two types of efficiency are the reason we call it a perfectly competitive market.\n\n## What is intertemporal efficiency?\n\nIn deriving these conditions, the paper extends the notion of efficiency to an intertemporal Pareto-optimal concept requiring the maximization of the ith individual’s utility at a point of time subject to the constancy of his utility in all future periods and that of all other individuals during the relevant time span.\n\n## What is known as the efficiency of doing work?\n\nBy working efficiently, more can be produced with the same amount of input (resources)(1). In short, achieving more for lower costs, a higher return and less pressure. Efficiency means ‘doing things in the right way’. 2. Two sorts of efficiency are often referred to, namely static efficiency and dynamic efficiency.\n\n## What do economists mean by efficiency?\n\nDefinition. Economic efficiency is a broad term typically used in microeconomics in order to denote the state of best possible operation of a product or service market. Economic efficiency assumes minimum cost for the production of a good or service, maximum output, and maximum surplus from the operation of the market.\n\n## What creates efficiency?\n\nEfficiency requires reducing the number of unnecessary resources used to produce a given output including personal time and energy. It is a measurable concept that can be determined using the ratio of useful output to total input.\n\n## What is simple machine efficiency?\n\nThe efficiency output of a machine is simply the output work divided by the input work, and is usually multiplied by 100 so that it is expressed as a percent. % efficiency=WoWi×100. Look back at the pictures of the simple machines and think about which would have the highest efficiency.\n\n## Why is efficiency less than 1?\n\nSince a machine does not contain a source of energy, nor can it store energy, from conservation of energy the power output of a machine can never be greater than its input, so the efficiency can never be greater than 1.\n\n## How can I live effectively?\n\nHere are 101 ways to live your life to the fullest:Live every day on a fresh new start. … Be true to who you are. … Quit complaining. … Be proactive. … Rather than think “what if,” think “next time.” Don’t think about the things you can’t change. … Focus on WHAT vs. … Create your own opportunities. … Live consciously each day.More items…\n\n## What is efficiency with example?\n\nEfficiency is defined as the ability to produce something with a minimum amount of effort. An example of efficiency is a reduction in the number of workers needed to make a car. noun.\n\n## What is the unit of efficiency?\n\nThe efficiency is the energy output, divided by the energy input, and expressed as a percentage. A perfect process would have an efficiency of 100%. Wout = the work or energy produced by a process. Units are Joules (J).\n\n## Can you improve work and increase efficiency?\n\nPeople who consistently accomplish their goals by improving work efficiency do so by creating sustainable habits. Develop a routine that puts you in the best possible state to be productive at work. … When you create a routine that makes you feel happy, healthy and clear-minded, your work efficiency will skyrocket.\n\n## Why is efficiency important in the workplace?\n\nEfficiency at the workplace is of utmost importance. It is the key to achieve goals and results and it is the best way to get our projects and tasks done. When we are efficient, we learn how to prioritize tasks and how to delegate those that can be done by somebody else under our supervision.\n\n## What is production efficiency?\n\nProduction efficiency is an economic term describing a level in which an economy or entity can no longer produce additional amounts of a good without lowering the production level of another product. … Productive efficiency similarly means that an entity is operating at maximum capacity.\n\n## How do you calculate work?\n\nWork can be calculated with the equation: Work = Force × Distance. The SI unit for work is the joule (J), or Newton • meter (N • m). One joule equals the amount of work that is done when 1 N of force moves an object over a distance of 1 m.\n\n## What are the types of efficiency?\n\nWhen the term efficiency is used in the field of law and economics, it generally refers to the so-called economic efficiency, which can be subdivided into two types: productive or technical efficiency and allocative efficiency. These categories together form the overall economic efficiency (Coelli et al.\n\n## Can machines be 100% efficient?\n\nIn other words, no machine can be more than 100% efficient. Machines cannot multiply energy or work input. … If a machine were 100% efficient then it can’t have any energy losses to friction, so no friction can be present.\n\n## Is command economy good or bad?\n\nCommand economy advantages include low levels of inequality and unemployment and the common good replacing profit as the primary incentive of production. Command economy disadvantages include lack of competition and lack of efficiency.\n\n## What is work formula?\n\nWe can calculate work by multiplying the force by the movement of the object. W = F × d. Unit. The SI unit of work is the joule (J)\n\n## What is the formula for calculating energy efficiency?\n\nEnergy efficiency is calculated by dividing the energy obtained (useful energy or energy output) by the initial energy (energy input). For example, a refrigerator has an energy efficiency of 20 to 50%, an incandescent bulb about 5%, a LED lamp over 30%, and a wind turbine 59% at most."
]
| [
null,
"https://mc.yandex.ru/watch/74507773",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.92918205,"math_prob":0.8471171,"size":7036,"snap":"2021-21-2021-25","text_gpt3_token_len":1420,"char_repetition_ratio":0.19553469,"word_repetition_ratio":0.07394958,"special_character_ratio":0.20750426,"punctuation_ratio":0.10421456,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97285545,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-08T08:34:41Z\",\"WARC-Record-ID\":\"<urn:uuid:57b5c8cd-bc4d-47a0-b9af-f33050e11a8b>\",\"Content-Length\":\"43340\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:02720481-0cb8-460d-a4f0-0dc6a95a8869>\",\"WARC-Concurrent-To\":\"<urn:uuid:611d7848-6e27-4e00-a761-94e8ba566dcd>\",\"WARC-IP-Address\":\"193.200.75.42\",\"WARC-Target-URI\":\"https://afternoons-delight.com/qa/quick-answer-what-is-the-principle-of-efficiency.html\",\"WARC-Payload-Digest\":\"sha1:RGRLCYUOUWCNVWNIVYD2IAJBWNT2AU6H\",\"WARC-Block-Digest\":\"sha1:LGN3JJ4XCR4N2KCIIJTZVQELYZ4FZVOK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988850.21_warc_CC-MAIN-20210508061546-20210508091546-00542.warc.gz\"}"} |
https://answers.everydaycalculation.com/compare-fractions/42-3-and-25-24 | [
"Solutions by everydaycalculation.com\n\n## Compare 42/3 and 25/24\n\n1st number: 14 0/3, 2nd number: 1 1/24\n\n42/3 is greater than 25/24\n\n#### Steps for comparing fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 3 and 24 is 24\n\nNext, find the equivalent fraction of both fractional numbers with denominator 24\n2. For the 1st fraction, since 3 × 8 = 24,\n42/3 = 42 × 8/3 × 8 = 336/24\n3. Likewise, for the 2nd fraction, since 24 × 1 = 24,\n25/24 = 25 × 1/24 × 1 = 25/24\n4. Since the denominators are now the same, the fraction with the bigger numerator is the greater fraction\n5. 336/24 > 25/24 or 42/3 > 25/24\n\nMathStep (Works offline)",
null,
"Download our mobile app and learn to work with fractions in your own time:"
]
| [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.84360904,"math_prob":0.99565834,"size":880,"snap":"2021-31-2021-39","text_gpt3_token_len":328,"char_repetition_ratio":0.2089041,"word_repetition_ratio":0.0,"special_character_ratio":0.4431818,"punctuation_ratio":0.07035176,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9940321,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-05T18:53:08Z\",\"WARC-Record-ID\":\"<urn:uuid:6572957e-d166-4731-ab82-80b33aff024f>\",\"Content-Length\":\"7799\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:291415a7-55c1-4f72-adef-217aa09a1fa0>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c92ccf2-fd09-45fd-b194-88363e11419b>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/compare-fractions/42-3-and-25-24\",\"WARC-Payload-Digest\":\"sha1:BNEGPLEPS4YKRUJZ4WXDSMKVJWIPLIJI\",\"WARC-Block-Digest\":\"sha1:KAXT5SBAZ2AFNLTDP44F5WUQFMGSRR6Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046156141.29_warc_CC-MAIN-20210805161906-20210805191906-00256.warc.gz\"}"} |
https://online-unit-converter.com/length/convert-feet-to-meters/0-0314-ft-to-m/ | [
"# 0.0314 FT to M CONVERTER. How many METERS are in 0.0314 FEET?\n\n## 0.0314 FT to M\n\nThe question “What is 0.0314 ft to m?” is the same as “How many meters are in 0.0314 ft?” or “Convert 0.0314 feet to meters” or “What is 0.0314 feet to meters?” or “0.0314 feet to m”. Read on to find free ft to meter converter and learn how to convert 0.0314 ft to m. You’ll also learn how to convert 0.0314 ft to m.\n\nAnswer: There are 0.00957072 m in 0.0314 ft.\n\nAlternatively, you can say “0.0314 ft equals to 0.00957072 m” or “0.0314 ft = 0.00957072 m” or “0.0314 feet is 0.00957072 meters”.\n\n## feet to meter conversion formula\n\nA meter is equal to 3.280839895 feet. A foot equals 0.3048 meters.\nTo convert 0.0314 feet to meters you can use one of the formulas:\n\nFormula 1\nMultiply 0.0314 ft by 0.3048.\n0.0314 * 0.3048 = 0.00957072 m.\n\nFormula 2\nDivide 0.0314 ft by 3.280839895.\n0.0314 / 3.280839895 = 0.00957072 m.\n\nHint: no need to use a formula. Use our free feet to meters converter.\n\n## Alternative spelling of 0.0314 ft to m\n\nMany of our visitors spell feet and meters differently. Below we provide all possible spelling options.\n\n• Spelling options with “feet”: 0.0314 feet to m, 0.0314 foot to m, 0.0314 feet to meters, 0.0314 foot to meters, 0.0314 feet in m, 0.0314 foot in m, 0.0314 feet in meters, 0.0314 foot in meters,\n• Spelling options with “ft”: 0.0314 ft to m, 0.0314 ft to meter, 0.0314 ft to meters, 0.0314 ft in m, 0.0314 ft in meter, 0.0314 ft in meters,\n• Spelling options with “in”: 0.0314 ft in m, 0.0314 ft in meter, 0.0314 ft in meters, 0.0314 ft in m, 0.0314 ft in meter, 0.0314 ft in meters, 0.0314 feet in m, 0.0314 foot in m, 0.0314 feet in meters, 0.0314 foot in meters,\n\n## FAQ on 0.0314 ft to m conversion\n\nHow many meters are in 0.0314 feet?\n\nThere are 0.00957072 meters in 0.0314 feet.\n\n0.0314 ft to m?\n\n0.0314 ft is equal to 0.00957072 m. There are 0.00957072 meters in 0.0314 feet.\n\nWhat is 0.0314 ft to m?\n\n0.0314 ft is 0.00957072 m. You can use a rounded number of 0.00957072 for convenience. In this case you can say that 0.0314 ft is 0.01 m.\n\nHow to convert 0.0314 ft to m?\n\nUse our free feet to meters converter or multiply0.0314 ft by 0.3048.\n0.0314 * 0.3048 = 0.00957072 m"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.80935985,"math_prob":0.98445415,"size":2234,"snap":"2023-40-2023-50","text_gpt3_token_len":846,"char_repetition_ratio":0.30582958,"word_repetition_ratio":0.24578314,"special_character_ratio":0.47985676,"punctuation_ratio":0.22432859,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99241114,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T20:14:05Z\",\"WARC-Record-ID\":\"<urn:uuid:e56edce4-91fa-40f8-968d-b2e811d24a64>\",\"Content-Length\":\"89378\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a7dcf034-2ce2-463e-9b24-7e0ddaac7502>\",\"WARC-Concurrent-To\":\"<urn:uuid:67d5fffc-250e-498d-a494-4e0ccdc9e036>\",\"WARC-IP-Address\":\"35.209.245.121\",\"WARC-Target-URI\":\"https://online-unit-converter.com/length/convert-feet-to-meters/0-0314-ft-to-m/\",\"WARC-Payload-Digest\":\"sha1:DTELY53C2IUZHIEX4NDKCFAB3OAP6GTT\",\"WARC-Block-Digest\":\"sha1:TVVQ62ZX6SFDWS54Y3TNXFSY2P72AKZX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506669.30_warc_CC-MAIN-20230924191454-20230924221454-00181.warc.gz\"}"} |
https://math.stackexchange.com/questions/1005242/primes-for-which-a-polynomial-splits-completely | [
"# Primes for which a polynomial splits completely\n\nSuppose that $f(x) \\in \\mathbb{Z}[x]$ is an irreducible polynomial over $\\mathbb{Q}$. Nevertheless, it may be the case that $f(x)$ is reducible modulo $p$ for some prime $p$. What is the density of primes $p$ for which $f(x)$ splits completely into linear factors over $\\mathbb{F}_p$?\n\nThe density is $\\frac{1}{|G|}$ where $G$ is the Galois group of $f$. In particular it is at least $\\frac{1}{(\\deg f)!}$. This is a corollary of the Frobenius density theorem, a slightly weaker but (apparently; I don't know this first-hand) considerably easier version of the Chebotarev density theorem.\n\nFor example, let $f(x) = \\Phi_n(x)$ be the $n^{th}$ cyclotomic polynomial. The primes for which $f(x)$ splits are precisely the primes congruent to $1 \\bmod n$; the density of these is $\\frac{1}{\\varphi(n)}$ by Dirichlet's theorem on arithmetic progressions. And indeed the Galois group is $(\\mathbb{Z}/n\\mathbb{Z})^{\\times}$, which has size $\\varphi(n)$.\n\n• Hi Qiaochu, one Frobenius density on MO, mathoverflow.net/questions/136025/frobenius-density-theorem I think there have been others on both sites. The phrase seems quite familiar to me, I think I was part of some discussion with it. I remember a statement that every pattern of factoring (partition of the degree) occurs with a natural density of primes. Nov 4, 2014 at 18:21\n• David's answer here mathoverflow.net/questions/16271/… contradicts my memory about all partitions, something interesting I can fiddle with, at least check on computer. Nov 4, 2014 at 18:35\n• @Will: each partition occurs with density proportional to the number of elements of $G$ with that cycle type (acting on the roots of $f$). The issue is that if $G$ is not all of $S_n$ (where $n = \\deg f$) then not all cycle types necessarily appear. Nov 4, 2014 at 18:42\n• Qiaochu, thanks. Types of cycles, good. I checked David's example, he was right, of course. News to me. Nov 4, 2014 at 19:08\n• @Will: the significance of David's example is that the cubic he wrote down has Galois group $A_3 \\cong C_3$ (in fact I think its splitting field embeds into $\\mathbb{Q}(\\zeta_7)$ or something like that), which only has one nontrivial cycle type in it. More generally the Galois group of an irreducible cubic is $A_3$ or $S_3$ depending on whether or not its discriminant is a square, and in the former case there is again only one nontrivial cycle type (and also, by Kronecker-Weber, the splitting field embeds into some cyclotomic field). Nov 5, 2014 at 5:18\n\nA full answer would be Chebotarev density for, well, anything. What I do know are some examples; if $$f(x) = x^3 - x + 1,$$ the density is $(1/6).$ The primes are those which can be expressed as $$p = u^2 + uv + 6 v^2.$$ Plenty more where that came from. Quoting from page 188 of Cox, Theorem 9.12, as well as a 1991(?) paper by Williams and Hudson.\n\nEDIT, Monday Nov. 10; I got very interested in the history of this for my own reasons. One direction about different cycle partitions in the Galois group, considered as a permutation group, is attributed entirely to Dedekind, see COX. So, if, for any prime not dividing the discriminant, the polynomial factors with a certain partition of irreducible factor degrees, then there is at least one element in the Galois group whose cycle description is the same.\n\nThe Frobenius direction, about the same time (1880-1900) is that, once the Galois group has an element with a certain cycle decomposition, then infinitely many primes cause the polynomial to factor with that pattern, this set of primes has both a Dirichlet density and a natural density, and the density is the number of Galois group elements with that pattern divided by the size of the full Galois group. Note that, as Qiaochu mentions, there is only one element (the identity) that has all cycles of length $1,$ the identity element.\n\nOh; most people seem to quote one source about Frobenius Density, a survey by Lenstra and Stevenhagen, especially pages 10-12 in the preprint linked.\n\nThe theorem of Frobenius ...deserves to be better known than it is. For many applications...it suffices to have Frobenius's theorem, which is both older (1880) and easier to prove..\n\nI think this is lovely. Proof in an undergraduate course would be another matter, showing there is a natural density came later."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9216032,"math_prob":0.9943184,"size":4427,"snap":"2022-27-2022-33","text_gpt3_token_len":1190,"char_repetition_ratio":0.109654084,"word_repetition_ratio":0.005524862,"special_character_ratio":0.26699796,"punctuation_ratio":0.12371134,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99763745,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T16:07:06Z\",\"WARC-Record-ID\":\"<urn:uuid:8e1c4f8a-1295-4998-9770-420de421ac59>\",\"Content-Length\":\"240724\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7d9156e6-5b21-4153-8d07-04322ea6f112>\",\"WARC-Concurrent-To\":\"<urn:uuid:d1eb5e0f-b4aa-400a-94d7-c36924027daf>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1005242/primes-for-which-a-polynomial-splits-completely\",\"WARC-Payload-Digest\":\"sha1:OR6AQCRBWIQ4NVJUQDEM5CYNOJL5CJWH\",\"WARC-Block-Digest\":\"sha1:MNDQVTBL3JHH72FZMGJPFSHCFJUWKFL6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103334753.21_warc_CC-MAIN-20220627134424-20220627164424-00590.warc.gz\"}"} |
http://www.scholarpedia.org/article/Taylor-Couette_flow | [
"# Taylor-Couette flow\n\nPost-publication activity\n\nCurator: Richard Lueptow\n\nDr. Richard Lueptow accepted the invitation on 22 January 2009 (self-imposed deadline: 22 August 2009).",
null,
"Figure 1: Schematic of counter-rotating axisymmetric vortices of Taylor-Couette flow (© 2000, Mike Minbiole and Richard M. Lueptow)\n\nTaylor-Couette flow is the name of a fluid flow and the related instability that occurs in the annulus between differentially rotating concentric cylinders, most often with the inner cylinder rotating and the outer cylinder fixed, when the rotation rate exceeds a critical value. The stable base flow was used in the 19th century to test the fundamental Newtonian stress assumption in the Navier-Stokes equations. It has long been used as a means to measure fluid viscosity. More importantly, the flow instabilities that arise in Taylor-Couette flow (toroidal Taylor vortices stacked in the annulus) and the related theoretical framework to describe these instabilities have provided valuable insight into the commonly used no-slip boundary condition, linear stability analysis, low dimension bifurcation phenomena, chaotic advection, absolute and convective instabilities, and a host of other fundamental physical phenomenon and analytic methods. The flow is frequently studied because it is easy to produce in small closed systems, demonstrates a fundamental fluid flow phenomenon that can be mathematically predicted from basic principles, and is simple and beautiful to observe.\n\n## History\n\nThe geometric simplicity of the flow of a fluid between differentially rotating concentric cylinders has attracted the interest of scientists for centuries. Sir Isaac Newton used it to describe the circular motion of fluids in his Principia in 1687 (Newton 1946). The pioneering theoretical fluid dynamicist George G. Stokes likewise considered this simple flow in 1848 noting the difficulty in the boundary conditions at the walls of the cylinder, now taken for granted as the no-slip boundary condition. He suggested that \"eddies would be produced,\" if the inner cylinder \"were made to revolve too fast\" (Stokes 1880), a remarkable insight many years before vortices were visualized.\n\nThe development of the Navier-Stokes equations for viscous fluid flow naturally brought about debate on how to best measure fluid viscosity. Henry R. A. Mallock and M. Maurice Couette both independently sought to accomplish this using two differentially rotating concentric cylinders, now known as a Taylor-Couette cell (Mallock 1888, Couette 1890). Couette rotated the outer cylinder keeping the inner cylinder fixed, which is the basis for the modern viscometer, thus avoiding the vortical structure and obtaining an accurate measurement of the viscosity of various fluids. Mallock performed similar experiments to Couette, but also rotated the inner cylinder keeping the outer cylinder fixed. He found anomalous results in this case because Taylor vortices occurred. In fact, Mallock’s experiment prompted Lord Kelvin to write a letter to Lord Rayleigh in 1895 bringing the instability to his attention (Donnelly 1991). While Rayleigh’s eventual analysis in 1916 explained the physical origin of the vortical structure, it was not until 1923, that G. I. Taylor was able to relate theory and experiment for stability in cylindrical Couette flow (Taylor 1923). His investigation was a key development in the modern study of fluid mechanics for three reasons (Donnelly 1991):\n\n• It was taken by many as convincing proof of the no-slip boundary condition wherein the velocity of a particle in contact with a wall moves at the same velocity as the wall. Although this concept has become a fundamental tenet for the study of fluid flow, it was questioned until Taylor used it with such success in his analysis of the stability of Taylor-Couette flow.\n• It offered convincing proof that the Navier-Stokes equations indeed accurately describe the flow of a Newtonian fluid, not just at the base flow level, but at a level that permitted the analysis of secondary flows and instabilities.\n• It was the first successful application of linear stability analysis that accurately predicted experimental results, namely the transition from stable flow to vortical Taylor-Couette flow.\n\n## Fundamentals of Taylor-Couette Flow\n\nIn its simplest form, Taylor-Couette flow arises from the shear flow between a rotating inner cylinder and a concentric, fixed outer cylinder. The stable flow for this geometry is known as cylindrical Couette flow (or sometimes circular Couette flow or rotating Couette flow). As with all Couette-type flows, the flow is driven by the motion of one wall bounding a viscous liquid. Applying the Navier-Stokes equation for an incompressible Newtonian fluid, the exact solution for infinitely long cylinders is of the form [in cylindrical coordinates ($$r\\ ,$$ $$\\theta\\ ,$$ $$z$$)]:\n\n$U=0,\\quad V=Ar+\\frac{B}{r},\\quad W=0,\\quad \\frac{\\partial P}{\\partial r}=\\rho \\frac {V^2}{r}$\n\nwhere $$U\\ ,$$ $$V\\ ,$$ and $$W$$ are the radial, azimuthal, and axial components of velocity, $$P$$ is the pressure, and $$\\rho$$ is the fluid density. $$A$$ and $$B$$ depend on the radius ratio, $$\\eta=r_i/r_o$$ of the inner cylinder radius $$r_i$$ and the outer cylinder radius $$r_o,$$ and the rotational speed of the inner cylinder, $$\\Omega_i\\ ,$$ as\n\n$A=-\\Omega_i\\frac{\\eta^2}{1-\\eta^2}, \\quad B=\\Omega_i\\frac{r_i^2}{1-\\eta^2}$",
null,
"Figure 2: Axisymmetric Taylor vortices visualized using titanium dioxide-coated mica flakes (© 2002, Alp Akonur and Richard M. Lueptow)\n\nCylindrical Couette flow becomes unstable as the rotational speed of the inner cylinder increases resulting in pairs of counter-rotating, axisymmetric, toroidal vortices that fill the annulus superimposed on the Couette flow ( Figure 1). Each pair of vortices has a wavelength of approximately $$2d\\ ,$$ where $$d = r_o-r_i$$ is the gap between the cylinders. The vortices are readily visualized by adding small flakes to the fluid that align with the flow ( Figure 2). As a consequence of the vortices, high-speed fluid near the rotating inner cylinder is carried outward in the outflow regions between vortices, while low-speed fluid near the fixed outer cylinder is carried inward in the inflow regions between vortices, redistributing angular momentum of the fluid in the annulus. The axial and radial velocities related to the Taylor vortices are relatively small, typically only a few percent of the surface speed of the inner cylinder (Wereley and Lueptow 1998).\n\n## Theoretical Background\n\nLord Rayleigh first put forth the inviscid (no viscosity) approach to the instability based on an imbalance of the centrifugal force and pressure gradient force (Rayleigh 1916). In considering meteorological problems such as cyclones having a fluid angular velocity $$\\Omega(r)\\ ,$$ he argued that if the value for $$(r^2\\Omega)^2$$ decreases in the radial direction, as it does for an inner rotating cylinder and a fixed outer cylinder, the flow should be unstable. While the Rayleigh stability criterion describes the underlying physics of the instability, it is not strictly correct. It predicts that regardless of the speed of the inner cylinder, as long as the inner cylinder rotates within a stationary outer cylinder, the flow should be unstable. This is not the case, since viscosity damps the perturbations for low rotational speeds, preventing the vortices from forming.\n\nG. I. Taylor first showed how viscosity stabilizes the flow at low rotational speeds using linear stability analysis (Taylor 1923). The analysis is based on small perturbations of the velocity and pressure fields, expressed as normal modes of the form:\n\n$u=u(r) \\cos(kz)e^{qt}, \\quad v=V+v(r)\\cos(kz)e^{qt}, \\quad w=w(r)\\sin(kz)e^{qt}, \\quad p=P+p(r)\\cos(kz)e^{qt}$\n\nThese expressions include the base flow (noting $$U=W=0$$) and a perturbation including sinusoidal variation of the disturbance in the $$z$$-direction with axial wavenumber $$k\\ ,$$ a growth rate or amplification factor $$q$$ for the disturbance, and amplitudes of the disturbance [$$u(r), v(r), w(r),$$ and $$p(r)$$], which are dependent on the radial position. The wavenumber describes the axial periodicity of the perturbation. Using $$\\sin(kz)$$ for $$w$$ and $$\\cos(kz)$$ for the other perturbations comes about due to the phase relationship between the velocity components for the vortex structure—$$w$$ is zero where the other perturbations are extrema, and vice versa.\n\nSubstituting these expressions into the Navier-Stokes equations followed by linearizing the equations (discarding higher order terms) results in a set of ordinary differential equations. These equations can be transformed into an eigenvalue problem for which the amplification factor is set to zero, corresponding to the onset of the instability. The solution yields the critical wavenumber, $$k_{crit}\\ ,$$ and the critical Taylor number, $$T_{crit}\\ ,$$ a dimensionless number above which the instability occurs. Below the critical Taylor number the flow is stable with no vortical structure; above it is unstable with toroidal vortices shown in Figure 1.\n\nThere are various forms of the Taylor number, though all represent the ratio of centrifugal (or inertial) forces to viscous forces. Above the critical Taylor number, centrifugal forces exceed viscous forces, and the flow becomes unstable. One form for the Taylor number when the inner cylinder rotating within a fixed outer cylinder is\n\n$T=4 Re^2 \\left[\\frac{1-\\eta}{1+\\eta}\\right]$\n\nTable 1: Critical Reynolds number for transition to vortical flow (Recktenwald et al. 1993).\n$$\\eta\\ !$$ ! $$Re_{crit}\\ !$$ ! $$k_{crit}$$\n0.975 260.9 3.13\n0.90 131.6 3.13\n0.80 94.7 3.13\n0.70 79.5 3.14\n0.60 71.7 3.15\n0.50 68.2 3.16\n\nwhere $$Re=\\Omega_ir_id/\\nu$$ is a Reynolds number based on the surface velocity of the inner cylinder as the velocity scale and the gap width as the length scale, with $$\\nu$$ being the kinematic viscosity. The critical Reynolds number, $$Re_{crit}\\ ,$$ corresponding to the critical Taylor number and the associated critical wavenumber, $$k_{crit}\\ ,$$ for the onset of vortices depends on the radius ratio as indicated in Table 1. The critical wavenumber defines the axial spacing of the vortices, or wavelength, $$\\lambda=2\\pi/k_{crit}\\ .$$ Thus, since $$k_{crit}=3.13/d$$ for $$\\eta=0.9\\ ,$$ $$\\lambda\\approx 2d\\ ,$$ indicating that a counter-rotating pair of vortices (one wavelength) has an axial wavelength that is twice the radial gap width. Thus, each vortex tends to be circular (as opposed to elliptical), filling a region that is $$d \\times d\\ ,$$ consistent with experiments.\n\nBoth Rayleigh’s and Taylor’s analyses can be extended to both cylinders rotating, either in the same direction (co-rotating) or in opposite directions (counter-rotating). Rayleigh’s inviscid analysis indicates that the flow has potential to be unstable when $$(r_i^2\\Omega_i)^2 > (r_o^2\\Omega_o)^2\\ ,$$ where $$\\Omega_o$$ is the rotation rate of the outer cylinder. Extending Taylor’s analysis to both cylinders rotating yields another Taylor number, one of the forms of which is\n\n$T=4 Re^2 \\left[1-\\frac{\\mu}{\\eta^2}\\right]\\left[\\frac{1-\\eta}{1+\\eta}\\right]$\n\nwhere $$\\mu=\\Omega_o/\\Omega_i\\ ,$$ is the ratio of the rotation rates of the outer and inner cylinders. For $$\\mu<0\\ ,$$ corresponding to the cylinders rotating in opposite directions, only the region near the inner cylinder is unstable, since the azimuthal velocity profile changes sign at some point in the gap between the two cylinders due to the opposite rotation of the cylinders.\n\n## Higher Order Instabilities",
null,
"Figure 3: Schematic of counter-rotating wavy vortices. Note that the axial flow between wavy vortices is not represented in this idealized version of the flow (© 2000, Mike Minbiole and Richard M. Lueptow)",
null,
"Figure 4: Velocity vectors measured using particle image velocimetry in a meridional ($$r$$-$$\\theta$$) plane showing intra-vortex flow between counter-rotating wavy vortices. Color corresponds to the azimuthal ($$\\theta$$) velocity with red corresponding to the velocity of the inner cylinder on the left and blue corresponding to the velocity of the outer cylinder on the right (for details, see Akonur and Lueptow 2003) (© 2003, Alp Akonur and Richard M. Lueptow)\n\nIncreasing the Taylor number above the critical Taylor number for the case of the inner cylinder rotating and the outer cylinder fixed results in higher order instabilities in which the vortical structure is retained, but the vortices are modified. The first transition is to wavy vortex flow, which is characterized by azimuthal waviness of the vortices as shown schematically in Figure 3. The waves travel around the annulus at a speed that is 30-50% of the surface speed of the inner cylinder, depending on the Taylor number and other conditions (King et al. 1984). The mathematical formulation of the problem for wavy vortices uses perturbations of the form\n\n$v=V+v(r)e^{qt}e^{i(n\\theta+kz)}$\n\nThis form of the perturbation includes azimuthal waviness, where n is an integer number of waves around the annulus. The axial dependence is included in the exponential, equivalent to a $$sin$$ or $$cos$$ term. In theory, the perturbation is the sum of many normal modes (many $$n$$’s and $$k$$’s), but in practice the mode that dominates is the one for which the Taylor number at the stability limit is lowest. The Taylor number for the transition from axisymmetric toroidal Taylor vortices to wavy vortices is not firmly established. For instance, the transition is theoretically predicted to occur at $$T/T_{crit}$$=1.1 for $$\\eta$$=0.85 for infinitely long cylinders, whereas experiments indicate a range of higher values between 1.14 and 1.31 for $$\\eta$$=0.80-0.90, depending on experimental conditions (Serre et al. 2008). The number of azimuthal waves depends on experimental conditions, though it is usually less than 6 or 7 (Coles 1965).\n\nRegions of upward (downward) deformation of a wavy vortex correspond to regions of upward (downward) axial flow. As a result, for wavy vortex flow streamtubes are destroyed leading to chaotic particle paths with intra-vortex mixing, as shown in Figure 4. By contrast, the axisymmetric cellular structure of non-wavy Taylor vortex flow results in a set of nested streamtubes (KAM tori) for each vortex with a dividing invariant streamsurface between adjacent vortices. The only mechanism for transport within a vortex or between vortices is molecular diffusion.",
null,
"Figure 5: Schematic of flow between concentric spheres with counter-rotating axisymmetric Taylor vortices at the equator (© 2000, Mike Minbiole and Richard M. Lueptow)\n\nAt higher Taylor numbers, the wavy vortices transition to modulated wavy vortices, evident upon flow visualization as a slight flattening of the outflow boundary. The transition is most easily detected from spectral analysis of a velocity or reflected light measurement at a single point in the flow. Wavy vortex flow has a single peak at a frequency related to the passage of the azimuthal wave; modulated wavy vortex flow introduces a second spectral peak at a lower frequency related to the modulation. At still higher Taylor numbers, the waviness gives way to turbulence, which raises the spectral level at all frequencies. The vortices become axisymmetric, but the flow is turbulent at small scales. At high enough Taylor number, the turbulent vortices disappear, and the flow is fully turbulent.\n\nThe rotation of the outer cylinder in addition to the inner cylinder results in a variety of other flow regimes for long cylinders: wavy inflow and outflow, wavelets, twisted vortices, and corkscrew regimes for co-rotating cylinders; interpenetrating spirals, wavy interpenetrating spirals, intermittent turbulent spots, and spiral turbulence regimes for counter-rotating cylinders (Andereck et al. 1986). (A map showing different flow regimes as a function of the Reynolds numbers for the inner and outer cylinders, $$R_i=\\Omega_ir_id/\\nu$$ and $$R_o=\\Omega_or_od/\\nu\\ ,$$ respectively, is given in Fig. 1 of this reference; it cannot be reproduced here due to copyright restrictions.) The addition of an axial flow in the annulus or a radial flow through porous cylinders alters the critical Taylor number as well as the wavelength and structure of the vortices. Likewise, the flow is altered by oscillating the inner or outer cylinder axially or azimuthally or by varying the gap between the inner and outer cylinders. The vortical structure is very robust--Taylor vortices can occur in other geometries including between concentric cones and spheres. For example, vortices occur at the equator between an inner rotating sphere and a concentric, stationary outer sphere ( Figure 5.)."
]
| [
null,
"http://www.scholarpedia.org/w/images/thumb/0/0e/Couette_Flow.gif/250px-Couette_Flow.gif",
null,
"http://www.scholarpedia.org/w/images/3/31/TC_visualization.jpg",
null,
"http://www.scholarpedia.org/w/images/thumb/9/94/Wave.fin_McDraw.gif/250px-Wave.fin_McDraw.gif",
null,
"http://www.scholarpedia.org/w/images/thumb/e/e6/TCPosterLowRes.jpg/200px-TCPosterLowRes.jpg",
null,
"http://www.scholarpedia.org/w/images/thumb/a/a8/Sphere.fin_McDraw.gif/250px-Sphere.fin_McDraw.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8673221,"math_prob":0.9903671,"size":20815,"snap":"2020-45-2020-50","text_gpt3_token_len":5032,"char_repetition_ratio":0.15602326,"word_repetition_ratio":0.014993481,"special_character_ratio":0.236368,"punctuation_ratio":0.14300358,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9949177,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T05:51:30Z\",\"WARC-Record-ID\":\"<urn:uuid:49bf6ac8-2734-497f-ac4f-9fe3f071a0e3>\",\"Content-Length\":\"56100\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aa3927c6-5553-4a05-89ab-b665848f9368>\",\"WARC-Concurrent-To\":\"<urn:uuid:39b21a53-4325-4336-aa4e-96a1813efae0>\",\"WARC-IP-Address\":\"173.255.237.117\",\"WARC-Target-URI\":\"http://www.scholarpedia.org/article/Taylor-Couette_flow\",\"WARC-Payload-Digest\":\"sha1:EGUWOES4GUUBO2PTBFELGC45JOJYL3E6\",\"WARC-Block-Digest\":\"sha1:4SFMJQGGOICPLAG3ZRTDVKSLYLFWVF6L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141746320.91_warc_CC-MAIN-20201205044004-20201205074004-00453.warc.gz\"}"} |
https://math.stackexchange.com/questions/31192/is-it-possible-to-solve-any-euclidean-geometry-problem-using-a-computer/31368 | [
"# Is it possible to solve any Euclidean geometry problem using a computer?\n\nBy \"problem\", I mean a high-school type geometry problem.\n\nIf no, is there other set of axioms that allows that? If yes, are there any software that does that?\n\nI did a search, but was not able to find a single program that allows that. It is strange because even if it is impossible to solve any problem, most of the natural problems should be solvable.\n\nThere is a method to decide (true or not) any theorem in Euclid's Elements, by translating it into analytic geometry, or multivariate polynomials over the reals.\n\nOf the philosophical controversies discussed by the Greeks that inspired Euclid to set up his axioms, common notions and definitions (like what an angle is, what really is the action of a 'compass'), most of them are de facto resolved by how operations and variables work in real 2D analytic geometry with the Pythagorean theorem (which has been proven to be proof-power equivalent to Euclid's fifth postulate).\n\nThe conversion works as follows:\n\n• An undefined point is given two variables, an $x$ and $y$ coordinate $(x_1,y_1)$. Any possible distinct point should be named by some other pair $(x_2,y_2)$ (a proof that these two points coincide would show that $x_1=x_2$ and $y_1=y_2$).\n• a circle is defined by center $(x_3,y_3)$ and distance $d_1$ such that $(x-x_3)^2 + (y-y_3)^2 = d_1^2$. If you want a point on that circle, you instantiate this equation with the appropriate $x$ and $y$.\n• a line is defined by, well, two points. Just by stating two points, you have a line.\n• Frankly, I'm not sure how to do angles, but my memory tells me that it is possible to 'deal' with them.\n\nSo now you have a set of polynomial equations in many variables. And all you need is a procedure to find if there is a solution (some set of satisfying numbers or really a nontrivial set of equations) for them.\n\nSuppose you only had points and lines. Then all you'd need is Gaussian elimination. But with circles, you can have arbitrary degree. Groebner basis completion (via Buchberger's algorithm from the mid 70's) is what you can use. Essentially, it does just what you'd expect (but didn't realize you could) on higher degree multivariate polynomials. In analogy with eliminating columns in a matrix, you try to eliminate 'maximal' terms one by one. It makes it easier if you multiply out everything so your equations are all of the form 'a sum of terms' = 0, where a term is a coefficient with a number of variables (possible none) multiplied, e.g. $17x_1^3y_{41}^2d_1^{1729}$. The algorithm assumes an order on these terms, and then follows an analog of the numerical GCD algorithm (coincidentally or not in Euclid).\n\nSo the algorithm tries to 'reduce' as much as possible (eliminating leading terms where it can). If it eliminates so much to the point where it has an equation $0=1$, then you know you have a contradiction, and then your equations have no way of being satisfied, so the polynomials that describe the theorem are inconsistent, so the theorem itself does not hold. Otherwise (and this is the beauty of it), the theorem -does- hold.\n\nIn some sense you could do this by hand for some very small trivial problems (It allows you to find the intersection of a given circle and line for example). Anyway, the details of the algorithm make it a decision procedure (will stop with an answer of 'yes' or 'no').\n\nAs a practical matter, you probably don't want to do this by hand. Groebner basis completion is (depending on what you consider the exact problem) PSPACE-complete, so mostly likely exponential (in the number of variables).\n\nAs to Tarski's result that elementary plane geometry is decidable, yes, that is a classic result (from the 50's). The theorem is the the theory of the reals is decidable, which includes the '$\\lt$' relation. This includes Euclidean geometry, and is a bit more general. Tarski's algorithm is ingenious in that it solved a long outstanding problem, but is not particularly efficient. Collins' 'cylindrical algebraic decomposition' algorithm also from the mid 70's) is much more efficient (if you consider an increase in efficiency going from a stack of exponentials, to simply maybe just doubly exponential; the second is only merely astronomical, the first terribly so). It is of course still less efficient than Buchberger's algorithm. Because of the different fields these came out of, I'm not sure of any research that discusses any explicit comparison of the two.\n\nWhatever the efficiency is, yes, there are a number of actual theorems proved mechanically in this style. A very nontrivial example is Morley's theorem (the trisectors of the angles of a triangle meet in an equilateral triangle) which Zeilberger has given an automated proof of using Maple. And Wu (of the Wu method mentioned, which is similar to Buchberger's algorithm) and his acolytes have more comprehensively proven whole sets of Euclidean theorems.\n\nAn interesting problem would be how to prove Euclidean theorems using Euclid's axioms as is (instead of translating). There has been some recent research by Avigad into that. And I've heard that one can do a translation of Euclid into Clifford algebras that is a bit more 'like' the original Euclid than the analytic approach.\n\n(for anything unexplained, you might try googling or wikipedia with varying degrees of success)\n\nSo maybe that's not satisfying for whatever reason. Maybe there are other axiom systems for which one can natively (that is, synthetically, uninterpreted) have a decision procedure (which means write a computer program to do it. Hilbert's axioms are really a 'fix' of Euclid's; that is, it adds betweenness and some other axioms that Euclid, even in his arduous skepticism, left out. So I consider it essentially the same. Then there's Tarski's axioms. These are only mildly different, but still translate to the analytic version. (I am of the opinion that it wasn't a deliberate connection, and maybe an intellectual coincidence, between these axioms and Tarski's decision procedure for real closed fields.\n\nAs to computer programs, well there's Maple or Mathematica (or pretty much any computer algebra package that implements Groebner basis calculation); you still have to do the translation yourself into polynomials.\n\nOn quite the other hand there are a number of geometry 'editors', that allow you to 'draw' a theorem, allowing you to drag and drop the free parameters as objects, i.e. showing that the bisectors of a triangle meet at a point, allowing you to move all the vertices, showing you the intersection of the pairs of bisectors all meet at the same point. Some of these will allow you to 'prove' your construction (that some points always coincide for instance (Geometer's Sketchpad?). As far as I know, these 'proofs' are not symbolic (like GB calculation) but rather numeric (they notice that the difference among computed values is always less than some very small epsilon).\n\nI believe it is a theorem of Tarski that elementary plane geometry is decidable.\n\n• Right but Tarski's result does not yield a practical algorithm.\n– lhf\nApr 6, 2011 at 1:16\n• @lhf, @GEdgar: Tarski's algorithm is impractical in the sense that , if memory serves, has worst case running time proportional to a stack of exponentials the length of the number of variables (or something similar). The 'practical' algorithm is by Collins the 'Cylindrical Algebraic Decomposition' algorithm, which again from memory, is much more 'practical' being in the complexity class EXPSPACE or 2NEXP. Note that this is a generalization of the Groebner basis completion algorithm. Apr 6, 2011 at 13:50\n\nYou can express the problem using coordinates. You can then use Gröbner basis techniques to try to prove that the expression representing the conclusion is in the ideal generated by the expressions representing the hypotheses. See also Wu's method.\n\n• How would I prove theorems using this method? Apr 5, 2011 at 23:27\n• And also, what the theory has to say about it? Is every problem solvable? Apr 5, 2011 at 23:31\n• For a reference on this sort of thing, you might try the book by Cox, Little and O'Shea, \"Using Algebraic Geometry\". Caution: even for fairly simple-looking problems, the number of variables and equations may get large enough that Grobner basis techniques will require huge amounts of time and memory. To make the problem manageable, it's often necessary to take as much advantage as possible of symmetries to cut down the number of variables and equations. Apr 5, 2011 at 23:35\n• @Artium: The method could be called 'Descartes dream'. An unspecified point would be two new variables $(x_1,y_1)$. A circle would be defined by a point $(x_2, y_2)$ and a distance $d$ by the pythagorean formula. A line would be defined by just have two points $(x_3, y_3)$ and $(x_4, y_4)$. YOur assumptions in a Euclidean theorem would convert incidences to polynomial equaitons. The Buchberger algorithm for computing the Groebner basis of a set of multivariate polynomial equations will then 'simplify' this system. Look up Morley's theorem and Zeilberger for an example. Apr 5, 2011 at 23:39\n• For an example of an actual nontrivial problem solved by these methods, see my paper with Petr Lisonek in ISSAC 2000, \"Metric Invariants of Tetrahedra via Polynomial Elimination\", Proceedings of the 2000 International Symposium on Symbolic and Algebraic Computation, pp. 217-219, dx.doi.org/10.1145/345542.345635 Apr 6, 2011 at 22:29\n\nI just want to comment on one aspect of Mitch's excellent answer (above).\n\nHe wrote: \"Groebner basis completion (via Buchberger's algorithm from the mid 70's) is what you can use. \"\n\nAs a practical matter, Groebner bases do not usually work on geometrical problems. The Dixon resultant is far more effective. (I am speaking of symbolic solution.)\n\nMany examples can be found on my web site. For example, the paper on Apollonius problems at\n\nAs far as I know there is no practical tool which provide automatically a readable proofs of the same kind as high school proofs for many geometry theorems.\n\nAs Mitch pointed there are algebraic methods such as Grobner bases and Wu's method, but they do not provide readable proofs. They have been implemented and are available in software such as: opengeoprover (an open source implementation in java), Predrag Janicic's gclc, geother by Dong Ming Wang (a maple implementation of Wu's method)... Some provers are also available in GeoGebra 5.\n\nThere is also a method which produces proofs which sometimes can be considered readable: The area method by Chou, Gao and Zhang. This method is implemented in open-geo-prover and I have implemented it in Coq.\n\nThere is also an old paper by Gelertner which describes an approach which try to mimic the human proofs (but as far as I know this method is not very efficient): http://aitopics.org/sites/default/files/classic/Feigenbaum_Feldman/Computers_And_Thought-Part_1_Geometry.pdf\n\nThis reply is too long, so I can not add it as a comment. I hope someone will see it.\n\nAfter reading some more on the subject (referenced from answers here), I must say that I am a little disappointed from the approach being used. I had imagined that any high school student can input his homework to a program, and it will output a solution. As I see, this is currently not the case.\n\nI am only an egg, but I imagined that the axioms, assumptions and what is needed to be proved would be described in first order logic, and then some search algorithm would apply some rules of inference and search for the desired goal. This is similar to the way humans do math.\n\nThe problems with the polynomial approaches I see (you are welcomed to explain why I am wrong) are:\n\n1. Previously generated mathematical knowledge is not being used.\n2. Searches are all \"brute force\", a human mathematician usually use intuition and past experience to try different approaches and directions.\n3. The algorithm can answer if something is true or false, but it can not produce a step by step proof (Zeilberger did that in his \"geometry web book\", but was his method different?).\n4. Algorithm requires non trivial tuning for each different theorem.\n\nUsing the search algorithm, previously generated knowledge may be represented as more assumptions and rules of inference. Intuition would be represented by a heuristic function.\n\n• What you're saying here is quite beyond what you originally asked. All the answers given take the approach of and respond to the original question (the answer is yes, there's an algorithm, but it's not efficient, and there' is some software). It seems here you are asking for more than one additional different answer . Is there software a high-school student could use to do the proof -automatically-? (yes, sort of, Geometer's Sketchpad, but the 'proof' is numeric (not just 'look see the picture', but also, 'and is numerically accurate for most configurations)... Apr 7, 2011 at 20:20\n• And an additional question you want answered is, can this be done where the proofs themselves follow what a human would do? And the answer, as with almost any technology, is no, the proofs automatically produced by computer are often quite different from a human's, and usually unreadable. The steps in a Groebner basis 'proof' end up being the polynomial manipulations...those -are- the proof steps. They don't feel at all like geometry ias happening, but that's math. You may be interested in synthetic proofs (those that are more 'Euclidean' but then you should look at Avigad et al. Apr 7, 2011 at 20:24"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9436345,"math_prob":0.9409953,"size":6528,"snap":"2023-40-2023-50","text_gpt3_token_len":1486,"char_repetition_ratio":0.09840588,"word_repetition_ratio":0.0037071363,"special_character_ratio":0.21951593,"punctuation_ratio":0.094884485,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99579924,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T17:40:39Z\",\"WARC-Record-ID\":\"<urn:uuid:4e0e49c2-c9dd-48e1-83c9-e71231d11ffc>\",\"Content-Length\":\"214462\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:44e645a5-d743-4093-8eed-113642b4354a>\",\"WARC-Concurrent-To\":\"<urn:uuid:e3e63124-756e-4051-8dea-20883bada2d1>\",\"WARC-IP-Address\":\"104.18.43.226\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/31192/is-it-possible-to-solve-any-euclidean-geometry-problem-using-a-computer/31368\",\"WARC-Payload-Digest\":\"sha1:7MK3TFDOZ62MXDI3PC3OM2HAZTLWUUVW\",\"WARC-Block-Digest\":\"sha1:4Y3X4MITRK7TOBHPSXPNHYVFLAV7IPQK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100135.11_warc_CC-MAIN-20231129173017-20231129203017-00838.warc.gz\"}"} |
https://openeuphoria.org/wiki/view/updating%20oE%20%20atan2.wc | [
"### updating oE atan2\n\n#### atan2\n\n```include math.e\nnamespace math\npublic function atan2(atom y, atom x)\n```\n\ncalculate the arctangent of a ratio.\n\n##### Parameters:\n1. y : an atom, the numerator of the ratio\n2. x : an atom, the denominator of the ratio\n##### Returns:\n\nAn atom, which is equal to arctan(y/x), except that it can handle zero denominator and is more accurate.\n\n##### Example 1:\n```a = atan2(10.5, 3.1)\n-- a is 1.283713958\n```"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5808058,"math_prob":0.992131,"size":433,"snap":"2020-24-2020-29","text_gpt3_token_len":125,"char_repetition_ratio":0.121212125,"word_repetition_ratio":0.0,"special_character_ratio":0.295612,"punctuation_ratio":0.19587629,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99682856,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-03T21:17:30Z\",\"WARC-Record-ID\":\"<urn:uuid:86315873-03af-4e78-b0a5-fc4c759eefd1>\",\"Content-Length\":\"8045\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:beaf2b3b-01dc-45cb-b61d-01e3fa5d04ea>\",\"WARC-Concurrent-To\":\"<urn:uuid:feb735c4-1ec4-4a4c-a643-efd7305ae2e6>\",\"WARC-IP-Address\":\"23.138.32.173\",\"WARC-Target-URI\":\"https://openeuphoria.org/wiki/view/updating%20oE%20%20atan2.wc\",\"WARC-Payload-Digest\":\"sha1:BKAR2WQDCUSKMQO3KM7AIYT5Z5PYUZJK\",\"WARC-Block-Digest\":\"sha1:SL3VTYVYITBB2XZF5FSISJMTBWAQ4NPY\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655882934.6_warc_CC-MAIN-20200703184459-20200703214459-00356.warc.gz\"}"} |
https://percentage-calculator.net/what-percent-of-x-is-y/what-percent-of-312.5-is-75.php | [
"# What percent of 312.5 is 75?\n\nAnswer: 24 percent of 312.5 is 75\n\n## Fastest method for calculating what percent of 312.5 is 75\n\nAssume the unknown value is 'Y', and 75 of 312.5 can be written as:\n\nY = 75 / 312.5\n\nBy multiplying both numerator and denominator by 100 we will get:\n\nY = 75 / 312.5 x 100 / 100 = 24 / 100\n\nY = 24%\n\nAnswer: 24 percent of 312.5 is 75\n\nIf you want to use a calculator, simply enter 75÷312.5x100 and you will get your answer which is 24\n\nYou may also be interested in:\n\nHere is a calculator to solve percentage calculations such as what percent of 312.5 is 75. You can solve this type of calculation with your own values by entering them into the calculator's fields, and click 'Calculate' to get the result and explanation.\n\nWhat percent of\nis\n?\n%\n\n## Have time and want to learn the details?\n\nLet's solve the equation for Y by first rewriting it as: 100% / 312.5 = Y% / 75\n\nDrop the percentage marks to simplify your calculations: 100 / 312.5 = Y / 75\n\nMultiply both sides by 75 to isolate Y on the right side of the equation: 75 ( 100 / 312.5 ) = Y\n\nComputing the left side, we get: 24 = Y\n\nThis leaves us with our final answer: 24 percent of 312.5 is 75"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.91357106,"math_prob":0.99875003,"size":999,"snap":"2020-24-2020-29","text_gpt3_token_len":300,"char_repetition_ratio":0.15879397,"word_repetition_ratio":0.04040404,"special_character_ratio":0.35935935,"punctuation_ratio":0.119469024,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99942267,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-08T06:07:29Z\",\"WARC-Record-ID\":\"<urn:uuid:9bcabc30-ab3c-45fc-890d-42245793e419>\",\"Content-Length\":\"59249\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b3f8c5f9-e343-4846-a272-299fc4e1dd06>\",\"WARC-Concurrent-To\":\"<urn:uuid:9438f7ad-4afd-47e3-b87e-3a173fbcd4ac>\",\"WARC-IP-Address\":\"68.66.224.6\",\"WARC-Target-URI\":\"https://percentage-calculator.net/what-percent-of-x-is-y/what-percent-of-312.5-is-75.php\",\"WARC-Payload-Digest\":\"sha1:IQIVFGHZMCBO75PIP6KRQ423A2FN7T4W\",\"WARC-Block-Digest\":\"sha1:F6ZCMQM4EMWLCPD6E5Y5MXDKTMZR2OAU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655896374.33_warc_CC-MAIN-20200708031342-20200708061342-00144.warc.gz\"}"} |
https://stacks.math.columbia.edu/tag/0FAY | [
"Lemma 42.49.7. In Lemma 42.49.1 assume $Q|_ T$ is isomorphic to a finite locally free $\\mathcal{O}_ T$-module of rank $< p$. Assume we have another perfect object $Q' \\in D(\\mathcal{O}_ W)$ whose Chern classes are defined with $Q'|_ T$ isomorphic to a finite locally free $\\mathcal{O}_ T$-module of rank $< p'$ placed in cohomological degree $0$. With notation as in Remark 42.34.7 set\n\n$c^{(p)}(Q) = 1 + c_1(Q|_{X \\times \\{ 0\\} }) + \\ldots + c_{p - 1}(Q|_{X \\times \\{ 0\\} }) + c'_{p}(Q) + c'_{p + 1}(Q) + \\ldots$\n\nin $A^{(p)}(Z \\to X)$ with $c'_ i(Q)$ for $i \\geq p$ as in Lemma 42.49.1. Similarly for $c^{(p')}(Q')$ and $c^{(p + p')}(Q \\oplus Q')$. Then $c^{(p + p')}(Q \\oplus Q') = c^{(p)}(Q)c^{(p')}(Q')$ in $A^{(p + p')}(Z \\to X)$.\n\nProof. Recall that the image of $c'_ i(Q)$ in $A^ p(X)$ is equal to $c_ i(Q|_{X \\times \\{ 0\\} })$ for $i \\geq p$ and similarly for $Q'$ and $Q \\oplus Q'$, see Lemma 42.49.1. Hence the equality in degrees $< p + p'$ follows from the additivity of Lemma 42.46.7.\n\nLet's take $n \\geq p + p'$. As in the proof of Lemma 42.49.1 let $E \\subset W_\\infty$ denote the inverse image of $Z$. Observe that we have the equality\n\n$c^{(p + p')}(Q|_ E \\oplus Q'|_ E) = c^{(p)}(Q|_ E)c^{(p')}(Q'|_ E)$\n\nin $A^{(p + p')}(E \\to W_\\infty )$ by Lemma 42.47.8. Since by construction\n\n$c'_ p(Q \\oplus Q') = (E \\to Z)_* \\circ c'_ p(Q|_ E \\oplus Q'|_ E) \\circ C$\n\nwe conclude that suffices to show for all $i + j = n$ we have\n\n$(E \\to Z)_* \\circ c^{(p)}_ i(Q|_ E)c^{(p')}_ j(Q'|_ E) \\circ C = c^{(p)}_ i(Q)c^{(p')}_ j(Q')$\n\nin $A^ n(Z \\to X)$ where the multiplication is the one from Remark 42.34.7 on both sides. There are three cases, depending on whether $i \\geq p$, $j \\geq p'$, or both.\n\nAssume $i \\geq p$ and $j \\geq p'$. In this case the products are defined by inserting $(E \\to W_\\infty )_*$, resp. $(Z \\to X)_*$ in between the two factors and taking compositions as bivariant classes, see Remark 42.34.8. In other words, we have to show\n\n$(E \\to Z)_* \\circ c'_ i(Q|_ E) \\circ (E \\to W_\\infty )_* \\circ c'_ j(Q'|_ E) \\circ C = c'_ i(Q) \\circ (Z \\to X)_* \\circ c'_ j(Q')$\n\nBy Lemma 42.47.1 the left hand side is equal to\n\n$(E \\to Z)_* \\circ c'_ i(Q|_ E) \\circ c_ j(Q'|_{W_\\infty }) \\circ C$\n\nSince $c'_ i(Q) = (E \\to Z)_* \\circ c'_ i(Q|_ E) \\circ C$ the right hand side is equal to\n\n$(E \\to Z)_* \\circ c'_ i(Q|_ E) \\circ C \\circ (Z \\to X)_* \\circ c'_ j(Q')$\n\nwhich is immediately seen to be equal to the above by Lemma 42.49.4.\n\nAssume $i \\geq p$ and $j < p$. Unwinding the products in this case we have to show\n\n$(E \\to Z)_* \\circ c'_ i(Q|_ E) \\circ c_ j(Q'|_{W_\\infty }) \\circ C = c'_ i(Q) \\circ c_ j(Q'|_{X \\times \\{ 0\\} })$\n\nAgain using that $c'_ i(Q) = (E \\to Z)_* \\circ c'_ i(Q|_ E) \\circ C$ we see that it suffices to show $c_ j(Q'|_{W_\\infty }) \\circ C = C \\circ c_ j(Q'|_{X \\times \\{ 0\\} })$ which is part of Lemma 42.49.4.\n\nAssume $i < p$ and $j \\geq p'$. Unwinding the products in this case we have to show\n\n$(E \\to Z)_* \\circ c_ i(Q|_ E) \\circ c'_ j(Q'|_ E) \\circ C = c_ i(Q|_{Z \\times \\{ 0\\} }) \\circ c'_ j(Q')$\n\nHowever, since $c'_ j(Q|_ E)$ and $c'_ j(Q')$ are bivariant classes, they commute with capping with Chern classes (Lemma 42.38.9). Hence it suffices to prove\n\n$(E \\to Z)_* \\circ c'_ j(Q'|_ E) \\circ c_ i(Q|_{W_\\infty }) \\circ C = c'_ j(Q') \\circ c_ i(Q|_{X \\times \\{ 0\\} })$\n\nwhich we reduces us to the case discussed in the preceding paragraph. $\\square$\n\nIn your comment you can use Markdown and LaTeX style mathematics (enclose it like $\\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar)."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7220577,"math_prob":0.9999862,"size":2691,"snap":"2022-27-2022-33","text_gpt3_token_len":1081,"char_repetition_ratio":0.19910681,"word_repetition_ratio":0.22932331,"special_character_ratio":0.45001858,"punctuation_ratio":0.06927711,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.000009,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-25T17:07:42Z\",\"WARC-Record-ID\":\"<urn:uuid:e39c34fc-3cfc-465a-9bb7-a74055d12d4e>\",\"Content-Length\":\"18403\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ef076665-2963-42a6-af70-0d1f9223fbe3>\",\"WARC-Concurrent-To\":\"<urn:uuid:c1ee44be-757c-42d7-95f2-082cc8347371>\",\"WARC-IP-Address\":\"128.59.222.85\",\"WARC-Target-URI\":\"https://stacks.math.columbia.edu/tag/0FAY\",\"WARC-Payload-Digest\":\"sha1:ISFTH6VJ43IPI3DBQAKY4RDGPTSWUNUW\",\"WARC-Block-Digest\":\"sha1:YANPZ4EN7NGDD4K5EUT33OT22C6ZOUWL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103036077.8_warc_CC-MAIN-20220625160220-20220625190220-00597.warc.gz\"}"} |
https://planetcalc.com/1262/ | [
"Calculator solving some mathematical competition task\n\nYesterday I was asked to help with some math tasks. They could be solved head on with direct calculation or with enumerative technique, but that will take too long for a person. As I was too lazy to find a smart way, I gave this task to the computer by creating some calculators. It does that definitely faster.\n\nSo, the first task is going like this: Consider the alternating sum 1 3 – 5 7 + 9 11 – 13 15 + … – 2005 2007 + 2009 2011. What is their sum?\nThe calculator will find the sum simply following the procedure below. :)",
null,
"#### Odd numbers calculation\n\nSum\n\nBut in fact, this problem is solved quite easily, and the answer matches the one counted by the computer. In short, it all can be replaced with\n\nAll 1 are reduced, except the last one\n\ncalculating a couple of terms, we can see that the difference between two consecutive terms equal to -32n, where n = 1,3,5 etc. until 501\n$32$\n(-1) + 32(-3) + ... + 32(-501) + (2010^2-1)\nif we put 32 out of the brackets, then inside there will be an arithmetic progression, the sum of which can be calculated quickly using Arithmetic progression :). Well, and then multiply by 32 and subtract from the remaining term of 2009\n2011\n\nThe second task: We have a grandfather who is older than 80 (but younger than 150)/ Today he can tell his grandchildren who have different ages: «The product of our three ages is the sum of squares of our ages». Determine the age of the grandfather. If you simplify the wording - it is necessary to find three numbers whose product is equal to the sum of their squares. Calculator that solves this problem with enumerative technique is below :)",
null,
"#### Products and sums\n\n1st number\n\n2nd number\n\n3rd number\n\nTo be honest, I couldn't think of the way to solve this without a calculator. Probably you just need to reduce the number of options by, for example using the fact that the product of two grandchildren ages should be slightly more than the age of the grandfather. Perhaps there is some more elegant way to solve this - share if you know for sure.\n\nThe third task: 2011 + BON + JEU = MATH\nIn this rebus, each letter stands for a digit from 0 to 9. Two different letters are always replace two different digits and no number will start from 0. Determine the maximum value of MATH. Calculator that solves this problem with enumerative technique is below :) By the way it takes a lot of time (for a computer, of course)",
null,
"#### Nonrecurrent numbers\n\nFirst term\n\nSecond term\n\nSum\n\nIn principle, this problem can also be solved mentally. It is clear that B, J, M and A are located immediately. With the rest, you just need to fiddle a bit.\nThat's cheating of some sort\n\nURL copied to clipboard"
]
| [
null,
"https://planetcalc.com/img/32x32i.png",
null,
"https://planetcalc.com/img/32x32i.png",
null,
"https://planetcalc.com/img/32x32i.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9498556,"math_prob":0.9883514,"size":2573,"snap":"2021-31-2021-39","text_gpt3_token_len":604,"char_repetition_ratio":0.10198521,"word_repetition_ratio":0.033898305,"special_character_ratio":0.25223476,"punctuation_ratio":0.106589146,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99353045,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T17:42:38Z\",\"WARC-Record-ID\":\"<urn:uuid:de4e5915-5a12-4393-928a-47b904973995>\",\"Content-Length\":\"58386\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47fe3d66-13af-403d-9d92-7d015fe418c8>\",\"WARC-Concurrent-To\":\"<urn:uuid:dc0f12b7-26ea-4fdb-9f70-54e9d2676d67>\",\"WARC-IP-Address\":\"104.217.251.114\",\"WARC-Target-URI\":\"https://planetcalc.com/1262/\",\"WARC-Payload-Digest\":\"sha1:6NFXBJ5M4EGISFIPVIUEDAKJCW6L7A7Z\",\"WARC-Block-Digest\":\"sha1:X75VBFSJ2VT37Q4LJOQ44Z67PH6KJH4Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056548.77_warc_CC-MAIN-20210918154248-20210918184248-00707.warc.gz\"}"} |
https://cstheory.stackexchange.com/questions/37075/nonstandard-dual-parametrization-of-graph-problems/37082 | [
"# Nonstandard dual parametrization of graph problems\n\nOne fundamental result in parameterized complexity of graph problems is that VERTEX COVER parameterized by the solution size $k$ is fixed-parameter-tractable (FPT). On the other hand, when parameterized by the \"dual parameter\" $|V(G)|-k$, it becomes equivalent to INDEPENDENT SET parameterized by solution size (because any vertex cover is the complement of an independent set), and thus it is W-hard.\n\nAlthough this seems less natural, I am interested in the parameterized complexity of VERTEX COVER for the parameter $|E(G)|-k$. This is a larger parameter than $|V(G)|-k$. Is VERTEX COVER FPT for this parameter?\n\nNote: I am also interested in similar parameterizations for other graph problems (e.g. DOMINATING SET). The only place where I have seen both kinds of dual parameters studied is for the hypergraph problem TEST COVER in the 2012 paper \"Parameterized Study of the Test Cover Problem\" by Crowston, Gutin, Jones, Saurabh and Yeo. (also on arXiv)\n\nEdit (04/12/2016): This parameterization is also studied for the other hypergraph problem HITTING SET in the 2011 paper Kernels for below-upper-bound parameterizations of the hitting set and directed dominating set problems by Gutin, Mones and Yeo (arXiv link).\n\nLet $n:=|V(G)|$ and $m:= |E(G)|$. The dual parameter $m-k$ is always at least as large as $m-n$ which in turn is at least as large as the size of a feedback edge set, a set of edges whose removal makes $G$ acyclic.\n\nThe size of a smallest feedback edge set, let's call it feedback edge number $\\phi$, is also at least as large as the feedback vertex number and the treewidth of the graph. This directly implies that Vertex Cover is fixed-parameter tractable for $m-k$. Moreover, it has a polynomial kernel since Vertex Cover parameterized by feedback vertex number has one (this was shown by Jansen and Bodlaender in Vertex Cover Kernelization Revisited - Upper and Lower Bounds for a Refined Parameter. Theory Comput. Syst. 53(2): 263-299 (2013), http://dx.doi.org/10.1007/s00224-012-9393-4).\n\nA simple direct linear kernel for Vertex Cover parameterized by feedback edge number $\\phi$ should be obtainable as follows: Remove all degree-0 vertices, add the neighbor of any degree-1 vertex to the vertex cover, and reduce paths of degree-2 vertices that contain at least 2 vertices (decreasing the bound on $k$ accordingly). After exhaustive application of these reduction rules, in the resulting graph $n=O(\\phi)$. This directly implies a kernel for the larger parameter $m-k$.\n\nTo answer your question for references: I would look for feedback edge number which is smaller than the dual parameter $m-k$, has been considered in the literature, and often gives fixed-parameter tractability results also for Dominating Set (as the parameter is quite large). Here are three further examples:\n\nJohannes Uhlmann, Mathias Weller: Two-Layer Planarization parameterized by feedback edge set. Theor. Comput. Sci. 494: 99-111 (2013), http://dx.doi.org/10.1016/j.tcs.2013.01.029\n\nAndré Nichterlein, Rolf Niedermeier, Johannes Uhlmann, Mathias Weller: On tractable cases of Target Set Selection. Social Netw. Analys. Mining 3(2): 233-256 (2013), http://dx.doi.org/10.1007/s13278-012-0067-7\n\nSepp Hartung, Christian Komusiewicz, André Nichterlein: Parameterized Algorithmics and Computational Experiments for Finding 2-Clubs. J. Graph Algorithms Appl. 19(1): 155-190 (2015), http://dx.doi.org/10.7155/jgaa.00352\n\n• If there are any, then \"Remove all degree-0 vertices\" decreases n without changing m, so increases m-n. Accordingly, the resulting graph's size being linear in the resulting graph's parameter does not mean the resulting graph's size has any bound in terms of the input graph's parameter. – user6973 Dec 3 '16 at 0:28\n• Yes, thanks for pointing this out. I changed this part to a kernelization for the feedback edge number which is smaller. – C Komus Dec 3 '16 at 7:52\n• Subsidiary question: the 2 papers I pointed to were for hypergraph problems, but there $m-k$ is not necessary larger than $n-k$ since there can less hyperedges than vertices. Is there soem generic trick that works there? – Florent Foucaud Dec 5 '16 at 10:53\n\nI think this problem is FPT. Suppose that the graph contains a path on $2k+1$ vertices. Then, I claim the answer is YES: we select the second, fourth, sixth, etc. vertices of this path in a solution and remove them from the graph. We now have a graph $G'$ with $|E(G')|\\le |E(G)|-2k$. It is easy to find a vertex cover of $G'$ with size at most $|E(G')|$. Together with the removed vertices this gives a vertex cover of size at most $|E(G)|-k$ for $G$.\n\nIf the graph does not contain a path on $2k+1$ vertices, then a DFS tree of the graph has height at most $2k+1$, and hence can be used to construct a tree decomposition of width at most $2k$. We can then solve Vertex Cover optimally with the standard algorithm for treewidth.\n\n• Thanks, neat! If you know a reference where such parameter is studied (for other graph problems), please let me know. – Florent Foucaud Dec 2 '16 at 12:14"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8639296,"math_prob":0.9684897,"size":1220,"snap":"2019-51-2020-05","text_gpt3_token_len":291,"char_repetition_ratio":0.17351973,"word_repetition_ratio":0.0,"special_character_ratio":0.22131148,"punctuation_ratio":0.09049774,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99689704,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-20T15:46:09Z\",\"WARC-Record-ID\":\"<urn:uuid:17128be7-d109-4a15-8f37-18440648e0f4>\",\"Content-Length\":\"147250\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b726b2e3-31d3-4b1d-a86a-d814bbf5e8a6>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b763305-1ac6-4032-afea-507bd8bbee85>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://cstheory.stackexchange.com/questions/37075/nonstandard-dual-parametrization-of-graph-problems/37082\",\"WARC-Payload-Digest\":\"sha1:STDXXNDT6BVB7C7E6JJRFIL4WNIH3LEW\",\"WARC-Block-Digest\":\"sha1:OUXUUISZZBPNZRTOTTC7LR7NDISLSD4L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250598800.30_warc_CC-MAIN-20200120135447-20200120164447-00484.warc.gz\"}"} |
https://docs.oracle.com/cd/A97630_01/olap.920/a95295/olap_ta4.htm | [
"OLAP_TABLE Function, 4 of 6\n\n## Creating Object Type Definitions Used by OLAP_TABLE\n\nCreating type definitions that are used by the `OLAP_TABLE` function involves:\n\n1. Designing the objects that will represent the analytic workspace structures.\n2. Writing the object type definitions and the table definitions to define the analytic workspace data as a table of objects.\n\n### Designing the Objects\n\nEach object type represents a row in a table. When mapping analytic workspace structures to object types, typically, you do not define one object type for each analytic workspace structure. Instead, you map many analytic workspace structures to just a few objects:\n\n• Objects that represent measure tables. All multidimensional analytic workspace structures that share exactly the same dimensions can be mapped into a single object.\n• Objects that represent dimension tables. All one-dimensional analytic workspace structures that have exactly the same dimension can be mapped into the object type that you define for that dimension.\n\nFor a more complete discussion of the data warehouse designs that you can mimic in your design, see \"Data Structures in Relational and Multidimensional Data Stores\".\n\nFor each object, you need to identify the attributes that correspond to the columns of the table. To do this, you first need to determine if you want to support the use of `WHERE` clauses when selecting the data. Only those attributes (table columns) that appear in the `limit-map` parameter of the `OLAP_TABLE` function can be referenced in a `WHERE` clause.\n\nTypically, you will want to support the use of `WHERE` clauses. In this case, you need to determine the format of the `limit-map` parameter in order to determine the columns of each table. The columns of each table must correspond exactly to the columns specified in the `limit-ma`p parameter. For the syntax of the `limit-ma`p parameter, see \"Syntax: OLAP_TABLE Function\".\n\n### Creating Type Definitions for Multidimensional Data\n\nTo create the type definitions that define the analytic workspace data as a table of objects take the following steps:\n\n1. Create a type definition for an object that represents a row in the table and whose attributes represents the columns of the table. Simplified syntax for this definition is shown below.\n```CREATE TYPE object-name AS OBJECT (\ncolumn-first data-type,\ncolumn-next data-type,\ncolumn-last data-type);\n\n```\n2. Create a type definition for a table of these objects. Simplified syntax for this definition is shown below.\n```CREATE TYPE table-name AS TABLE OF object-name;\n```"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.76195836,"math_prob":0.8398505,"size":1667,"snap":"2021-43-2021-49","text_gpt3_token_len":364,"char_repetition_ratio":0.114852674,"word_repetition_ratio":0.02264151,"special_character_ratio":0.21355729,"punctuation_ratio":0.103559874,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9674068,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T10:46:52Z\",\"WARC-Record-ID\":\"<urn:uuid:5028e5f2-a127-4bd6-a51f-bc2625bdea8a>\",\"Content-Length\":\"9939\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:13de02a6-d804-4c76-a25f-2c831b19c2be>\",\"WARC-Concurrent-To\":\"<urn:uuid:6f14c6a7-72bd-419b-96d3-147a8b83e9aa>\",\"WARC-IP-Address\":\"184.25.199.158\",\"WARC-Target-URI\":\"https://docs.oracle.com/cd/A97630_01/olap.920/a95295/olap_ta4.htm\",\"WARC-Payload-Digest\":\"sha1:RUJW4COZRUGLLXWI2PWV7CT5D4YYYCEV\",\"WARC-Block-Digest\":\"sha1:VENAAB5MPHVFKOZFH424EGKPUID6CNTP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585916.29_warc_CC-MAIN-20211024081003-20211024111003-00113.warc.gz\"}"} |
http://wiki.ros.org/wire/Tutorials/Tuning%20the%20world%20model%3A%20object%20propagation%20models | [
"Note: This tutorial assumes that you have completed the previous tutorials: ROS tutorials.",
null,
"Please ask about problems and questions regarding this tutorial on answers.ros.org. Don't forget to include in your question the link to this page, the versions of your OS & ROS, and also add appropriate tags.\n\n# Tuning the world model: object propagation models\n\nDescription: The models used for object propagation determine the probability of associations between predicted object positions and measured object positions. This tutorial shows how changing the model influences the world state estimate.\n\n## Goal\n\nThe world state estimate generated by wire depends on the probabilistic models underlying the world model. This tutorial shows how the process noise associated with the Kalman filter motion model influences the world state estimate. The three videos below show the output for three different settings. First, the process noise if rather low:\n\nIf the noise if increased a bit, the 'trust' in the motion model is lower and the output changes to:\n\nAnd finally, with a high process noise, the motion model is considered not reliable and the occlusion is sufficient to confuse the world model:\n\n## Approach\n\nFor each of the measurements entering the world modeling algorithm, all possible data association explanations are considered:\n\n• The measurement originates from an object not ye present in the world model (new object)\n• The measurement originates from any of the objects already present in the world model\n• The measurement represents a false positive (clutter)\n\nAs a result, a hypothesis tree in which each leaf represents a hypothesis and each hypothesis represents a possible world state is obtained. Each hypothesis gets a probability of being correct and the most probable hypothesis is given as resulting world state estimate.\n\nFor the calculation of the probabilities, a Bayesian update law is used. The probabilistic models representing the probability of measurements being any of the above options are crucial for the quality of the output of the algorithm. The models are defined in the models/world_object_models.xml file in the wire_core package and are object class dependent. For this tutorial, the focus lies on the motion model for tracking object positions:\n\n``` 1 <knowledge>\n2\n3 <prior_new value=\"0.14\" />\n4 <prior_existing value=\"0.14\" />\n5 <prior_clutter value=\"0.72\" />\n6\n7 <object_class name=\"object\">\n8\n9 <behavior_model attribute=\"position\" model=\"wire_state_estimators/PositionEstimator\">\n10 <pnew type=\"uniform\" dimensions=\"3\" density=\"0.0001\" />\n11 <pclutter type=\"uniform\" dimensions=\"3\" density=\"0.0001\" />\n12\n13 <param name=\"max_acceleration\" value=\"8\" />\n14 <param name=\"kalman_timeout\" value=\"1\" />\n15 <param name=\"fixed_pdf_cov\" value=\"0.008\" />\n16 </behavior_model>\n17\n18 <behavior_model attribute=\"color\" model=\"wire_state_estimators/DiscreteEstimator\">\n19 <pnew type=\"discrete\" domain_size=\"100\" />\n20 <pclutter type=\"discrete\" domain_size=\"100\" />\n21 </behavior_model>\n22\n23 <behavior_model attribute=\"class_label\" model=\"wire_state_estimators/DiscreteEstimator\">\n24 <pnew type=\"discrete\" domain_size=\"100\" />\n25 <pclutter type=\"discrete\" domain_size=\"100\" />\n26 </behavior_model>\n27\n28 <behavior_model attribute=\"shape\" model=\"wire_state_estimators/DiscreteEstimator\">\n29 <pnew type=\"discrete\" domain_size=\"10\" />\n30 <pclutter type=\"discrete\" domain_size=\"10\" />\n31 </behavior_model>\n32\n33 </object_class>\n34\n35 </knowledge>\n```\n\n### The Models XML File Explained\n\nNow, let's break the launch xml down.\n\n``` 9 <behavior_model attribute=\"position\" model=\"wire_state_estimators/PositionEstimator\">\n10 <pnew type=\"uniform\" dimensions=\"3\" density=\"0.0001\" />\n11 <pclutter type=\"uniform\" dimensions=\"3\" density=\"0.0001\" />\n12\n13 <param name=\"max_acceleration\" value=\"8\" />\n14 <param name=\"kalman_timeout\" value=\"1\" />\n15 <param name=\"fixed_pdf_cov\" value=\"0.008\" />\n16 </behavior_model>\n```\n\nThe position will be estimated using a position estimator. This estimator, defined in the package [wire_state_estimators] is a multiple model estimator that combines (i) a Kalman filter with a constant velocity motion model with (ii) a fixed state with fixed uncertainty.\n\nIf updates follow each other relatively quickly, the Kalman filter is used to estimate the position. However, if no updates are received for kalman_timeout seconds, the Kalman filter makes place for a fixed state and uncertainty. This fixed state is defined by a Gaussian, the mean of which is based on the last estimated position, and the covariance is chosen to be fixed_pdf_cov.\n\nIn this tutorial, the fixed_pdf_cov parameter will be varied. By increasing this value, it is indicated that the position uncertainty of the object is larger, i.e., that the object may have moved a more significant amount. As a result, measurements far away from the the estimated position are more easily associated with the object. If the position uncertainty fixed_pdf_cov is chosen to be low, then object detections far away from the estimated state are less likely to be associated.\n\n## Data\n\nIn order to be able to reproduce the result shown in the video above, make sure that you have downloaded and compiled the wire packages:\n\n```\\$ git clone https://github.com/tue-robotics/wire.git\n\\$ catkin_make```\n\n`\\$ rosbag decompress demo04.bag`\n\nThe bag file contains tfs, object detections and both rgb and depth images. The images are only included for ease of interpretation and inspection. These are not used by wire.\n\n## Reproducing the result\n\nStart a ROS core:\n\n`\\$ roscore`\n\nand set the use_sim_time parameter to true:\n\n`\\$ rosparam set use_sim_time true`\n\nThen make sure the world_object_models.xml file in the models folder of the world_model package is configured correctly. For reproducing the first result, set the following fixed_pdf_cov in the behavior model for the position attribute for object_class object in line 16 above:\n\n``` 1 <param name=\"fixed_pdf_cov\" value=\"0.001\" />\n```\n\nFor reproducing the second result (switching hypotheses), increase the fixed_pdf_cov parameter:\n\n``` 1 <param name=\"fixed_pdf_cov\" value=\"0.008\" />\n```\n\nFor the third result (incorrect association), increase the fixed_pdf_cov even further:\n\n``` 1 <param name=\"fixed_pdf_cov\" value=\"0.01\" />\n```\n\nTo test the different parameter settings, start WIRE as follows (you will have to restart the wire_core every time you change the parameter since the parameters are loaded during start-up):\n\nLaunch the wire_core:\n\n`\\$ roslaunch wire_core start.launch`\n\nIn a second terminal, launch the visualization:\n\n`\\$ roslaunch wire_tutorials rviz_wire_kinetic.launch`\n\nFinally, play back the data using rosbag_video:\n\n`\\$ rosbag play demo04.bag --clock`\n\nand inspect the results in RViz.\n\nWiki: wire/Tutorials/Tuning the world model: object propagation models (last edited 2017-05-30 11:39:29 by JosElfring)"
]
| [
null,
"http://wiki.ros.org/moin_static197/rostheme/img/idea.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7314251,"math_prob":0.8478216,"size":6745,"snap":"2021-43-2021-49","text_gpt3_token_len":1501,"char_repetition_ratio":0.1320279,"word_repetition_ratio":0.06859592,"special_character_ratio":0.23736101,"punctuation_ratio":0.09618875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99119633,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T03:16:10Z\",\"WARC-Record-ID\":\"<urn:uuid:aa3a7f1f-68e2-43ea-93af-7da28b2139a3>\",\"Content-Length\":\"55596\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:66dcfdd0-e226-46ee-8e69-7b17bb29458c>\",\"WARC-Concurrent-To\":\"<urn:uuid:8ad49e68-9953-4cdb-b1b3-791052616a43>\",\"WARC-IP-Address\":\"140.211.9.98\",\"WARC-Target-URI\":\"http://wiki.ros.org/wire/Tutorials/Tuning%20the%20world%20model%3A%20object%20propagation%20models\",\"WARC-Payload-Digest\":\"sha1:5CLR3LSJE4MHDSS7XYN54TT5QCFDYUNM\",\"WARC-Block-Digest\":\"sha1:MLBN3AR3QCNKJQUTLQTDG6JOMUESJUCY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588053.38_warc_CC-MAIN-20211027022823-20211027052823-00441.warc.gz\"}"} |
https://advancesindifferenceequations.springeropen.com/articles/10.1186/s13662-019-1956-0 | [
"$$L^{p }$$ ($$p>2$$)-strong convergence of multiscale integration scheme for jump-diffusion systems\n\n• 288 Accesses\n\nAbstract\n\nIn this paper we shall prove the $$L^{p}$$ ($$p>2$$)-strong convergence of multiscale integration scheme for stochastic jump-diffusion systems with two-time-scale, which gives a numerical method for effective dynamical systems.\n\nIntroduction\n\nThis paper focuses on the following two-time-scale jump-diffusion SDEs:\n\n\\begin{aligned} \\textstyle\\begin{cases} dx_{t}^{\\epsilon }=a(x_{t}^{\\epsilon },y_{t}^{\\epsilon })\\,dt+b(x_{t} ^{\\epsilon })\\,dB_{t} +c(x_{t}^{\\epsilon })\\,dP_{t}, & x_{0}^{\\epsilon }=x _{0}, \\\\ dy_{t}^{\\epsilon }=\\frac{1}{\\epsilon }f(x_{t}^{\\epsilon },y_{t}^{ \\epsilon }) \\,dt+\\frac{1}{\\sqrt{\\epsilon }} g(x_{t}^{\\epsilon },y_{t} ^{\\epsilon })\\,dW_{t}+h(x_{t}^{\\epsilon }, y_{t}^{\\epsilon }) \\,dN_{t} ^{\\epsilon },& y_{0}^{\\epsilon }=y_{0}, \\end{cases}\\displaystyle \\end{aligned}\n(1)\n\nwhere $$x_{t}^{\\epsilon }\\in \\mathbb{R}^{n}$$ and $$y_{t}^{\\epsilon } \\in \\mathbb{R}^{m}$$ are jump-diffusion processes. The functions $$a(x,y)\\in \\mathbb{R}^{n}$$ and $$f(x,y)\\in \\mathbb{R}^{m}$$ are the drift coefficients, the functions $$b(x)\\in \\mathbb{R}^{n\\times d_{1}}$$ and $$g(x,y)\\in \\mathbb{R}^{m\\times d_{2}}$$ are the diffusion coefficients, and the functions $$c(x)\\in \\mathbb{R}^{n}$$ and $$h(x,y)\\in \\mathbb{R} ^{m}$$ are the jump coefficients; $$B_{t}$$ and $$W_{t}$$ are $$d_{1}$$, $$d_{2}$$-dimensional independent Wiener processes, $$P_{t}$$ is a scalar simple Poisson process with intensity $$\\lambda _{1}$$, and $$N_{t}^{\\epsilon }$$ is a scalar simple Poisson process with intensity $$\\frac{\\lambda _{2}}{\\epsilon }$$. ϵ is a small parameter, which represents the ratio of time scale between the processes $$x_{t}^{ \\epsilon }$$ and $$y_{t}^{\\epsilon }$$. With this time scale, the vector $$x_{t}^{\\epsilon }$$ is referred to as the “slow component” and $$y_{t}^{\\epsilon }$$ as the “fast component”. Under suitable assumptions the authors [1, 2] proved that when $$\\epsilon \\rightarrow 0$$, the slow component $$x_{t}^{\\epsilon }$$ mean square converges to the solution of SDEs in the following form:\n\n\\begin{aligned} \\textstyle\\begin{cases} d\\bar{x}_{t}=\\bar{a}(\\bar{x}_{t})\\,dt+b(\\bar{x}_{t})\\,dB_{t} +c(\\bar{x} _{t})\\,dP_{t}, \\\\ \\bar{x}_{0}=x_{0}, \\end{cases}\\displaystyle \\end{aligned}\n(2)\n\nwith\n\n\\begin{aligned} \\bar{a}(x)= \\int _{\\mathbb{R}^{m}}a(x,y)\\mu ^{x}(dy), \\quad x\\in \\mathbb{R}^{n}, \\end{aligned}\n\nand $$\\mu ^{x}$$ is the invariant, ergodic measure generated by the following equation with frozen x:\n\n\\begin{aligned} dy_{t}=f(x,y_{t})\\,dt+g(x,y_{t})\\,dW_{t}+h(x, y_{t})\\,dN_{t}, \\quad y_{0}=y_{0}. \\end{aligned}\n\nMultiscale jump-diffusion stochastic differential equations arise in many applications and have already been studied widely. What is usually of interest for this kind of system (1) is the time evolution of the slow variable $$x_{t}^{\\epsilon }$$. Thus a simplified equation, which is independent of the fast variable and possesses the essential features of the system, is highly desirable. On the one hand, while averaging principle [1,2,3,4,5,6] plays an important role in the research of slow component by getting a reduced equation (2), the difficulty of obtaining the effective equation (2) lies in the fact that the coefficient $$\\bar{a}(\\cdot )$$ is given via expectation with respect to measure $$\\mu ^{x}(dy)$$, which is usually difficult or impossible to obtain analytically, especially when the dimension m is large. On the other hand, even if we get the reduced equation, the equation cannot be solved explicitly. Therefore, the construction of the efficient computational methods is of great importance. Furthermore, the idea of multiscale integration schemes (cf. ) overcomes these difficulties exactly, which solves $$\\bar{x}_{t}$$ with $$\\bar{a}(\\cdot )$$ being estimated on the fly using an empirical average of the original slow coefficients $$a(\\cdot )$$ with respect to numerical solutions of the fast processes. This is one of our motivations.\n\nFor another significant motivation, a substantial body of work has been done concerning multiscale integration scheme for fast-slow SDEs. Most of the existing research theories discuss the convergence in $$L^{p}$$ ($$0< p\\leqslant 2$$), even in a weaker sense [4, 5, 8,9,10]. Nevertheless, convergence in a stronger sense is what we want. In 2007, the $$L^{2}$$ averaging principle was proposed for a system, in which slow and fast dynamics were driven by Brownian noises and Poisson noises in . Subsequently, the authors gave a multiscale integration scheme for the result in . In 2015, Xu and Miao extended the result of to the $$L^{p}$$ ($$p>2$$) case under assumptions (H1)–(H5) in . A natural question is as follows: Can we also establish the $$L^{p}$$ ($$p>2$$) averaging principle by the multiscale integration scheme? It is well known that $$L^{1}$$ convergence and $$L^{2}$$ convergence cannot conclude $$L^{p}$$ ($$p>2$$) convergence. However, $$L^{1}$$ convergence and $$L^{2}$$ convergence can be deduced by $$L^{p}$$ ($$p>2$$) convergence. Once the $$L^{p}$$ ($$p>2$$) convergence has been established, then a much bigger degree of freedom for parameter q in research of $$L^{q}$$ ($$0< q< p$$) convergence would be obtained.\n\nBased on the above discussion, the aim of this paper is to prove the $$L^{p}$$ ($$p>2$$) convergence of the multiscale integration scheme under the following assumptions:\n\n(H1):\n\nThe measurable functions a, b, c, f, g, and h satisfy the global Lipschitz conditions, i.e., there is a positive constant L such that\n\n\\begin{aligned} & \\bigl\\vert a(u_{1},v_{1})-a(u_{2},v_{2}) \\bigr\\vert ^{2}+ \\bigl\\vert b(u_{1})-b(u_{2}) \\bigr\\vert ^{2} + \\bigl\\vert c(u _{1})-c(u_{2}) \\bigr\\vert ^{2} \\\\ &\\qquad {}+ \\bigl\\vert f(u_{1},v_{1})-f(u_{2},v_{2}) \\bigr\\vert ^{2}+ \\bigl\\vert g(u_{1},v_{1})-g(u_{2}, v _{2}) \\bigr\\vert ^{2}+ \\bigl\\vert h(u_{1},v_{1})-h(u_{2},v_{2}) \\bigr\\vert ^{2} \\\\ &\\quad \\leqslant L\\bigl( \\vert u_{1}-u_{2} \\vert ^{2}+ \\vert v_{1}-v_{2} \\vert ^{2}\\bigr) \\end{aligned}\n\nfor all $$u_{i}\\in \\mathbb{R}^{n}$$, $$v_{i}\\in \\mathbb{R}^{m}$$, $$i=1,2$$. Here and below we use $$|\\cdot |$$ to denote both the Euclidean vector norm and the Frobenius matrix norm.\n\nRemark 1.1\n\nWith the help of (H1), it immediately follows that there is a positive constant K such that\n\n\\begin{aligned} \\bigl\\vert &a(u,v) \\bigr\\vert ^{2}+ \\bigl\\vert b(u) \\bigr\\vert ^{2}+ \\bigl\\vert c(u) \\bigr\\vert ^{2} + \\bigl\\vert f(u,v) \\bigr\\vert ^{2}+ \\bigl\\vert g(u,v) \\bigr\\vert ^{2}+ \\bigl\\vert h(u,v) \\bigr\\vert ^{2} \\\\ &\\quad \\leqslant K\\bigl(1+ \\vert u \\vert ^{2}+ \\vert v \\vert ^{2}\\bigr) \\end{aligned}\n\nfor $$(u,v)\\in \\mathbb{R}^{n}\\times \\mathbb{R}^{m}$$. Thus a, b, c, f, g, and h satisfy the sublinear growth condition.\n\n(H2):\n\na, g, and h are globally bounded.\n\nRemark 1.2\n\nBy (H1) and (H2), it is easy to derive that ā in (2) is bounded and satisfies the Lipschitz condition .\n\n(H3):\n\nThere exist constants $$\\beta _{1}>0$$ and $$\\beta _{j} \\in \\mathbb{R}$$, $$j=2, 3, 4$$, which are all independent of $$(u_{1},v _{1},v_{2})$$, such that\n\n\\begin{aligned}& \\begin{gathered} v_{1}\\cdot f(u_{1},v_{1})\\leqslant -\\beta _{1} \\vert v_{1} \\vert ^{2}+\\beta _{2}, \\\\ \\bigl(f(u_{1},v_{1})-f(u_{1},v_{2}) \\bigr) (v_{1}-v_{2})\\leqslant \\beta _{3} \\vert v _{1}-v_{2} \\vert ^{2}, \\end{gathered} \\end{aligned}\n(3)\n\nand\n\n$$\\bigl(h(u_{1},v_{1})-h(u_{1},v_{2}) \\bigr) (v_{1}-v_{2})\\leqslant \\beta _{4} \\vert v _{1}-v_{2} \\vert ^{2}$$\n(4)\n\nfor all $$u_{1}\\in \\mathbb{R}^{n}$$ and $$v_{1},v_{2}\\in \\mathbb{R}^{m}$$.\n\n(H4):\n\n$$\\eta :=-(2\\beta _{3}+2\\lambda _{2}\\beta _{4}+C_{g}+ \\lambda _{2}C_{h})>0$$, here $$\\beta _{3}$$ and $$\\beta _{4}$$ are taken from (3) and (4), $$\\lambda _{2}$$ is from $$N_{t}^{\\epsilon }$$ with intensity $$\\lambda _{2}/\\epsilon$$, $$C_{g}$$, and $$C_{h}$$ are the Lipschitz coefficients for g and h, respectively, i.e.,\n\n$$\\bigl\\vert g(u_{1},v_{1})-g(u_{2},v_{2}) \\bigr\\vert ^{2}\\leqslant C_{g}\\bigl( \\vert u_{1}-u_{2} \\vert ^{2}+ \\vert v _{1}-v_{2} \\vert ^{2}\\bigr)$$\n\nand\n\n$$\\bigl\\vert h(u_{1},v_{1})-h(u_{2},v_{2}) \\bigr\\vert ^{2}\\leqslant C_{h}\\bigl( \\vert u_{1}-u_{2} \\vert ^{2}+ \\vert v_{1}-v_{2} \\vert ^{2}\\bigr)$$\n\nfor all $$u_{1},u_{2}\\in \\mathbb{R}^{n}$$, $$v_{1},v_{2}\\in \\mathbb{R}^{m}$$.\n\n(H5):\n\nThere exists a constant $$\\gamma >0$$, which is independent of $$(u,v)$$, such that\n\n$$v^{\\mathrm{T}}g(u,v)g^{\\mathrm{T}}(u,v)v\\geqslant \\gamma \\vert v \\vert ^{2}$$\n\nfor all $$(u,v)\\in \\mathbb{R}^{n}\\times \\mathbb{R}^{m}$$.\n\nAn example that satisfies (H1)–(H5) is $$a(u,v)= \\frac{1}{1+(u+v)^{2}}$$, $$b(u)=e^{-u^{2}}$$, $$c(u)=\\sin u$$, $$f(u,v)=-1.5( \\lambda _{2}+1)\\nu$$, $$g(u,\\nu )=\\frac{3+\\sin u+\\sin \\nu }{\\sqrt{2}}$$, and $$h(u,\\nu )=\\frac{\\sin u+\\sin \\nu }{\\sqrt{2}}$$.\n\nIt is worth pointing out that the $$L^{p}$$ ($$p>2$$) averaging principle under assumptions (H1)–(H5) had been established in .\n\nNow, we will introduce the multiscale integration scheme. The scheme is made up of a macro solver to evolve (2) and a micro solver to simulate the fast dynamics in (1):\n\n1. Macro solver. Let Δt be a fixed step, and let $$X_{n}$$ be a numerical approximation to the coarse variable at time $$t_{n}=n\\Delta t$$. The simplest choice is the Euler–Maruyama scheme\n\n\\begin{aligned} X_{n+1}=X_{n}+A(X_{n})\\Delta t+b(X_{n})\\Delta B_{n}+c(X_{n})\\Delta P _{n}, \\quad X_{0}=x_{0}, \\end{aligned}\n(5)\n\nwhere $$A(X_{n})$$ is estimated by an empirical average\n\n$$A(X_{n})=\\frac{1}{M}\\sum _{m=1}^{M}a\\bigl(X_{n},Y_{m}^{n} \\bigr).$$\n(6)\n\n2. Micro solver. To get $$A(X_{n})$$ used in the macro solver, we adopt the Euler–Maruyama scheme to generate $$Y_{m}^{n}$$:\n\n$$Y_{m+1}^{n}=Y_{m}^{n}+\\frac{1}{\\epsilon }f \\bigl(X_{n},Y_{m}^{n}\\bigr)\\delta t+ \\frac{1}{\\sqrt{ \\epsilon }}g\\bigl(X_{n},Y_{m}^{n}\\bigr)\\Delta W_{m}^{n}+h\\bigl(X_{n},Y_{m}^{n} \\bigr) \\Delta N_{m}^{n},$$\n(7)\n\nwith fixed $$X_{n}$$, and we denote the solution by $$Y_{m}^{n}$$, $$m=0, 1, \\ldots,M$$, where $$\\Delta W_{m}^{n}$$ are Brownian increments over a time interval δt, and $$\\Delta N_{m}^{n}$$ are Poisson increments with intensity $$\\frac{\\lambda _{2}}{\\epsilon }$$. Due to ergodicity of the fast dynamics, we can select, among other selections, $$Y_{0}^{n}=y_{0}$$.\n\nNote that the effective dynamics do not rely on ϵ. Meanwhile, since the discrete solution $$Y_{m}^{n}$$ obtained by the micro-solver is for $$X_{n}$$ fixed, it only depends on the ratio $$\\frac{\\delta t}{ \\epsilon }$$. Thus, without loss of generality, we may take $$\\epsilon =1$$. Then we have\n\n$$Y_{m+1}^{n}=Y_{m}^{n}+f \\bigl(X_{n},Y_{m}^{n}\\bigr)\\delta t+g \\bigl(X_{n}, Y_{m}^{n}\\bigr) \\Delta W_{m}^{n}+h\\bigl(X_{n},Y_{m}^{n} \\bigr)\\Delta N_{m}^{n},$$\n(8)\n\nwhere $$\\Delta W_{m}^{n}=W_{(m+1)\\delta t}^{n}-W_{m\\delta t}^{n}$$ are the Brownian increments, and $$\\Delta N_{m}^{n}=N_{(m+1) \\delta t}^{n}-N _{m\\delta t}^{n}$$ are the Poisson increments with intensity $$\\lambda _{2}$$.\n\nSimultaneously, $$Y_{m}^{n}$$ are numerically generated discrete solutions of the family of SDEs as well:\n\n$$dz_{t}^{n}=f\\bigl(X_{n},z_{t}^{n} \\bigr)\\,dt+g\\bigl(X_{n},z_{t}^{n} \\bigr)\\,dW_{t}^{n} +h\\bigl(X_{n},z _{t}^{n}\\bigr)\\,dN_{t}^{n},$$\n(9)\n\nwith initial conditions $$z_{0}^{n}=Y_{0}^{n}=y_{0}$$ and a time step δt (the choice of a fixed $$Y_{0}^{n}$$ for all n simplifies our estimates; in practice, we could take $$Y_{0}^{n}=Y_{M}^{n-1}$$ for all $$n>0$$).\n\nWe also present a discrete auxiliary process $$\\bar{X}_{n}$$, the Euler solution to the effective dynamics (2):\n\n$$\\bar{X}_{n+1}=\\bar{X}_{n}+\\bar{a}( \\bar{X}_{n})\\Delta t+ b(\\bar{X}_{n}) \\Delta B_{n}+c( \\bar{X}_{n})\\Delta P_{n}.$$\n(10)\n\nConcretely speaking, we are concentrating on estimating the $$L^{p}$$-strong error between the solution $$\\bar{x}_{t}$$ of the effective dynamics (2) and the solution $$X_{n}$$ of the multiscale integration scheme (5), (6), and (8) in this paper. Furthermore, we may easily obtain that the solution $$X_{n}$$ of the multiscale integration scheme can approximate the solution $$\\bar{x} _{t}$$ of the effective dynamics in both the sense of $$L^{q}$$ ($$0< q< p$$) and the probability by Hölder’s inequality and Chebyshev’s inequality. Then the process of proving the main result can be divided into two parts: $$(I')$$ the difference between the process $$\\bar{x}_{t_{n}}$$ and the auxiliary process $$\\bar{X}_{n}$$ (see Lemma 2.4 below); $$(\\mathit{II}')$$ the difference between the process $$X_{n}$$ and the auxiliary process $$\\bar{X}_{n}$$ (see Lemma 3.8 below).\n\nWe now describe the structure of the present paper. In Sect. 2, we introduce some a priori estimates to testify the error between the process $$\\bar{x}_{t_{n}}$$ and the auxiliary process $$\\bar{X}_{n}$$. In Sect. 3, we devote ourselves to proving the error between the process $$X_{n}$$ and the auxiliary process $$\\bar{X}_{n}$$. In Sect. 4, based on the above two estimates, we can derive our main result (see Theorem 4.1).\n\nThroughout this paper, we will denote by C or K a generic positive constant which may change its value from line to line. In chains of inequalities, we will adopt C, $$C^{\\prime }$$, $$C^{\\prime \\prime }$$, … or $$C_{1}$$, $$C_{2}$$, $$K_{1}$$, $$K_{2}$$, … to avoid confusion.\n\nSome a priori estimates\n\nIn this section, we shall give some a priori estimates in the first three lemmas. Then we can apply the obtained results to estimate the difference between the process $$\\bar{x}_{t_{n}}$$ and the auxiliary process $$\\bar{X}_{n}$$.\n\nFor convenience, we will extend the discrete numerical solution $$\\bar{X}_{n}$$ of (10) to continuous time. We first define the ‘step functions’\n\n$$Z(t)=\\sum_{k}\\bar{X}_{k}1_{[k\\Delta t, (k+1)\\Delta t)}(t),$$\n(11)\n\nwhere $$1_{G}$$ is the indicator function for the set G. Then we define\n\n$$\\bar{X}(t)=x_{0}+ \\int _{0}^{t}\\bar{a}\\bigl(Z(s)\\bigr) \\,ds+ \\int _{0}^{ t}b\\bigl(Z(s)\\bigr)\\,dB _{s}+ \\int _{0}^{t}c\\bigl(Z(s)\\bigr)\\,dP_{s}.$$\n(12)\n\n(Note that by construction $$Z(t-)=Z(t)$$ for $$t\\neq k\\Delta t$$.) It is not difficult to verify $$Z(t_{k})=\\bar{X}(t_{k})=\\bar{X}_{k}$$. The aim of this section is to prove a convergence result for $$\\bar{X}(t)$$ because the discrete numerical solution is interpolated to $$\\bar{X}(t)$$. Then, we can obtain the convergence result for $$\\bar{X}_{k}$$ straightly.\n\nFirstly, we show that the discrete numerical solution $$\\bar{X}_{k}$$ and the continuous approximation $$\\bar{X}(t)$$ have 2p bounded moments in the first two lemmas.\n\nLemma 2.1\n\nFor any $$p>1$$ and $$T>0$$, there exist positive constants $$\\Delta t^{ \\ast }$$ and $$C_{1}$$ such that, for all $$0<\\Delta t\\leqslant \\Delta t ^{\\ast }$$,\n\n$$\\mathbb{E} \\vert \\bar{X}_{k} \\vert ^{2p}\\leqslant C_{1}\\bigl(1+ \\vert x_{0} \\vert ^{2p}\\bigr)$$\n(13)\n\nfor $$k\\Delta t\\leqslant T$$, where $$C_{1}$$ is independent of $$(k, \\Delta t)$$.\n\nProof\n\nBy construction (12), we have\n\n\\begin{aligned} \\bar{X}_{k+1}= & x_{0}+ \\int _{0}^{(k+1)\\Delta t}\\bar{a}\\bigl(Z(s)\\bigr) \\,ds+ \\int _{0}^{(k+1)\\Delta t}b\\bigl(Z(s)\\bigr)\\,dB_{s}+ \\int _{0}^{(k+1)\\Delta t}c\\bigl(Z(s)\\bigr)\\,dP _{s}. \\end{aligned}\n\nThen we obtain\n\n\\begin{aligned} \\mathbb{E} \\vert \\bar{X}_{k+1} \\vert ^{2p} \\leqslant & C \\vert x_{0} \\vert ^{2p}+C\\mathbb{E} \\biggl\\vert \\int _{0}^{(k+1)\\Delta t}\\bar{a}\\bigl(Z(s)\\bigr)\\,ds \\biggr\\vert ^{2p} +C\\mathbb{E} \\biggl\\vert \\int _{0}^{(k+1)\\Delta t}b\\bigl(Z(s)\\bigr)\\,dB_{s} \\biggr\\vert ^{2p} \\\\ &{}+C\\mathbb{E} \\biggl\\vert \\int _{0}^{(k+1)\\Delta t}c\\bigl(Z(s)\\bigr)\\,dP_{s} \\biggr\\vert ^{2p} \\\\ :=&C\\bigl( \\vert x_{0} \\vert ^{2p}+I_{1}+I_{2}+I_{3} \\bigr) \\end{aligned}\n(14)\n\nfor $$(k+1)\\Delta t\\leqslant T$$. For $$I_{2}$$ and $$I_{3}$$, using $$\\tilde{P}_{t}:=P_{t}-\\lambda _{1}t$$, Burkhölder’s inequality , Hölder’s inequality, Remark 1.1, and (11), we have\n\n\\begin{aligned} I_{2} \\leqslant & C\\mathbb{E} \\biggl[ \\int _{0}^{(k+1)\\Delta t} \\bigl\\vert b\\bigl(Z(s)\\bigr) \\bigr\\vert ^{2}\\,ds \\biggr]^{p} \\\\ \\leqslant &C \\int _{0}^{(k+1)\\Delta t}\\mathbb{E} \\bigl\\vert b\\bigl(Z(s) \\bigr) \\bigr\\vert ^{2p}\\,ds \\\\ \\leqslant &C \\int _{0}^{(k+1)\\Delta t}\\mathbb{E}\\bigl(1+ \\bigl\\vert Z(s) \\bigr\\vert ^{2p}\\bigr)\\,ds \\\\ \\leqslant &C+C\\Delta t\\sum_{i=0}^{k} \\mathbb{E} \\vert \\bar{X}_{i} \\vert ^{2p} \\end{aligned}\n(15)\n\nand\n\n\\begin{aligned} I_{3} =&\\mathbb{E} \\biggl\\vert \\int _{0}^{(k+1)\\Delta t}c\\bigl(Z(s)\\bigr)\\,d \\tilde{P}_{s}+ \\lambda _{1} \\int _{0}^{(k+1)\\Delta t}c\\bigl(Z(s)\\bigr)\\,ds \\biggr\\vert ^{2p} \\\\ \\leqslant & C\\mathbb{E} \\biggl\\vert \\int _{0}^{(k+1)\\Delta t}c\\bigl(Z(s)\\bigr)\\,d \\tilde{P}_{s} \\biggr\\vert ^{2p}+C \\mathbb{E} \\biggl\\vert \\lambda _{1} \\int _{0}^{(k+1) \\Delta t}c\\bigl(Z(s)\\bigr)\\,ds \\biggr\\vert ^{2p} \\\\ \\leqslant & C\\mathbb{E} \\biggl[ \\int _{0}^{(k+1)\\Delta t} \\bigl\\vert c\\bigl(Z(s)\\bigr) \\bigr\\vert ^{2}\\,ds \\biggr]^{p}+C\\mathbb{E} \\biggl\\vert \\lambda _{1} \\int _{0}^{(k+1)\\Delta t}c\\bigl(Z(s)\\bigr)\\,ds \\biggr\\vert ^{2p} \\\\ \\leqslant & C \\int _{0}^{(k+1)\\Delta t}\\mathbb{E} \\bigl\\vert c\\bigl(Z(s) \\bigr) \\bigr\\vert ^{2p}\\,ds+C \\int _{0}^{(k+1)\\Delta t}\\mathbb{E} \\bigl\\vert c\\bigl(Z(s) \\bigr) \\bigr\\vert ^{2p}\\,ds \\\\ \\leqslant & C \\int _{0}^{(k+1)\\Delta t}\\mathbb{E}\\bigl(1+ \\bigl\\vert Z(s) \\bigr\\vert ^{2p}\\bigr)\\,ds \\\\ \\leqslant & C+C\\Delta t\\sum_{i=0}^{k} \\mathbb{E} \\vert \\bar{X}_{i} \\vert ^{2p}. \\end{aligned}\n(16)\n\nSimilarly, we may deal with $$I_{1}$$ and have\n\n\\begin{aligned} I_{1} \\leqslant & C+C\\Delta t\\sum _{i=0}^{k}\\mathbb{E} \\vert \\bar{X}_{i} \\vert ^{2p}. \\end{aligned}\n(17)\n\nChoosing Δt sufficiently small and by (14)–(17), we have\n\n$$\\mathbb{E} \\vert \\bar{X}_{k+1} \\vert ^{2p}\\leqslant C \\bigl(1+ \\vert x_{0} \\vert ^{2p}\\bigr)+C\\Delta t \\sum _{i=0}^{k}\\mathbb{E} \\vert \\bar{X}_{i} \\vert ^{2p},$$\n\nwhich, with the aid of discrete Gronwall’s inequality, gives the result. □\n\nLemma 2.2\n\nFor any $$p>1$$ and $$T>0$$, there exist positive constants $$\\Delta t^{ \\ast }$$ and $$C_{2}$$ such that, for all $$0<\\Delta t\\leqslant \\Delta t ^{\\ast }$$,\n\n\\begin{aligned} \\mathbb{E}\\sup_{t\\in [0,T]} \\bigl\\vert \\bar{X}(t) \\bigr\\vert ^{2p}\\leqslant C_{2}\\bigl(1+ \\vert x_{0} \\vert ^{2p}\\bigr), \\end{aligned}\n(18)\n\nwhere $$C_{2}$$ is independent of Δt.\n\nProof\n\nFrom (12), we have\n\n\\begin{aligned} \\bigl\\vert \\bar{X}(t) \\bigr\\vert ^{2p} \\leqslant C \\vert x_{0} \\vert ^{2p}+C \\biggl\\vert \\int _{0}^{ t}\\bar{a}\\bigl(Z(s)\\bigr)\\,ds \\biggr\\vert ^{2p} &+C \\biggl\\vert \\int _{0}^{t}b\\bigl(Z(s)\\bigr)\\,dB_{s} \\biggr\\vert ^{2p}+C \\biggl\\vert \\int _{0}^{t}c\\bigl(Z(s)\\bigr)\\,dP_{s} \\biggr\\vert ^{2p}. \\end{aligned}\n\nThus, by the definition of $$\\tilde{P}_{t}$$, we have\n\n\\begin{aligned} \\mathbb{E}\\sup_{t\\in [0,T]} \\bigl\\vert \\bar{X}(t) \\bigr\\vert ^{2p} \\leqslant & C \\vert x_{0} \\vert ^{2p}+C \\mathbb{E}\\sup_{t\\in [0,T]} \\biggl\\vert \\int _{0}^{ t}\\bar{a}\\bigl(Z(s)\\bigr)\\,ds \\biggr\\vert ^{2p}+C \\mathbb{E}\\sup_{t\\in [0,T]} \\biggl\\vert \\int _{0}^{t}b\\bigl(Z(s)\\bigr)\\,dB_{s} \\biggr\\vert ^{2p} \\\\ &{}+C\\mathbb{E}\\sup_{t\\in [0,T]} \\biggl\\vert \\int _{0}^{t}c\\bigl(Z(s)\\bigr)\\,d\\tilde{P} _{s} \\biggr\\vert ^{2p}+C\\mathbb{E}\\sup _{t\\in [0,T]} \\biggl\\vert \\lambda _{1} \\int _{0}^{t}c\\bigl(Z(s)\\bigr)\\,ds \\biggr\\vert ^{2p}. \\end{aligned}\n(19)\n\nBy the same method as in the previous lemma, we obtain\n\n$$\\mathbb{E}\\sup_{t\\in [0,T]} \\bigl\\vert \\bar{X}(t) \\bigr\\vert ^{2p}\\leqslant C\\bigl(1+ \\vert x_{0} \\vert ^{2p}\\bigr)+C \\int _{0}^{T}\\mathbb{E} \\bigl\\vert Z(s) \\bigr\\vert ^{2p}\\,ds.$$\n\nApplying (11) and Lemma 2.1 over the interval [0,T], we obtain result (18). □\n\nSecondly, we show that the continuous-time approximation remains close to the step functions Z(s) in a strong sense.\n\nLemma 2.3\n\nFor any $$p>1$$ and $$T>0$$, there exist positive constants $$\\Delta t^{ \\ast }$$ and $$C_{3}$$ such that, for all $$0<\\Delta t\\leqslant \\Delta t ^{\\ast }$$,\n\n\\begin{aligned} \\mathbb{E}\\sup_{t\\in [0,T]} \\bigl\\vert \\bar{X}(t)-Z(t) \\bigr\\vert ^{2p}\\leqslant C_{3} \\Delta t^{p}\\bigl(1+ \\vert x_{0} \\vert ^{2p}\\bigr), \\end{aligned}\n(20)\n\nwhere $$C_{3}$$ is independent of Δt.\n\nProof\n\nConsider $$t\\in [k\\Delta t,(k+1)\\Delta t]\\subseteq [0,T]$$, we have\n\n$$\\bar{X}(t)-Z(t)=\\bar{X}(t)-\\bar{X}_{k}= \\int _{k\\Delta t}^{t}\\bar{a}\\bigl(Z(s)\\bigr)\\,ds+ \\int _{k\\Delta t}^{t}b\\bigl(Z(s)\\bigr)\\,dB_{s}+ \\int _{k\\Delta t}^{t}c\\bigl(Z(s)\\bigr)\\,dP_{s}.$$\n\nThus\n\n$$\\bigl\\vert \\bar{X}(t)- Z(t) \\bigr\\vert ^{2p}\\leqslant C \\biggl\\vert \\int _{k\\Delta t}^{t}\\bar{a}\\bigl(Z(s)\\bigr)\\,ds \\biggr\\vert ^{2p}+C \\biggl\\vert \\int _{k\\Delta t}^{t}b\\bigl(Z(s)\\bigr)\\,dB_{s} \\biggr\\vert ^{2p}+C \\biggl\\vert \\int _{k\\Delta t}^{t}c\\bigl(Z(s)\\bigr)\\,dP_{s} \\biggr\\vert ^{2p}$$\n\nfor each $$t\\in [k\\Delta t,(k+1)\\Delta t]$$. Then\n\n\\begin{aligned} \\sup_{t\\in [0,T]} \\bigl\\vert \\bar{X}(t)-Z(t) \\bigr\\vert ^{2p} \\leqslant & \\max_{k=0,1,\\ldots T/\\Delta t-1}\\sup _{\\tau \\in [k\\Delta t,(k+1)\\Delta t]} \\biggl\\{ C \\biggl\\vert \\int _{k\\Delta t}^{\\tau }\\bar{a}\\bigl(Z(s)\\bigr)\\,ds \\biggr\\vert ^{2p} \\\\ &{}+C \\biggl\\vert \\int _{k\\Delta t}^{\\tau }b\\bigl(Z(s)\\bigr)\\,dB_{s} \\biggr\\vert ^{2p}+C \\biggl\\vert \\int _{k\\Delta t}^{\\tau }c\\bigl(Z(s)\\bigr)\\,d \\tilde{P}_{s} \\biggr\\vert ^{2p} \\\\ &{}+C \\biggl\\vert \\lambda _{1} \\int _{k\\Delta t}^{\\tau }c\\bigl(Z(s)\\bigr)\\,ds \\biggr\\vert ^{2p} \\biggr\\} . \\end{aligned}\n(21)\n\nNow, taking expectations on both sides of (21), then using Burkhölder’s inequality on the martingale integrals and by Hölder’s inequality, we have\n\n\\begin{aligned} \\mathbb{E}\\sup_{t\\in [0,T]} \\bigl\\vert \\bar{X}(t)- Z(t) \\bigr\\vert ^{2p} \\leqslant & \\max_{k=0,1,\\ldots T/\\Delta t-1} \\biggl\\{ C\\Delta t^{2p-1} \\int _{k\\Delta t}^{(k+1)\\Delta t}\\mathbb{E} \\bigl\\vert \\bar{a} \\bigl(Z(s)\\bigr) \\bigr\\vert ^{2p}\\,ds \\\\ &{}+C\\Delta t^{p-1} \\int _{k\\Delta t}^{(k+1)\\Delta t}\\mathbb{E} \\bigl\\vert b\\bigl(Z(s) \\bigr) \\bigr\\vert ^{2p}\\,ds \\\\ &{}+C\\Delta t^{p-1} \\int _{k\\Delta t}^{(k+1)\\Delta t}\\mathbb{E} \\bigl\\vert c\\bigl(Z(s) \\bigr) \\bigr\\vert ^{2p}\\,ds \\\\ &{}+C\\Delta t^{2p-1} \\int _{k\\Delta t}^{(k+1)\\Delta t}\\mathbb{E} \\bigl\\vert c\\bigl(Z(s) \\bigr) \\bigr\\vert ^{2p}\\,ds \\biggr\\} . \\end{aligned}\n\nApplying Remarks 1.1 and 1.2, we have\n\n$$\\mathbb{E}\\sup_{t\\in [0,T]} \\bigl\\vert \\bar{X}(t)- Z(t) \\bigr\\vert ^{2p}\\leqslant \\max_{k=0,1,\\ldots T/\\Delta t-1} \\biggl\\{ C\\bigl( \\Delta t^{p-1}+\\Delta t^{2p-1}\\bigr) \\int _{k\\Delta t}^{(k+1)\\Delta t}\\bigl(1+\\mathbb{E} \\bigl\\vert Z(s) \\bigr\\vert ^{2p}\\bigr)\\,ds \\biggr\\} .$$\n\nBut $$Z(s)\\equiv \\bar{X}_{k}$$ on $$[k\\Delta t,(k+1)\\Delta t)$$, hence, it follows from Lemma 2.1 that\n\n$$\\mathbb{E}\\sup_{t\\in [0,T]} \\bigl\\vert \\bar{X}(t)- Z(t) \\bigr\\vert ^{2p}\\leqslant C\\bigl(\\Delta t^{p}+\\Delta t^{2p}\\bigr)\\bigl[1+C_{1}\\bigl(1+ \\vert x_{0} \\vert ^{2p}\\bigr)\\bigr],$$\n\nwhich yields result (20). □\n\nLastly, we prove a strong convergence result for $$\\bar{X}(t)$$.\n\nLemma 2.4\n\nFor any $$p>1$$ and $$T>0$$, there exist positive constants $$\\Delta t^{ \\ast }$$ and $$C_{4}$$ such that, for all $$0<\\Delta t\\leqslant \\Delta t ^{\\ast }$$,\n\n\\begin{aligned} \\mathbb{E}\\sup_{t\\in [0,T]} \\bigl\\vert \\bar{X}(t)- \\bar{x}(t) \\bigr\\vert ^{2p}\\leqslant C _{4}\\Delta t^{p}\\bigl(1+ \\vert x_{0} \\vert ^{2p}\\bigr), \\end{aligned}\n(22)\n\nwhere $$C_{4}$$ is independent of Δt.\n\nProof\n\nBy construction (12), we get\n\n\\begin{aligned} \\bar{X}(t)-\\bar{x}(t) =& \\int _{0}^{t}\\bigl[\\bar{a}\\bigl(Z(s)\\bigr)-\\bar{a} \\bigl(\\bar{x}(s)\\bigr)\\bigr]\\,ds+ \\int _{0}^{t}\\bigl[b\\bigl(Z(s)\\bigr)-b\\bigl( \\bar{x}(s)\\bigr)\\bigr]\\,dB_{s} \\\\ &{}+ \\int _{0}^{t}\\bigl[c\\bigl(Z(s)\\bigr)-c\\bigl( \\bar{x}(s)\\bigr)\\bigr]\\,dP_{s}. \\end{aligned}\n\nHence, we have\n\n\\begin{aligned} \\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\bigl\\vert \\bar{X}(t)- \\bar{x}(t) \\bigr\\vert ^{2p} \\leqslant & C\\mathbb{E}\\sup _{t\\in [0,t_{1}]} \\biggl\\vert \\int _{0}^{t}\\bigl[\\bar{a}\\bigl(Z(s)\\bigr)- \\bar{a} \\bigl(\\bar{x}(s)\\bigr)\\bigr]\\,ds \\biggr\\vert ^{2p} \\\\ &{}+C\\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\biggl\\vert \\int _{0}^{t}\\bigl[b\\bigl(Z(s)\\bigr)-b\\bigl( \\bar{x}(s)\\bigr)\\bigr]\\,dB_{s} \\biggr\\vert ^{2p} \\\\ &{}+C\\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\biggl\\vert \\int _{0}^{t}\\bigl[c\\bigl(Z(s)\\bigr)-c\\bigl( \\bar{x}(s)\\bigr)\\bigr]\\,dP_{s} \\biggr\\vert ^{2p} \\\\ :=&C(I_{1}+I_{2}+I_{3}) \\end{aligned}\n(23)\n\nfor any $$0\\leqslant t_{1}\\leqslant T$$. By the definition of $$\\tilde{P}_{t}$$, Burkhölder’s inequality, Hölder’s inequality, and (H1), we obtain\n\n\\begin{aligned}& \\begin{aligned}[b] I_{2} &\\leqslant C\\mathbb{E} \\biggl[ \\int _{0}^{t_{1}} \\bigl\\vert b\\bigl(Z(s)\\bigr)-b \\bigl(\\bar{x}(s)\\bigr) \\bigr\\vert ^{2} \\,ds \\biggr]^{p} \\\\ &\\leqslant C \\int _{0}^{t_{1}}\\mathbb{E} \\bigl\\vert b\\bigl(Z(s) \\bigr)-b\\bigl(\\bar{x}(s)\\bigr) \\bigr\\vert ^{2p} \\,ds \\\\ &\\leqslant C \\int _{0}^{t_{1}}\\mathbb{E} \\bigl\\vert Z(s)- \\bar{x}(s) \\bigr\\vert ^{2p}\\,ds, \\end{aligned} \\end{aligned}\n(24)\n\\begin{aligned}& \\begin{aligned}[b] I_{3} &\\leqslant C\\mathbb{E}\\sup _{t\\in [0,t_{1}]} \\biggl\\vert \\int _{0}^{t}\\bigl[c\\bigl(Z(s)\\bigr)-c\\bigl( \\bar{x}(s)\\bigr)\\bigr]\\,d\\tilde{P}_{s} \\biggr\\vert ^{2p} \\\\ &\\quad {}+C\\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\biggl\\vert \\lambda _{1} \\int _{0}^{t}\\bigl[c\\bigl(Z(s)\\bigr)-c\\bigl( \\bar{x}(s)\\bigr)\\bigr]\\,ds \\biggr\\vert ^{2p} \\\\ &\\leqslant C\\mathbb{E} \\biggl[ \\int _{0}^{t_{1}} \\bigl\\vert c\\bigl(Z(s)\\bigr)-c \\bigl(\\bar{x}(s)\\bigr) \\bigr\\vert ^{2} \\,ds \\biggr]^{p} \\\\ &\\quad {}+C\\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\biggl\\vert \\lambda _{1} \\int _{0}^{t}\\bigl[c\\bigl(Z(s)\\bigr)-c\\bigl( \\bar{x}(s)\\bigr)\\bigr]\\,ds \\biggr\\vert ^{2p} \\\\ &\\leqslant C \\int _{0}^{t_{1}}\\mathbb{E} \\bigl\\vert c\\bigl(Z(s) \\bigr)-c\\bigl(\\bar{x}(s)\\bigr) \\bigr\\vert ^{2p}\\,ds \\\\ &\\leqslant C \\int _{0}^{t_{1}}\\mathbb{E} \\bigl\\vert Z(s)- \\bar{x}(s) \\bigr\\vert ^{2p}\\,ds. \\end{aligned} \\end{aligned}\n(25)\n\nDealing with $$I_{1}$$ similarly and combining (23)–(25), it follows that\n\n\\begin{aligned} \\mathbb{E}\\sup_{t\\in [0,T]} \\bigl\\vert \\bar{X}(t)-\\bar{x}(t) \\bigr\\vert ^{2p} \\leqslant & C \\int _{0}^{t_{1}}\\mathbb{E} \\bigl\\vert Z(s)- \\bar{x}(s) \\bigr\\vert ^{2p}\\,ds \\\\ \\leqslant & C \\int _{0}^{t_{1}}\\mathbb{E} \\bigl\\vert \\bar{X}(s)- \\bar{x}(s) \\bigr\\vert ^{2p}\\,ds+C \\int _{0}^{t_{1}}\\mathbb{E} \\bigl\\vert \\bar{X}(s)-Z(s) \\bigr\\vert ^{2p}\\,ds. \\end{aligned}\n\nApplying Lemma 2.3, we obtain\n\n\\begin{aligned} \\mathbb{E}\\sup_{t\\in [0,T]} \\bigl\\vert \\bar{X}(t)-\\bar{x}(t) \\bigr\\vert ^{2p} &\\leqslant C _{5} \\Delta t^{p}\\bigl(1+ \\vert x_{0} \\vert ^{2p} \\bigr)+C_{6} \\int _{0}^{t_{1}}\\mathbb{E} \\sup_{t\\in [0,s]} \\bigl\\vert \\bar{X}(t)-\\bar{x}(t) \\bigr\\vert ^{2p}\\,ds. \\end{aligned}\n\nBy continuous Gronwall’s inequality, the desired estimate (22) is obtained. □\n\nStrong convergence of the scheme\n\nIn this section, some a priori estimates would be provided in the first seven lemmas. Then we can use the established estimates to get the error between the process $$X_{n}$$ and the auxiliary process $$\\bar{X}_{n}$$.\n\nNow, we firstly show the 2pth moment estimates for the processes $$z_{t}^{n}$$, $$X_{n}$$, and $$Y_{m}^{n}$$.\n\nLemma 3.1\n\nFor any $$p>1$$ and $$T>0$$, there exists a positive constant $$K_{1}$$ such that\n\n\\begin{aligned} \\sup_{0\\leqslant t\\leqslant T} \\mathbb{E} \\bigl\\vert z_{t}^{n} \\bigr\\vert ^{2p}\\leqslant K _{1}. \\end{aligned}\n(26)\n\nProof\n\nFor $$|z_{t}^{n}|^{2p}$$, direct computation with Itô’s formula gives that\n\n\\begin{aligned} \\bigl\\vert z_{t}^{n} \\bigr\\vert ^{2p} =& \\vert y_{0} \\vert ^{2p}+2p \\int _{0}^{t} \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2p-2}\\bigl(f\\bigl(X _{n},z_{s}^{n} \\bigr),z_{s}^{n}\\bigr)\\,ds+2p \\int _{0}^{t} \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2p-2}\\bigl(g\\bigl(X _{n},z_{s}^{n} \\bigr),z_{s}^{n}\\bigr)\\,dW_{s}^{n} \\\\ &{}+2p \\int _{0}^{t} \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2p-2}\\bigl( h\\bigl(X_{n},z_{s}^{n} \\bigr),z_{s}^{n}\\bigr) \\,dN _{s}^{n}+2p(p-1) \\int _{0}^{t} \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2(p-2)}\\bigl( g\\bigl(X_{n},z_{s}^{n} \\bigr),z _{s}^{n}\\bigr)^{2}\\,ds \\\\ &{}+2p(p-1)\\lambda _{2} \\int _{0}^{t} \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2(p-2)}\\bigl( h\\bigl(X_{n},z_{s} ^{n}\\bigr),z_{s}^{n}\\bigr)^{2}\\,ds+p \\int _{0}^{t} \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2p-2} \\bigl\\vert g\\bigl(X_{n},z_{s} ^{n}\\bigr) \\bigr\\vert ^{2}\\,ds \\\\ &{}+p\\lambda _{2} \\int _{0}^{t} \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2p-2} \\bigl\\vert h\\bigl(X_{n},z_{s}^{n} \\bigr) \\bigr\\vert ^{2}\\,ds. \\end{aligned}\n(27)\n\nWe have by (H3)\n\n$$\\bigl(f\\bigl(X_{n},z_{s}^{n} \\bigr),z_{s}^{n}\\bigr)\\leqslant -\\beta _{1} \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2}+ \\beta _{2}.$$\n(28)\n\nBy Young’s inequality and (H2), we have\n\n\\begin{aligned}& \\begin{aligned} 2\\lambda _{2}\\bigl( h\\bigl(X_{n},z_{s}^{n} \\bigr),z_{s}^{n}\\bigr)\\leqslant \\frac{\\lambda _{2}^{2}}{\\beta _{1}} \\bigl\\vert h\\bigl(X_{n},z_{s}^{n}\\bigr) \\bigr\\vert ^{2}+\\beta _{1} \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2} \\leqslant C+\\beta _{1} \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2}, \\end{aligned} \\end{aligned}\n(29)\n\\begin{aligned}& \\bigl(g\\bigl(X_{n},z_{s}^{n}\\bigr),z_{s}^{n} \\bigr)^{2}\\leqslant C \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2}, \\end{aligned}\n(30)\n\nand\n\n$$\\bigl(h\\bigl(X_{n},z_{s}^{n} \\bigr),z_{s}^{n}\\bigr)^{2}\\leqslant C \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2}.$$\n(31)\n\nTaking expectations on both sides of (27) and combining (28)–(31), we have\n\n$$\\mathbb{E} \\bigl\\vert z_{t}^{n} \\bigr\\vert ^{2p}\\leqslant \\vert y_{0} \\vert ^{2p}-p\\beta _{1}\\mathbb{E} \\int _{0}^{t} \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2p}\\,ds+C_{p,\\lambda _{2},\\beta _{1},\\beta _{2}} \\mathbb{E} \\int _{0}^{t} \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2p-2}\\,ds.$$\n\nMoreover, taking $$k>0$$ small enough for Young’s inequality in the form $$ab\\leqslant k|b|^{m}+C_{k,m}|a|^{m/m-1}$$, we have\n\n$$\\mathbb{E} \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2p}\\leqslant \\vert y_{0} \\vert ^{2p}-C_{p,\\lambda _{2}, \\beta _{1},\\beta _{2}} \\mathbb{E} \\int _{0}^{t} \\bigl\\vert z_{s}^{n} \\bigr\\vert ^{2p}\\,ds+C_{p, \\lambda _{2},\\beta _{1},\\beta _{2}}^{\\prime }t,$$\n\nwhich, with the help of continuous Gronwall’s inequality, yields the result. □\n\nThe proof of the following lemma is similar to Sect. 2. We omit the details.\n\nLemma 3.2\n\nFor any $$p>1$$ and small enough Δt, there exists a positive constant $$K_{2}$$ such that\n\n$$\\sup_{0\\leqslant n\\leqslant T/\\Delta t}\\mathbb{E} \\vert X_{n} \\vert ^{2p}\\leqslant K_{2},$$\n(32)\n\nwhere $$K_{2}$$ is independent of Δt.\n\nLemma 3.3\n\nFor small enough δt and $$p>1$$, there exists a positive constant $$K_{3}$$ such that\n\n$$\\sup_{\\stackrel{0\\leqslant n\\leqslant \\frac{T}{\\Delta t}}{0< m< M}}E \\bigl\\vert Y _{m}^{n} \\bigr\\vert ^{2p}\\leqslant K_{3},$$\n(33)\n\nwhere $$K_{3}$$ is independent of $$(M,\\delta t)$$.\n\nProof\n\nNow we give a definition of $$Y_{t}^{n}$$ by\n\n\\begin{aligned} Y_{t}^{n}:=y_{0}+ \\int _{0}^{t}f\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,ds+ \\int _{0}^{t}g\\bigl(X _{n}, \\hat{Y}_{s}^{n}\\bigr)\\,dW_{s}^{n}+ \\int _{0}^{t}h\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,dN _{s}^{n}, \\end{aligned}\n\nwhere $$\\hat{Y}_{t}^{n}:=Y_{k}^{n}$$ for $$t\\in [k\\delta t,(k+1)\\delta t)$$, $$k=1,2,\\ldots M$$, and $$\\hat{Y}_{t_{k}}^{n}=Y_{t_{k}}^{n}=Y_{k}^{n}$$ ($$t_{k}=k\\delta t$$).\n\nThus we have\n\n\\begin{aligned}[b] Y_{k+1}^{n}&=y_{0}+ \\int _{0}^{(k+1)\\delta t}f\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,ds+ \\int _{0}^{(k+1)\\delta t}g\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,dW_{s}^{n}\\\\ &\\quad {}+ \\int _{0} ^{(k+1)\\delta t}h\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,dN_{s}^{n}. \\end{aligned}\n(34)\n\nTaking the 2pth moment and expectations on both sides of (34), we get\n\n\\begin{aligned} \\mathbb{E} \\bigl\\vert Y_{k+1}^{n} \\bigr\\vert ^{2p} =&\\mathbb{E} \\biggl\\vert y_{0}+ \\int _{0}^{(k+1) \\delta t}f\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,ds+ \\int _{0}^{(k+1)\\delta t}g\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,dW_{s}^{n} \\\\ &{}+ \\int _{0}^{(k+1)\\delta t}h\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,dN_{s}^{n} \\biggr\\vert ^{2p} \\\\ \\leqslant &C \\vert y_{0} \\vert ^{2p}+C\\mathbb{E} \\biggl\\vert \\int _{0}^{(k+1)\\delta t}f\\bigl(X _{n}, \\hat{Y}_{s}^{n}\\bigr)\\,ds \\biggr\\vert ^{2p}+C \\mathbb{E} \\biggl\\vert \\int _{0}^{(k+1) \\delta t}g\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,dW_{s}^{n} \\biggr\\vert ^{2p} \\\\ &{}+C\\mathbb{E} \\biggl\\vert \\int _{0}^{(k+1)\\delta t}h\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,dN _{s}^{n} \\biggr\\vert ^{2p}. \\end{aligned}\n(35)\n\nUsing Hölder’s inequality, Remark 1.1, and the definition of $$\\hat{Y}_{t}^{n}$$, we have\n\n\\begin{aligned} \\mathbb{E} \\biggl\\vert \\int _{0}^{(k+1)\\delta t}f\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,ds \\biggr\\vert ^{2p} \\leqslant &C \\int _{0}^{(k+1)\\delta t}\\mathbb{E} \\bigl\\vert f \\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr) \\bigr\\vert ^{2p}\\,ds \\\\ \\leqslant &C \\int _{0}^{(k+1)\\delta t}\\mathbb{E}\\bigl(1+ \\vert X_{n} \\vert ^{2p}+ \\bigl\\vert \\hat{Y}_{s}^{n} \\bigr\\vert ^{2p}\\bigr)\\,ds \\\\ \\leqslant &C+C\\mathbb{E} \\vert X_{n} \\vert ^{2p}+C \\delta t\\sum_{i=0}^{k} \\mathbb{E} \\bigl\\vert Y_{i}^{n} \\bigr\\vert ^{2p}. \\end{aligned}\n(36)\n\nBy the definition of $$\\tilde{N}_{t}$$, Burkhölder’s inequality, Hölder’s inequality, and (H2), we obtain\n\n\\begin{aligned} \\mathbb{E} \\biggl\\vert \\int _{0}^{(k+1)\\delta t}g\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,dW _{s}^{n} \\biggr\\vert ^{2p} \\leqslant &C\\mathbb{E} \\biggl[ \\int _{0}^{(k+1)\\delta t} \\bigl\\vert g\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr) \\bigr\\vert ^{2}\\,ds \\biggr]^{p} \\\\ \\leqslant &C \\int _{0}^{(k+1)\\delta t}\\mathbb{E} \\bigl\\vert g \\bigl(X_{n},\\hat{Y}_{s} ^{n}\\bigr) \\bigr\\vert ^{2p}\\,ds \\\\ \\leqslant &C_{p,T} \\end{aligned}\n(37)\n\nand\n\n\\begin{aligned}& \\mathbb{E} \\biggl\\vert \\int _{0}^{(k+1)\\delta t}h\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,dN _{s}^{n} \\biggr\\vert ^{2p} \\\\& \\quad = \\mathbb{E} \\biggl\\vert \\int _{0}^{(k+1)\\delta t}h\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,d\\tilde{N}_{s}^{n}+\\lambda _{2} \\int _{0}^{(k+1)\\delta t}h\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,ds \\biggr\\vert ^{2p} \\\\& \\quad \\leqslant C\\mathbb{E} \\biggl[ \\int _{0}^{(k+1)\\delta t} \\bigl\\vert h\\bigl(X_{n}, \\hat{Y} _{s}^{n}\\bigr) \\bigr\\vert ^{2}\\,ds \\biggr]^{p} \\\\& \\qquad {}+C\\mathbb{E} \\biggl\\vert \\lambda _{2} \\int _{0}^{(k+1)\\delta t}h\\bigl(X_{n}, \\hat{Y}_{s}^{n}\\bigr)\\,ds \\biggr\\vert ^{2p} \\\\& \\quad \\leqslant C \\int _{0}^{(k+1)\\delta t}\\mathbb{E} \\bigl\\vert h \\bigl(X_{n},\\hat{Y}_{s} ^{n}\\bigr) \\bigr\\vert ^{2p}\\,ds \\\\& \\quad \\leqslant C_{p,T,\\lambda _{2}}. \\end{aligned}\n(38)\n\nSubstituting (36)–(38) into (35) gives that\n\n\\begin{aligned} \\mathbb{E} \\bigl\\vert Y_{k+1}^{n} \\bigr\\vert ^{2p}\\leqslant C\\bigl(1+ \\vert y_{0} \\vert ^{2p}\\bigr)+C\\mathbb{E} \\vert X _{n} \\vert ^{2p}+C\\delta t\\sum_{i=0}^{k} \\mathbb{E} \\bigl\\vert Y_{i}^{n} \\bigr\\vert ^{2p}. \\end{aligned}\n\nUsing Lemma 3.2 and discrete Gronwall’s inequality, we get the result. □\n\nNext, we give the 2pth moment deviation between two successive iterations of the micro-solver.\n\nLemma 3.4\n\nFor small enough δt and $$p>1$$, there exists a positive constant $$K_{4}$$ such that\n\n$$\\sup_{\\stackrel{0\\leqslant n\\leqslant \\frac{T}{\\Delta t}}{0< m< M}} \\mathbb{E} \\bigl\\vert Y_{m+1}^{n}- Y_{m}^{n} \\bigr\\vert ^{2p}\\leqslant K_{4}(\\delta t)^{p},$$\n(39)\n\nwhere $$K_{4}$$ is independent of $$(M, \\delta t)$$.\n\nProof\n\nIt is clear that\n\n$$Y_{m+1}^{n}- Y_{m}^{n}=f \\bigl(X_{n},Y_{m}^{n}\\bigr)\\delta t+g \\bigl(X_{n},Y_{m}^{n}\\bigr) \\Delta W_{m}^{n}+h\\bigl(X_{n},Y_{m}^{n} \\bigr)\\Delta \\tilde{N}_{m}^{n}+\\lambda _{2}h \\bigl(X_{n},Y_{m}^{n}\\bigr)\\delta t.$$\n(40)\n\nTaking the 2pth moment and expectations on both sides of (40), we get\n\n\\begin{aligned}& \\mathbb{E} \\bigl\\vert Y_{m+1}^{n}- Y_{m}^{n} \\bigr\\vert ^{2p} \\\\& \\quad = \\mathbb{E} \\bigl\\vert f\\bigl(X_{n},Y_{m} ^{n}\\bigr)\\delta t+g\\bigl(X_{n},Y_{m}^{n} \\bigr)\\Delta W_{m}^{n}+h\\bigl(X_{n},Y_{m}^{n} \\bigr) \\Delta \\tilde{N}_{m}^{n}+\\lambda _{2}h \\bigl(X_{n},Y_{m}^{n}\\bigr)\\delta t \\bigr\\vert ^{2p} \\\\& \\quad \\leqslant C\\delta t^{2p}\\mathbb{E} \\bigl\\vert f \\bigl(X_{n},Y_{m}^{n}\\bigr) \\bigr\\vert ^{2P}+C( \\delta t)^{p}\\mathbb{E} \\bigl\\vert g \\bigl(X_{n},Y_{m}^{n}\\bigr) \\bigr\\vert ^{2p} \\\\& \\qquad {}+C\\delta t^{2p}\\mathbb{E} \\bigl\\vert h\\bigl(X_{n},Y_{m}^{n} \\bigr) \\bigr\\vert ^{2p}+C\\delta t^{p} \\mathbb{E} \\bigl\\vert h\\bigl(X_{n},Y_{m}^{n}\\bigr) \\bigr\\vert ^{2p}. \\end{aligned}\n\nBy Remark 1.1 and (H2), we have\n\n$$\\mathbb{E} \\bigl\\vert Y_{m+1}^{n}- Y_{m}^{n} \\bigr\\vert ^{2p}\\leqslant C\\delta t^{2p}\\bigl(1+ \\mathbb{E} \\vert X_{n} \\vert ^{2p}+\\mathbb{E} \\bigl\\vert Y_{m}^{n} \\bigr\\vert ^{2p}\\bigr)+C\\delta t^{p}.$$\n\nUsing Lemmas 3.2 and 3.3, for small enough δt, we get\n\n$$\\mathbb{E} \\bigl\\vert Y_{m+1}^{n}- Y_{m}^{n} \\bigr\\vert ^{2p}\\leqslant K_{4}\\delta t^{p}.$$\n\n□\n\nLemma 2.1 in shows that $$z_{t}^{n}$$ is statistically equivalent to a shifted and rescaled version of $$y_{t}^{\\epsilon }$$, with x being a parameter, that is, $$z_{t}^{k}\\sim y_{t-t_{k}/\\epsilon }^{\\epsilon }$$.\n\nIt is proved in that dynamic (9) is ergodic with a unique invariant measure $$\\mu ^{X_{n}}$$ (Assumptions H3–H5), which possesses exponentially mixing property in the following sense. Let $$P^{X_{n}}(t, z, E)$$ denote the transition probability of (9). Then there exist positive constants $$\\eta , \\alpha <1$$ such that\n\n\\begin{aligned} \\bigl\\vert P^{X_{n}}(t, z, E)-\\mu ^{X_{n}}(E) \\bigr\\vert < \\eta \\alpha ^{t} \\end{aligned}\n\nfor every $$E\\in \\mathcal{B}(R^{m})$$.\n\nThen we establish the mixing properties of the auxiliary processes $$z_{t}^{n}$$. Note that $$\\bar{a}(X_{n})$$ is the average of $$a(X_{n},y)$$ with respect to $$\\mu ^{X_{n}}$$, which is the invariant measure induced by $$z_{t}^{n}$$. We denote $$z_{m}^{n}=z_{m\\delta t}^{n}$$.\n\nLemma 3.5\n\nFor small enough δt and $$p>1$$, there exists a positive constant $$K_{5}$$ such that\n\n$$\\mathbb{E} \\Biggl\\vert \\frac{1}{M}\\sum _{m=1}^{M}a\\bigl(X_{n},z_{m}^{n} \\bigr)-\\bar{a}(X _{n}) \\Biggr\\vert ^{2p}\\leqslant K_{5} \\biggl[\\frac{-\\log _{\\alpha }{M\\delta t+1}}{M \\delta t}+\\frac{1}{M} \\biggr],$$\n(41)\n\nwhere $$K_{5}$$ is independent of $$(M, \\delta t)$$.\n\nProof\n\nBy (H2) and Remark (1.2), we have\n\n\\begin{aligned}& \\mathbb{E} \\Biggl\\vert \\frac{1}{M}\\sum _{m=1}^{M}a\\bigl(X_{n},z_{m}^{n} \\bigr)-\\bar{a}(X _{n}) \\Biggr\\vert ^{2p} \\\\& \\quad = \\mathbb{E} \\Biggl[ \\Biggl\\vert \\frac{1}{M}\\sum _{m=1}^{M}a\\bigl(X_{n},z_{m}^{n} \\bigr)- \\bar{a}(X_{n}) \\Biggr\\vert ^{2p-2} \\times \\Biggl\\vert \\frac{1}{M}\\sum_{m=1}^{M}a \\bigl(X _{n},z_{m}^{n}\\bigr)-\\bar{a}(X_{n}) \\Biggr\\vert ^{2} \\Biggr] \\\\& \\quad \\leqslant \\mathbb{E} \\Biggl[ \\Biggl(\\frac{1}{M}\\sum _{m=1}^{M} \\bigl\\vert a\\bigl(X_{n},z _{m}^{n}\\bigr)-\\bar{a}(X_{n}) \\bigr\\vert ^{2p-2} \\Biggr)\\times \\Biggl\\vert \\frac{1}{M} \\sum _{m=1}^{M}a\\bigl(X_{n},z_{m}^{n} \\bigr)-\\bar{a}(X_{n}) \\Biggr\\vert ^{2} \\Biggr] \\\\& \\quad \\leqslant C_{a,\\bar{a}}\\mathbb{E} \\Biggl\\vert \\frac{1}{M}\\sum _{m=1}^{M}a\\bigl(X _{n},z_{m}^{n} \\bigr)-\\bar{a}(X_{n}) \\Biggr\\vert ^{2}. \\end{aligned}\n(42)\n\nIt remains to estimate the mean-square term, and the proof for the term is similar to the method in (Lemma 2.6). We omit the details. Thus we obtain the desired result (41). □\n\nAfterwards, we establish the 2pth moments deviation between (9) and its numerical approximation (8).\n\nLemma 3.6\n\nLet $$z_{t}^{n}$$ be the family of processes defined by (9). For small enough δt and $$p>1$$, there exists a positive constant $$K_{8}$$ such that\n\n$$\\max_{\\stackrel{0\\leqslant n\\leqslant \\lfloor \\frac{T}{\\Delta t} \\rfloor }{0\\leqslant m\\leqslant M}} \\mathbb{E} \\bigl\\vert Y_{m}^{n}-z_{m}^{n} \\bigr\\vert ^{2p} \\leqslant K_{8}\\delta t^{p},$$\n(43)\n\nwhere $$K_{8}$$ is independent of $$(M, \\delta t)$$.\n\nProof\n\nDefine $$t_{\\delta t}=\\lfloor t/\\delta t\\rfloor \\delta t$$. Let $$Y_{t}^{n}$$ be the Euler approximation $$Y_{m}^{n}$$, interpolated continuously by\n\n$$Y_{t}^{n}= \\int _{0}^{t}f\\bigl(X_{n},Y_{s_{\\delta t}}^{n} \\bigr)\\,ds+ \\int _{0}^{t}g\\bigl(X _{n},Y_{s_{\\delta t}}^{n} \\bigr)\\,dW_{s}^{n}+ \\int _{0}^{t}h\\bigl(X_{n},Y_{s_{\\delta t}}^{n} \\bigr)\\,dN_{s}^{n}.$$\n(44)\n\nHence, we have\n\n\\begin{aligned} \\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\bigl\\vert Y_{t}^{n}-z_{t}^{n} \\bigr\\vert ^{2p} \\leqslant & C\\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\biggl\\vert \\int _{0}^{t}\\bigl[f\\bigl(X_{n},Y_{s_{ \\delta t}}^{n} \\bigr)-f\\bigl(X_{n},z_{s}^{n}\\bigr)\\bigr]\\,ds \\biggr\\vert ^{2p} \\\\ &{}+C\\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\biggl\\vert \\int _{0}^{t}\\bigl[g\\bigl(X_{n},Y_{s _{\\delta t}}^{n} \\bigr)-g\\bigl(X_{n},z_{s}^{n}\\bigr) \\bigr]\\,dW_{s}^{n} \\biggr\\vert ^{2p} \\\\ &{}+C\\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\biggl\\vert \\int _{0}^{t}\\bigl[h\\bigl(X_{n},Y_{s _{\\delta t}}^{n} \\bigr)-h\\bigl(X_{n},z_{s}^{n}\\bigr)\\bigr]\\,d \\tilde{N}_{s}^{n} \\biggr\\vert ^{2p} \\\\ &{}+C\\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\biggl\\vert \\int _{0}^{t}\\lambda _{2}\\bigl[h\\bigl(X _{n},Y_{s_{\\delta t}}^{n}\\bigr)-h\\bigl(X_{n},z_{s}^{n} \\bigr)\\bigr]\\,ds \\biggr\\vert ^{2p} \\end{aligned}\n(45)\n\nfor any $$0\\leqslant t_{1}\\leqslant T$$, where we have used the definition of Ñ. Now, we use Burkhölder’s inequality and Hölder’s inequality on the two martingale terms to get\n\n\\begin{aligned}& \\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\biggl\\vert \\int _{0}^{t}\\bigl[g\\bigl(X_{n},Y_{s_{ \\delta t}}^{n} \\bigr)-g\\bigl(X_{n},z_{s}^{n}\\bigr) \\bigr]\\,dW_{s}^{n} \\biggr\\vert ^{2p} \\\\& \\quad \\leqslant C \\mathbb{E} \\biggl[ \\int _{0}^{t_{1}} \\bigl\\vert g\\bigl(X_{n},Y_{s_{\\delta t}}^{n} \\bigr)-g\\bigl(X _{n},z_{s}^{n}\\bigr) \\bigr\\vert ^{2}\\,ds \\biggr]^{p} \\\\& \\quad \\leqslant C \\int _{0}^{t_{1}}\\mathbb{E} \\bigl\\vert g \\bigl(X_{n},Y_{s_{\\delta t}}^{n}\\bigr)-g\\bigl(X _{n},z_{s}^{n}\\bigr) \\bigr\\vert ^{2p}\\,ds \\end{aligned}\n(46)\n\nand\n\n\\begin{aligned}& \\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\biggl\\vert \\int _{0}^{t}\\bigl[h\\bigl(X_{n},Y_{s_{ \\delta t}}^{n} \\bigr)-h\\bigl(X_{n},z_{t}^{n}\\bigr)\\bigr]\\,d \\tilde{N}_{s}^{n} \\biggr\\vert ^{2p} \\\\& \\quad \\leqslant C\\mathbb{E} \\biggl[ \\int _{0}^{t_{1}}\\lambda _{2} \\bigl\\vert h \\bigl(X_{n},Y _{s_{\\delta t}}^{n}\\bigr)-h\\bigl(X_{n},z_{s}^{n} \\bigr) \\bigr\\vert ^{2}\\,ds \\biggr]^{p} \\\\& \\quad \\leqslant C \\int _{0}^{t_{1}}\\mathbb{E} \\bigl\\vert h \\bigl(X_{n},Y_{s_{\\delta t}}^{n}\\bigr)-h\\bigl(X _{n},z_{s}^{n}\\bigr) \\bigr\\vert ^{2p}\\,ds. \\end{aligned}\n(47)\n\nBy Hölder’s inequality, we have\n\n\\begin{aligned}& \\begin{gathered}[b] \\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\biggl\\vert \\int _{0}^{t}\\bigl[f\\bigl(X_{n},Y_{s_{ \\delta t}}^{n} \\bigr)-f\\bigl(X_{n},z_{s}^{n}\\bigr)\\bigr]\\,ds \\biggr\\vert ^{2p}\\\\ \\quad\\leqslant C \\int _{0} ^{t_{1}}\\mathbb{E} \\bigl\\vert f \\bigl(X_{n},Y_{s_{\\delta t}}^{n}\\bigr)-f\\bigl(X_{n},z_{s}^{n} \\bigr) \\bigr\\vert ^{2p}\\,ds, \\end{gathered} \\end{aligned}\n(48)\n\\begin{aligned}& \\begin{gathered}[b]\\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\biggl\\vert \\int _{0}^{t}\\lambda _{2}\\bigl[h \\bigl(X_{n},Y _{s_{\\delta t}}^{n}\\bigr)-h\\bigl(X_{n},z_{s}^{n} \\bigr)\\bigr]\\,ds \\biggr\\vert ^{2p}\\\\ \\quad \\leqslant C \\int _{0}^{t_{1}}\\mathbb{E} \\bigl\\vert h \\bigl(X_{n},Y_{s_{\\delta t}}^{n}\\bigr)-h\\bigl(X_{n},z_{s} ^{n}\\bigr) \\bigr\\vert ^{2p}\\,ds. \\end{gathered} \\end{aligned}\n(49)\n\nCombining (44)–(49) and applying the Lipschitz condition in (H1), we have\n\n\\begin{aligned} \\mathbb{E}\\sup_{t\\in [0,t_{1}]} \\bigl\\vert Y_{t}^{n}-z_{t}^{n} \\bigr\\vert ^{2p} \\leqslant & C \\int _{0}^{t_{1}}\\mathbb{E} \\bigl\\vert Y_{s_{\\delta t}}^{n}-z_{s}^{n} \\bigr\\vert ^{2p}\\,ds \\\\ \\leqslant & C \\int _{0}^{t_{1}}\\mathbb{E} \\bigl\\vert Y_{s_{\\delta t}}^{n}-Y_{s} ^{n} \\bigr\\vert ^{2p}\\,ds+C \\int _{0}^{t_{1}}\\mathbb{E}\\sup_{t\\in [0,s]} \\bigl\\vert Y_{t}^{n}-z _{t}^{n} \\bigr\\vert ^{2p}\\,ds \\\\ \\leqslant & C^{\\prime }\\delta t^{p}+C^{\\prime \\prime }\\mathbb{E} \\int _{0}^{t_{1}}\\sup_{t\\in [0,s]} \\bigl\\vert Y_{t}^{n}-z_{t}^{n} \\bigr\\vert ^{2p}\\,ds, \\end{aligned}\n\nwhich, with the help of continuous Gronwall’s inequality, yields the result. □\n\nLemma 3.7\n\nThere exists a positive constant $$K_{6}$$ such that, for all $$p>1$$ and $$0\\leqslant n\\leqslant \\lfloor \\frac{T}{\\Delta t}\\rfloor$$,\n\n$$\\mathbb{E} \\bigl\\vert \\bar{a}(X_{n})-A(X_{n}) \\bigr\\vert ^{2p}\\leqslant K_{6} \\biggl(\\frac{- \\log _{\\alpha }M\\delta t+1}{M\\delta t}+ \\frac{1}{M}+\\delta t^{p} \\biggr),$$\n(50)\n\nwhere $$K_{6}$$ is independent of $$(M, \\delta t)$$.\n\nProof\n\nBy definition, we have\n\n\\begin{aligned} \\mathbb{E} \\bigl\\vert \\bar{a}(X_{n})-A(X_{n}) \\bigr\\vert ^{2p} =&\\mathbb{E} \\Biggl\\vert \\int _{\\mathbb{R}^{m}} a(X_{n},y)\\mu ^{X_{n}}(dy)- \\frac{1}{M}\\sum_{m=1} ^{M}a \\bigl(X_{n},Y_{m}^{n}\\bigr) \\Biggr\\vert ^{2p} \\\\ \\leqslant & CI_{1}^{n}+CI_{2}^{n}, \\end{aligned}\n(51)\n\nwhere\n\n\\begin{aligned}& I_{1}^{n}:=\\mathbb{E} \\Biggl\\vert \\int _{\\mathbb{R}^{m}} a(X_{n},y)\\mu ^{X_{n}}(dy)- \\frac{1}{M}\\sum_{m=1}^{M}a \\bigl(X_{n},z_{m}^{n}\\bigr) \\Biggr\\vert ^{2p}, \\\\& I_{2}^{n}:=\\mathbb{E} \\Biggl\\vert \\frac{1}{M}\\sum _{m=1}^{M}a\\bigl(X_{n},z_{m}^{n} \\bigr)- \\frac{1}{M}\\sum_{m=1}^{M}a \\bigl(X_{n},Y_{m}^{n}\\bigr) \\Biggr\\vert ^{2p}, \\end{aligned}\n\nwhere $$z_{t}^{n}$$ is the family of processes defined by (9). $$I_{1}^{n}$$ is the difference between the ensemble average of $$a(X_{n},\\cdot )$$ with respect to the (exact) invariant measure of $$z_{t}^{n}$$ and its empirical average over M equi-distanced sample points. $$I_{2}^{n}$$ is the difference between empirical averages of $$a(X_{n},\\cdot )$$ over M equi-distanced sample points, once for the process $$z_{t}^{n}$$ and once for its Euler approximation $$Y_{m}^{n}$$.\n\nThe estimation of $$I_{1}^{n}$$ is given in Lemma 3.5,\n\n\\begin{aligned} I_{1}^{n} =&\\mathbb{E} \\Biggl\\vert \\int _{\\mathbb{R}^{m}} a(X_{n},y)\\mu ^{X _{n}}(dy)- \\frac{1}{M}\\sum_{m=1}^{M}a \\bigl(X_{n},z_{m}^{n}\\bigr) \\Biggr\\vert ^{2p} \\\\ \\leqslant &K_{5} \\biggl[\\frac{-\\log _{\\alpha }{M\\delta t+1}}{M\\delta t}+ \\frac{1}{M} \\biggr]. \\end{aligned}\n(52)\n\nThen we estimate $$I_{2}^{n}$$ by (H2)\n\n\\begin{aligned} I_{2}^{n} =&\\mathbb{E} \\Biggl\\vert \\frac{1}{M}\\sum _{m=1}^{M}a\\bigl(X_{n},z_{m} ^{n}\\bigr)-\\frac{1}{M}\\sum_{m=1}^{M}a \\bigl(X_{n},Y_{m}^{n}\\bigr) \\Biggr\\vert ^{2p} \\\\ \\leqslant &\\sum_{m=1}^{M}\\mathbb{E} \\bigl\\vert a\\bigl(X_{n},z_{m}^{n}\\bigr)-a \\bigl(X_{n},Y _{m}^{n}\\bigr) \\bigr\\vert ^{2p} \\\\ \\leqslant &C\\max_{m\\leqslant M}\\mathbb{E} \\bigl\\vert Y_{m}^{n}-z_{m}^{n} \\bigr\\vert ^{2p}. \\end{aligned}\n\nUsing Lemma 3.6, we obtain\n\n$$I_{2}^{n}\\leqslant CK_{8}\\delta t^{p}.$$\n(53)\n\nCombining (51)–(53), we get\n\n$$\\mathbb{E} \\bigl\\vert \\bar{a}(X_{n})-A(X_{n}) \\bigr\\vert ^{2p}\\leqslant K_{6} \\biggl[\\frac{- \\log _{\\alpha }{M\\delta t+1}}{M\\delta t}+ \\frac{1}{M}+\\delta t^{p} \\biggr],$$\n\nwhich is uniform in $$n\\leqslant T/\\Delta t$$. □\n\nFinally, we estimate the difference between the process $$X_{n}$$ and the auxiliary process $$\\bar{X}_{n}$$.\n\nLemma 3.8\n\nThere exist positive constants $$\\Delta t^{\\ast }$$ and $$K_{7}$$ such that, for $$p>1$$ and $$0<\\Delta t\\leqslant \\Delta t^{\\ast }$$,\n\n$$\\mathbb{E}\\sup_{0\\leqslant n\\leqslant \\lfloor T/\\Delta t\\rfloor } \\vert X _{n}- \\bar{X}_{n} \\vert ^{2p}\\leqslant K_{7} \\biggl( \\frac{-\\log _{\\alpha }{M \\delta t+1}}{M\\delta t}+\\frac{1}{M}+\\delta t^{p} \\biggr),$$\n(54)\n\nwhere $$K_{7}$$ is independent of $$(M, \\delta t, \\Delta t)$$.\n\nProof\n\nSet $$E_{n}=\\mathbb{E}\\sup_{l\\leqslant n}|\\bar{X}_{l}-X_{l}|^{2p}$$, then\n\n\\begin{aligned} E_{n} =&\\mathbb{E}\\sup_{l\\leqslant n} \\Biggl\\vert \\sum _{i=0}^{l-1}\\bigl[\\bar{a}( \\bar{X}_{i})-A(X_{i})\\bigr]\\Delta t+\\sum _{i=0}^{l-1}\\bigl[b(\\bar{X}_{i})-b(X _{i})\\bigr]\\Delta W_{i}+\\sum_{i=0}^{l-1} \\bigl[c(\\bar{X}_{i})-c(X_{i})\\bigr]\\Delta P _{i} \\Biggr\\vert ^{2p} \\\\ =&\\mathbb{E}\\sup_{l\\leqslant n} \\Biggl\\vert \\sum _{i=0}^{l-1}\\bigl[\\bar{a}( \\bar{X}_{i})-A(X_{i}) \\bigr]\\Delta t+\\sum_{i=0}^{l-1}\\bigl[b( \\bar{X}_{i})-b(X _{i})\\bigr]\\Delta W_{i}+\\sum _{i=0}^{l-1}\\bigl[c(\\bar{X}_{i})-c(X_{i}) \\bigr]\\Delta \\tilde{P}_{i} \\\\ &{}+\\lambda _{1}\\sum_{i=0}^{l-1} \\bigl[c(\\bar{X}_{i})-c(X_{i})\\bigr]\\Delta t \\Biggr\\vert ^{2p} \\\\ \\leqslant & C\\mathbb{E}\\sup_{l\\leqslant n} \\Biggl\\vert \\sum _{i=0}^{l-1}\\bigl[ \\bar{a}(\\bar{X}_{i})-A(X_{i}) \\bigr]\\Delta t \\Biggr\\vert ^{2p}+C\\mathbb{E} \\sup_{l\\leqslant n} \\Biggl\\vert \\sum_{i=0}^{l-1}\\bigl[b( \\bar{X}_{i})-b(X_{i})\\bigr] \\Delta W_{i} \\Biggr\\vert ^{2p} \\\\ &{}+C\\mathbb{E}\\sup_{l\\leqslant n} \\Biggl\\vert \\sum _{i=0}^{l-1}\\bigl[c(\\bar{X}_{i})-c(X _{i})\\bigr]\\Delta \\tilde{P}_{i} \\Biggr\\vert ^{2p}+C\\mathbb{E}\\sup_{l\\leqslant n} \\Biggl\\vert \\lambda _{1}\\sum_{i=0}^{l-1}\\bigl[c( \\bar{X}_{i})-c(X_{i})\\bigr]\\Delta t \\Biggr\\vert ^{2p}. \\end{aligned}\n\nWe split the first term on the right-hand side:\n\n\\begin{aligned} E_{n} \\leqslant & C\\mathbb{E}\\sup_{l\\leqslant n} \\Biggl\\vert \\sum_{i=0}^{l-1}\\bigl[ \\bar{a}( \\bar{X}_{i})-\\bar{a}(X_{i})\\bigr]\\Delta t \\Biggr\\vert ^{2p}+C\\mathbb{E} \\sup_{l\\leqslant n} \\Biggl\\vert \\sum _{i=0}^{l-1}\\bigl[\\bar{a}(X_{i})-A(X_{i}) \\bigr] \\Delta t \\Biggr\\vert ^{2p} \\\\ &{}+C\\mathbb{E}\\sup_{l\\leqslant n} \\Biggl\\vert \\sum _{i=0}^{l-1}\\bigl[b(\\bar{X}_{i})-b(X _{i})\\bigr]\\Delta W_{i} \\Biggr\\vert ^{2p}+C \\mathbb{E}\\sup_{l\\leqslant n} \\Biggl\\vert \\sum _{i=0}^{l-1}\\bigl[c(\\bar{X}_{i})-c(X_{i}) \\bigr]\\Delta \\tilde{P}_{i} \\Biggr\\vert ^{2p} \\\\ &{}+C\\mathbb{E}\\sup_{l\\leqslant n} \\Biggl\\vert \\lambda _{1}\\sum_{i=0}^{l-1}\\bigl[c( \\bar{X}_{i})-c(X_{i})\\bigr]\\Delta t \\Biggr\\vert ^{2p}. \\end{aligned}\n(55)\n\nThe first and fifth terms on the right-hand side are estimated using the Lipschitz continuity of ā and c:\n\n\\begin{aligned} \\mathbb{E}\\sup_{l\\leqslant n} \\Biggl\\vert \\sum _{i=0}^{l-1}\\bigl[\\bar{a}(\\bar{X} _{i})- \\bar{a}(X_{i})\\bigr]\\Delta t \\Biggr\\vert ^{2p} \\leqslant & C\\mathbb{E} \\sup_{l\\leqslant n}\\sum_{i=0}^{l-1} \\bigl\\vert \\bar{a}(\\bar{X}_{i})-\\bar{a}(X _{i}) \\bigr\\vert ^{2p}\\Delta t^{2p} \\\\ \\leqslant & C\\sum_{i=0}^{n-1}\\mathbb{E} \\vert \\bar{X}_{i}-X_{i} \\vert ^{2p} \\Delta t \\leqslant C\\sum_{i=0}^{n-1}E_{i} \\Delta t \\end{aligned}\n(56)\n\nand\n\n\\begin{aligned} \\mathbb{E}\\sup_{l\\leqslant n} \\Biggl\\vert \\lambda _{1}\\sum _{i=0}^{l-1}\\bigl[c( \\bar{X}_{i})-c(X_{i}) \\bigr]\\Delta t \\Biggr\\vert ^{2p} \\leqslant & C\\mathbb{E} \\sup _{l\\leqslant n}\\sum_{i=0}^{l-1} \\bigl\\vert c(\\bar{X}_{i})-c(X_{i}) \\bigr\\vert ^{2p} \\Delta t^{2p} \\\\ \\leqslant & C\\sum_{i=0}^{n-1}\\mathbb{E} \\vert \\bar{X}_{i}-X_{i} \\vert ^{2p} \\Delta t \\leqslant C\\sum_{i=0}^{n-1}E_{i} \\Delta t. \\end{aligned}\n(57)\n\nNow using Burkhölder’s inequality and (H2) on the two martingale terms, we get\n\n\\begin{aligned} \\mathbb{E}\\sup_{l\\leqslant n} \\Biggl\\vert \\sum _{i=0}^{l-1}\\bigl[c(\\bar{X}_{i})-c(X _{i})\\bigr]\\Delta \\tilde{P}_{i} \\Biggr\\vert ^{2p} \\leqslant & C\\mathbb{E} \\Biggl\\vert \\sum _{i=0}^{n-1}\\lambda _{1}\\Delta t\\bigl[c( \\bar{X}_{i})-c(X_{i})\\bigr]^{2} \\Biggr\\vert ^{p} \\\\ \\leqslant & C\\sum_{i=0}^{n-1}\\mathbb{E} \\vert \\bar{X}_{i}-X_{i} \\vert ^{2p} \\Delta t \\leqslant C\\sum_{i=0}^{n-1}E_{i} \\Delta t \\end{aligned}\n(58)\n\nand\n\n\\begin{aligned} \\mathbb{E}\\sup_{l\\leqslant n} \\Biggl\\vert \\sum _{i=0}^{l-1}\\bigl[b(\\bar{X}_{i})-b(X_{i})\\bigr]\\Delta W_{i} \\Biggr\\vert ^{2p} \\leqslant & C\\mathbb{E} \\Biggl\\vert \\sum _{i=0}^{n-1}\\Delta t\\bigl[b(\\bar{X}_{i})-b(X_{i}) \\bigr]^{2} \\Biggr\\vert ^{p} \\\\ \\leqslant & C\\sum_{i=0}^{n-1}\\mathbb{E} \\vert \\bar{X}_{i}-X_{i} \\vert ^{2p} \\Delta t \\leqslant C\\sum_{i=0}^{n-1}E_{i} \\Delta t. \\end{aligned}\n(59)\n\nThe second term on the right-hand side can be bounded as follows:\n\n$$\\mathbb{E}\\sup_{l\\leqslant n} \\Biggl\\vert \\sum _{i=0}^{l-1}\\bigl[\\bar{a}(X_{i})-A(X _{i})\\bigr]\\Delta t \\Biggr\\vert ^{2p}\\leqslant C\\max _{i< n}\\mathbb{E} \\bigl\\vert \\bar{a}(X _{i})-A(X_{i}) \\bigr\\vert ^{2p}.$$\n(60)\n\nCombining (55)–(60) with Lemma 3.7, we obtain a discrete linear integral inequality\n\n$$E_{n}\\leqslant C\\sum_{i=0}^{n-1}E_{i} \\Delta t+CK_{6} \\biggl(\\frac{- \\log _{\\alpha }M\\delta t+1}{M\\delta t}+\\frac{1}{M}+\\delta t^{p} \\biggr),$$\n\nwith the initial condition $$E_{0}=0$$. It follows that, for sufficiently small Δt,\n\n\\begin{aligned} E_{n} \\leqslant & CK_{6} \\biggl(\\frac{-\\log _{\\alpha }M\\delta t+1}{M \\delta t}+ \\frac{1}{M}+\\delta t^{p} \\biggr)\\{1+C\\Delta t\\}^{n} \\\\ \\leqslant & CK_{6} \\biggl(\\frac{-\\log _{\\alpha }M\\delta t+1}{M\\delta t}+ \\frac{1}{M}+ \\delta t^{p} \\biggr)e^{CT}. \\end{aligned}\n\nThis estimate proves the lemma with $$K_{7}=CK_{6}e^{CT}$$. □\n\nMain result\n\nNow we can state and prove our main theorem readily.\n\nTheorem 4.1\n\nSuppose that conditions (H1)–(H5) hold, then there exist positive constants $$K^{\\prime }$$ and $$K^{\\prime \\prime }$$ such that\n\n$$\\mathbb{E}\\sup_{0\\leqslant n\\leqslant \\lfloor T/\\Delta t\\rfloor } \\bigl\\vert X _{n}- \\bar{x}(t_{n}) \\bigr\\vert ^{2p}\\leqslant K^{\\prime } \\Delta t^{p}+K^{\\prime \\prime } \\biggl(\\frac{-\\log _{\\alpha }{M\\delta t+1}}{M\\delta t}+ \\frac{1}{M}+\\delta t^{p} \\biggr),$$\n\nwhere $$K^{\\prime }$$, $$K^{\\prime \\prime }$$ are independent of $$(\\Delta t, \\delta t, M)$$.\n\nProof\n\nWe begin our proof with subtracting and adding the term $$\\bar{X}_{n}$$:\n\n\\begin{aligned} \\mathbb{E}\\sup_{0\\leqslant n\\leqslant \\lfloor T/\\Delta t\\rfloor } \\bigl\\vert X _{n}- \\bar{x}(t_{n}) \\bigr\\vert ^{2p} \\leqslant &C\\mathbb{E} \\sup _{0\\leqslant n\\leqslant \\lfloor T/\\Delta t\\rfloor } \\vert X_{n}-\\bar{X} _{n} \\vert ^{2p}+C\\mathbb{E} \\sup_{0\\leqslant n\\leqslant \\lfloor T/\\Delta t\\rfloor } \\bigl\\vert \\bar{X}_{n}- \\bar{x}(t_{n}) \\bigr\\vert ^{2p} \\\\ \\leqslant &K^{\\prime }\\Delta t^{p}+K^{\\prime \\prime } \\biggl( \\frac{- \\log _{\\alpha }{M\\delta t+1}}{M\\delta t}+\\frac{1}{M}+\\delta t^{p} \\biggr), \\end{aligned}\n\nwhere we have used the result of Lemmas 2.4 and 3.8. □\n\nConclusions\n\nIn this paper, the $$L^{p}$$ ($$p>2$$)-strong convergence of the multiscale integration scheme has been studied for the two-time-scale jump-diffusion systems. By Lemmas 2.4 and 3.8, we obtained our desired main result. The results in [2, 9] are extended in this paper. First, we provide a numerical method for the $$L^{p}$$ ($$p>2$$) averaging principle in ; second, in , the authors only studied $$L^{2}$$ convergence of the multiscale integration scheme, and we extended the result into the $$L^{p}$$ ($$p>2$$) case.\n\nReferences\n\n1. 1.\n\nGivon, D.: Strong convergence rate for two-time-scale jump-diffusion stochastic differential systems. Multiscale Model. Simul. 6(2), 577–594 (2007)\n\n2. 2.\n\nXu, J., Miao, Y.: $$L^{p}$$ ($$p>2$$)-strong convergence of an averaging principle for two-time-scales jump-diffusion stochastic differential equations. Nonlinear Anal. Hybrid Syst. 18, 33–47 (2015)\n\n3. 3.\n\nKhasminskii, R.Z.: On the principle of averaging the Itô’s stochastic differential equations. Kybernetika 4, 260–279 (1968) (in Russian)\n\n4. 4.\n\nE, W., Liu, D., Vanden-Eijnden, E.: Analysis of multiscale methods for stochastic differential equations. Commun. Pure Appl. Math. 58(11), 1544–1585 (2005)\n\n5. 5.\n\nLiu, D.: Strong convergence of principle of averaging for multiscale stochastic dynamical systems. Commun. Math. Sci. 8(4), 999–1020 (2010)\n\n6. 6.\n\nLi, Z., Yan, L.: Stochastic averaging for two-time-scale stochastic partial differential equations with fractional Brownian motion. Nonlinear Anal. Hybrid Syst. 31, 317–333 (2019)\n\n7. 7.\n\nVanden-Eijnden, E.: Numerical techniques for multi-scale dynamical systems with stochastic effects. Commun. Math. Sci. 1(2), 385–391 (2003)\n\n8. 8.\n\nGivon, D., Kevrekidis, I.G., Kupferman, R.: Strong convergence of projective integration schemes for singularly perturbed stochastic differential systems. Commun. Math. Sci. 4(4), 707–729 (2006)\n\n9. 9.\n\nGivon, D., Kevrekidis, I.G.: Multiscale integration schemes for jump-diffusion systems. Multiscale Model. Simul. 7(2), 495–516 (2008)\n\n10. 10.\n\nLiu, D.: Analysis of multiscale methods for stochastic dynamical systems with multiple time scales. Multiscale Model. Simul. 8(3), 944–964 (2010)\n\n11. 11.\n\nCerrai, S., Freidlin, M.I.: Averaging principle for a class of stochastic reaction-diffusion equations. Probab. Theory Relat. Fields 144(1–2), 147–177 (2009)\n\n12. 12.\n\nProtter, P.: Stochastic Integration and Differential Equations, 2nd edn. Springer, Berlin (2004)\n\nAcknowledgements\n\nThe first author is very grateful to Associate Professor Jie Xu for his encouragement and useful discussions.\n\nNot applicable.\n\nFunding\n\nThe author acknowledges the support provided by NSFs of China No. U1504620 and Youth Science Foundation of Henan Normal University Grant No. 2014QK02.\n\nAuthor information\n\nThe authors declare that the study was realized in collaboration with the same responsibility. All authors read and approved the final manuscript.\n\nCorrespondence to Jiaping Wen.\n\nEthics declarations\n\nNot applicable.\n\nCompeting interests\n\nThe authors declare that they have no competing interests.\n\nConsent for publication\n\nNot applicable.",
null,
""
]
| [
null,
"https://advancesindifferenceequations.springeropen.com/track/article/10.1186/s13662-019-1956-0",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7985323,"math_prob":1.0000058,"size":25025,"snap":"2019-43-2019-47","text_gpt3_token_len":7942,"char_repetition_ratio":0.124135725,"word_repetition_ratio":0.08164983,"special_character_ratio":0.34205794,"punctuation_ratio":0.12649305,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000088,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T22:32:56Z\",\"WARC-Record-ID\":\"<urn:uuid:77b7c65c-9356-4883-8de4-0beb2ea5a32d>\",\"Content-Length\":\"235735\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:826c3a3e-e9e4-4a14-9fec-8d2c14e40ec3>\",\"WARC-Concurrent-To\":\"<urn:uuid:c38a6f9c-3917-4e19-946a-4996c9621a6e>\",\"WARC-IP-Address\":\"151.101.248.95\",\"WARC-Target-URI\":\"https://advancesindifferenceequations.springeropen.com/articles/10.1186/s13662-019-1956-0\",\"WARC-Payload-Digest\":\"sha1:JI2UVHX2K6JHH7LD2A4UQ64V3G43UXB4\",\"WARC-Block-Digest\":\"sha1:2LUGQ7XN6NG43HEQ64XYKIWE4KA634HC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987795253.70_warc_CC-MAIN-20191021221245-20191022004745-00315.warc.gz\"}"} |
https://visualstudiomagazine.com/articles/2021/05/20/pul-pytorch.aspx | [
"#",
null,
"",
null,
"",
null,
"The Data Science Lab\n\n### Positive and Unlabeled Learning (PUL) Using PyTorch\n\nDr. James McCaffrey of Microsoft Research provides a code-driven tutorial on PUL problems, which often occur with security or medical data in cases like training a machine learning model to predict if a hospital patient has a disease or not.",
null,
"A positive and unlabeled learning (PUL) problem occurs when a machine learning set of training data has only a few positive labeled items and many unlabeled items. PUL problems often occur with security or medical data. For example, suppose you want to train a machine learning model to predict if a hospital patient has a disease or not, based on predictor variables such as age, blood pressure, and so on. The training data might have a few dozen instances of items that are positive (class 1 = patient has disease) and many hundreds or thousands of instances of data items that are unlabeled and so could be either class 1 = patient has disease, or class 0 = patient does not have disease.\n\nThe goal of PUL is to use the information contained in the dataset to guess the true labels of the unlabeled data items. After the class labels of some of the unlabeled items have been guessed, the resulting labeled dataset can be used to train a binary classification model using any standard machine learning technique, such as k-nearest neighbors classification, neural binary classification, logistic regression classification, naive Bayes classification, and so on.\n\nA good way to see where this article is headed is to take a look at the screenshot of a demo program in Figure 1. The demo uses a 200-item dataset of employee information where the ultimate goal is to classify an employee as an introvert (class 0) or an extrovert (class 1). The dataset is positive and unlabeled: there are just 20 positive (extrovert) employees but the remaining 180 employees are unlabeled and could be either introvert or extrovert.\n\nPUL is challenging and there are several techniques to tackle such problems. The demo program repeatedly (eight times) trains a helper binary classifier using the 20 positive employee data items and 20 randomly selected unlabeled items which are temporarily treated as negative. During the eight model training sessions, information about the unused, unlabeled employee data items is accumulated, in a way that will be explained shortly.\n\nAfter the eight models have been trained and analyzed, the accumulated information is used to guess the true labels of some of the 180 unlabeled employees. Based on two user-supplied threshold values of 0.30 and 0.90, the PUL system believes it has enough evidence to make intelligent guesses for 57 of the 180 unlabeled employees (32 percent of them).\n\nThe true class labels for all 200 employees is known by the demo system. Of the 57 class label guesses, 49 were correct and 8 were incorrect (86 percent accuracy). The demo does not continue by using the now-labeled 97 employee data items (the original 20 positive labeled plus the 57 newly labeled) to create a binary classifier, but that would be the next step in a non-demo scenario.",
null,
"[Click on image for larger view.] Figure 1: Positive and Unlabeled Learning (PUL) in Action\n\nThis article assumes you have an intermediate or better familiarity with a C-family programming language, preferably Python, and a basic familiarity with the PyTorch code library. The source code for the demo program is a bit too long to present in its entirety in this article, but the complete code and training data are available in the accompanying file download. (The PUL data is embedded in commented-form into the source code).\n\nThis article focuses on explaining the key ideas you need to understand in order to analyze and process PUL data to suit your problem scenarios. All normal error checking code has been omitted to keep the main ideas as clear as possible.\n\nTo run the demo program, you must have Python and PyTorch installed on your machine. The demo programs were developed on Windows 10 using the Anaconda 2020.02 64-bit distribution (which contains Python 3.7.6) and PyTorch version 1.8.0 for CPU installed via pip. Installation is not trivial. You can find detailed step-by-step installation instructions for this configuration in my blog post.\n\nThe PUL Employee Data\nThe data file of employee information has 200 tab-delimited items. The data looks like:\n\n```-2 0.39 0 0 1 0.5120 0 1 0\n1 0.24 1 0 0 0.2950 0 0 1\n-2 0.36 1 0 0 0.4450 0 1 0\n-2 0.50 0 1 0 0.5650 0 1 0\n-2 0.19 0 0 1 0.3270 1 0 0\n. . .```\n\nThe first column is introvert or extrovert, encoded as 1 = positive = extrovert (20 items), and -2 = unlabeled (180 items). The goal of PUL is to intelligently guess 0 = negative, or 1 = positive, for as many of the unlabeled data items as possible.\n\nThe other columns in the dataset are employee age (normalized by dividing by 100), city (one of three, one-hot encoded), annual income (normalized by dividing by \\$100,000), and job-type (one of three, one-hot encoded).\n\nThe dataset was artificially constructed so that even numbered items , , , etc. are actually class 0 = negative, and odd numbered items , , , etc. are actually class 1. This allows the PUL system to measure its accuracy. In a non-demo PUL scenario, you usually won't know the true class labels.\n\nThe PUL Algorithm\nThe technique presented in this article is based on a 2013 research paper by F. Mordelet and J.P. Vert, titled \"A Bagging SVM to Learn from Positive and Unlabeled Examples\". That paper uses a SVM (support vector machine) binary classifier to analyze unlabeled data. This article uses a neural binary classifier instead.\n\nIn pseudo-code:\n\n```create a 40-item train dataset with all 20 positive\nand 20 randomly selected unlabeled items that\nare temporarily treated as negative\n\nloop several times\ntrain a binary classifier using the 40-item train data\nuse trained model to score the 160 unused unlabeled\ndata items\naccumulate the p-score for each unused unlabeled item\n\ngenerate a new train dataset with the 20 positive\nand 20 different unlabeled items treated as negative\nend-loop\n\nfor-each of the 180 unlabeled items\ncompute the average p-value\n\nif avg p-value < lo threshold\nguess its label as negative\nelse-if avg p-value > hi threshold\nguess its label as positive\nelse\ninsufficient evidence to make a guess\nend-if\nend-for```\n\nEach time through the training loop, the binary classifier will make fairly poor predictions but the average prediction for all iterations will likely be good. Recall that a neural binary classifier will predict by generating a p-value (pseudo-probability) between 0.0 and 1.0 where a p-value less than 0.5 indicates class 0 = negative, and a p-value greater than 0.5 indicates class 1 = positive. Suppose that an unlabeled data item is not used as part of the training data, three times. And suppose that the trained model scores that unlabeled data item as 0.65, 0.22, 0.58 which mean that the unlabeled item was predicted to be class 1 = positive twice, and class 0 = negative once. The average p-value for the item is (0.65 + 0.32 + 0.78) / 3 = 0.58. Because the average p-value of the unlabeled item is greater than 0.5, it is most likely class 1 = positive.\n\nIf you use a decision threshold of 0.5, every unlabeled data item will be guessed as positive or negative. However, many of the guesses where the average p-value is close to 0.5 will likely be incorrect. An alternative approach taken by the demo is to only guess labels where the average p-value is below a low threshold (0.3) or above a high threshold (0.90). Items with average p-values between 0.30 and 0.90 are judged to be ambiguous so no label is guessed.\n\nThe low and high threshold values are system hyperparameters that must be determined by the nature of your problem scenario. Adjusting the threshold values towards 0.5 will increase the number of guesses for the unlabeled data items, but probably decrease the accuracy of those guesses.\n\nGenerating a Dynamic Dataset\nSomewhat unexpectedly, the most difficult part of a PUL system is wrangling the data to generate dynamic (changing) training datasets. The challenge is to be able to create an initial training dataset with the 20 positive items and 20 randomly selected unlabeled items like so:\n\n```train_file = \".\\\\Data\\\\employee_pul_200.txt\"\ntrain_ds = EmployeeDataset(train_file, 20, 180)```\n\nAnd then inside a loop, be able to reinitialize the training dataset with the same 20 positive items but 20 different unlabeled items:\n\n`train_ds.reinit()`\n\nThe dynamic dataset architecture used by the demo program is illustrated in Figure 2. The diagram shows a 14-item dummy PUL dataset with four positive items and ten unlabeled items rather than the 200-item demo dataset. The source PUL data is read into memory as four Python lists of arrays. The first two lists hold the predictors and labels for the four positive items. Note that the positive labels don't need to be explicitly stored because they're all 1, but explicit storage make the code easier to work with. The second two lists hold the predictors and labels for the more numerous unlabeled items, where the unlabeled classes are temporarily all marked as class 0 = negative.",
null,
"[Click on image for larger view.] Figure 2: A Dynamic Virtual Dataset for PUL\n\nEach dynamic dataset needs all four of the positive items and four randomly selected unlabeled-marked-as-negative items. It would be inefficient to duplicate data, so all that's needed is information about which rows of the data in memory belong to the positive items and which rows belong to the unlabeled-negative items. And because all positive items are always used in each dynamic dataset, the only information needed is the four rows of unlabeled-negative that are in the dynamic dataset (, , , ), and which six rows are not part of the dynamic dataset (, , , , , ).\n\nThe virtual dynamic dataset has size 8. To fetch a specified item from it, if the requested index is between and the item can be accessed directly because it must be a positive item. For example, to get the predictor values for virtual item , is fetched from memory giving (7.0, 8.0) from Figure 2.\n\nIf the requested index is greater than then the requested index must be mapped to its location in memory. For example, to get virtual item , the 6 is mapped by subtracting 4 (number of unlabeled-negative items), giving . That value is used to look into the p array that stores memory locations, giving from Figure 2. Item is looked up in memory giving predictor values (11.0, 12.0).\n\nThe key takeaway is that PUL systems are not trivial. You must spend a significant amount of engineering time and effort to deal with data wrangling.\n\nThe demo code that implements a dynamic virtual dataset for the employee PUL data is presented in Listing 1. As is often the case, data wrangling code is tedious and tricky.\n\nListing 1: Defining a Dynamic Dataset for PUL Data\n\n```class EmployeeDataset(T.utils.data.Dataset):\n# label age city income job-type\n# 1 0.39 1 0 0 0.5432 1 0 0\n# -2 0.29 0 0 1 0.4985 0 1 0 (unlabeled)\n# . . .\n# [2 3 4] [6 7 8]\n\ndef __init__(self, fn, tot_num_pos, tot_num_unl):\nself.rnd = np.random.RandomState(1)\n\nself.tot_num_pos = tot_num_pos # number positives\nself.tot_num_unl = tot_num_unl # num unlabeleds\n\npos_x_lst = []; pos_y_lst = [] # lists of vectors\nunl_x_lst = []; unl_y_lst = []\n\nln = 0 # line number (not including comments)\nj = 0 # counter for unlabeleds\n\nself.unl_idx_to_line_num = dict()\n# key = idx of an unlabeled item in memory,\n# val = corresponding line number in src data file\n\nfin = open(fn, \"r\") # four lists of arrays\nfor line in fin:\nline = line.strip()\nif line.startswith(\"#\"): continue\n\narr = np.fromstring(line, sep=\"\\t\", \\\ndtype=np.float32)\nif arr == 1:\npos_x = arr[[1,2,3,4,5,6,7,8]]\npos_y = 1 # always 1\npos_x_lst.append(pos_x)\npos_y_lst.append(pos_y)\nelif arr == -2: # unlabeled\nunl_x = arr[[1,2,3,4,5,6,7,8]]\nunl_y = 0 # treat unlabeleds as negative\nunl_x_lst.append(unl_x)\nunl_y_lst.append(unl_y)\nself.unl_idx_to_line_num[j] = ln\nj +=1\nelse:\nprint(\"Fatal: unknown label in file\")\n\nln += 1 # only data lines\n\nfin.close()\n\n# data actual storage in 4 tensor-arrays\nself.train_x_pos = T.tensor(pos_x_lst, \\\ndtype=T.float32) # predictors for positives\nself.train_y_pos = T.tensor(pos_y_lst, \\\ndtype=T.float32).reshape(-1,1) # positives (1s)\nself.train_x_unl = T.tensor(unl_x_lst, \\\ndtype=T.float32) # predictors for unlabels\nself.train_y_unl = T.tensor(unl_y_lst, \\\ndtype=T.float32).reshape(-1,1)\n\nself.num_pos_unl = 2 * tot_num_pos\n\n# indices of active and inactive unlabeled items\nall_unl_indices = np.arange(tot_num_unl) # 180\nself.rnd.shuffle(all_unl_indices)\nself.p = all_unl_indices[0 : tot_num_pos] # 20\nself.q = all_unl_indices[tot_num_pos : tot_num_unl]\n\ndef __len__(self):\nreturn self.num_pos_unl # virtual ds size\n\ndef __getitem__(self, idx):\nif idx < self.tot_num_pos: # small: fetch directly\nreturn (self.train_x_pos[idx], self.train_y_pos[idx])\nelse: # large index = an unlabeled = map index\nofset = idx - self.tot_num_pos\nii = self.p[ofset] # index of active unlabeled item\nreturn (self.train_x_unl[ii], self.train_y_unl[ii])\n\ndef reinit(self): # get (20) different unlabeled items\nall_unl_indices = np.arange(self.tot_num_unl)\nself.rnd.shuffle(all_unl_indices)\nself.p = all_unl_indices[0 : self.tot_num_pos]\nself.q = all_unl_indices[self.tot_num_pos : \\\nself.tot_num_unl]```\n\nThe code presented in Listing 1 is specific to the employee PUL data. However, by analyzing it carefully, you will be able to adapt the code to meet your own PUL scenarios.\n\nOverall Program Structure\nThe overall program structure of the demo is presented in Listing 2. The code is moderately long and complex, but PUL problem scenarios are difficult.\n\nListing 2: Overall Program Structure\n\n```import torch as T\n# employee_pul.py\n# PyTorch 1.8.0-CPU Anaconda3-2020.02 Python 3.7.6\n# Windows 10\n\nimport numpy as np\nimport torch as T\ndevice = T.device(\"cpu\") # apply to Tensor or Module\n\n# ----------------------------------------------------------\n\nclass EmployeeDataset(T.utils.data.Dataset):\n# see Listing 1\n\n# ----------------------------------------------------------\n\nclass Net(T.nn.Module):\n# see Listing 3\n\n# ----------------------------------------------------------\n\ndef train(net, ds, bs, me, le, lr, verbose):\n# see Listing 4\n\n# ----------------------------------------------------------\n\ndef truth_of_line(ln):\n# actual label for 0-based line number of PUL file\n# in non-demo, you'd have to compute\nif ln % 2 == 0: return 0 # files set up this way\nelse: return 1\n\n# ----------------------------------------------------------\n\ndef main():\n# 0. get started\nprint(\"\\nEmployee PUL using PyTorch \\n\")\nT.manual_seed(1)\nnp.random.seed(1)\n\n# 1. create data objects\nprint(\"Creating dynamic Employee train Dataset \")\nprint(\"Dataset has 20 positive and 180 unlabeled \")\ntrain_file = \".\\\\Data\\\\employee_pul_200.txt\"\ntrain_ds = EmployeeDataset(train_file, 20, 180)\n\n# 2. create neural network\nprint(\"\\nCreating 8-(10-10)-1 binary NN classifier \")\nnet = Net().to(device)\n\n# 3. prepare for training multiple times\nprint(\"\\nSetting training parameters \\n\")\nbat_size = 10\nlrn_rate = 0.01\nmax_epochs = 800\nep_log_interval = 100\n\nprint(\"batch size = \" + str(bat_size))\nprint(\"lrn_rate = %0.2f \" % lrn_rate)\nprint(\"max_epochs = \" + str(max_epochs))\nprint(\"loss function = BCELoss() \")\nprint(\"optimizer = SGD \\n\")\n\n# track number times each inactive unlabeled is evaluated\n# accumulate sum of p-values from each evaluation\neval_counts = np.zeros(180, dtype=np.int64)\neval_sums = np.zeros(180, dtype=np.float32)\n\n# ----------------------------------------------------------\n\n# 4. accumulate p-values for inactive items after session\nnum_trials = 8 # number times to train on a subset\nfor trial in range(num_trials):\nprint(\"Training model \" + str(trial) + \" of \" + \\\nstr(num_trials), end=\"\")\ntrain(net, train_ds, bat_size, max_epochs, \\\nep_log_interval, lrn_rate, verbose=False)\n\nprint(\" Done. Scoring inactive unlabeled items \")\nnet.eval()\nfor i in train_ds.q: # idxs of inactive unlabeleds\nx = train_ds.train_x_unl[i] # predictors\np = net(x) # between 0.0 and 1.0\neval_counts[i] += 1\neval_sums[i] += p.item()\n\ntrain_ds.reinit() # get different unlabeleds\n\n# ----------------------------------------------------------\n\n# 5. guess 0 or 1 labels for unlabeled items\nprint(\"\\nGuessing 0 or 1 for unlabeled items \")\n\nlo = 0.30; hi = 0.90 # tune for accuracy vs. quantity\n\n# to label an unknown as positive you need a higher\n# p-value threshold criterion.\n\nprint(\"pseudo-prob thresholds: %0.2f %0.2f \" % (lo, hi))\n\nnum_correct = 0; num_wrong = 0\nfor i in range(180): # process each unlabeled data item\nln = train_ds.unl_idx_to_line_num[i] # line num in PUL file\n\nif eval_counts[i] == 0:\nprint(\"Fatal: Never evaluated this unlabeled item \")\ninput()\nelse:\navg_p = (eval_sums[i] * 1.0) / eval_counts[i]\n\nif avg_p >= lo and avg_p <= hi: # too close to 0.5\npass\nelif avg_p < lo and truth_of_line(ln) == 0: # even class 0\nnum_correct += 1\nelif avg_p > hi and truth_of_line(ln) == 1: # odd class 1\nnum_correct += 1\nelse:\nnum_wrong += 1\n\nprint(\"\\n---------------\\n\")\nnum_guessed = num_correct + num_wrong\nprint(\"num labels guessed = \" + str(num_guessed))\nprint(\"num correct guessed labels = \" + str(num_correct))\nprint(\"num wrong guessed labels = \" + str(num_wrong))\nacc = (1.0 * num_correct) / (num_correct + num_wrong)\npct = (1.0 * (num_correct + num_wrong)) / 180\n\nprint(\"pct of unlabeled items guessed = %0.4f \" % pct)\nprint(\"accuracy of guessed items = %0.4f \" % acc)\n\nprint(\"\\nEnd PUL demo \")\n\n# ----------------------------------------------------------\n\nif __name__ == \"__main__\":\nmain()```\n\nThe demo begins by setting system random number generators so that program runs are reproducible:\n\n```def main():\nT.manual_seed(1)\nnp.random.seed(1)\n. . .```\n\nNext, an initial version of the employee dataset is created, and the binary classifier is instantiated:\n\n``` train_file = \".\\\\Data\\\\employee_pul_200.txt\"\ntrain_ds = EmployeeDataset(train_file, 20, 180)\nnet = Net().to(device)```\n\nTraining is prepared using these statements:\n\n``` bat_size = 10\nlrn_rate = 0.01\nmax_epochs = 800\nep_log_interval = 100\neval_counts = np.zeros(180, dtype=np.int64)\neval_sums = np.zeros(180, dtype=np.float32)\nnum_trials = 8 # number times to train on a subset```\n\nThe eval_counts array holds the number of times each of the 180 unlabeled items is not used as part of the training data and is therefore scored by the current binary classification model. The eval_sums array holds the accumulated p-values for each unlabeled item (when it's not used as part of the training data and is scored).\n\nThe main loop is:\n\n``` for trial in range(num_trials):\ntrain(net, train_ds, bat_size, max_epochs, \\\nep_log_interval, lrn_rate, verbose=False)\n\nprint(\" Done. Scoring inactive unlabeled items \")\nnet.eval()\nfor i in train_ds.q: # idxs of inactive unlabeleds\nx = train_ds.train_x_unl[i] # predictors\np = net(x) # between 0.0 and 1.0\neval_counts[i] += 1\neval_sums[i] += p.item()\n\ntrain_ds.reinit() # get different unlabeleds```\n\nAfter a binary classifier is trained using a dynamic dataset that has all 20 positive items and 20 randomly selected unlabeled data items that have been marked as negative, the code walks through the unlabeled items that were not part of the training data. Each of these unused unlabeled items is scored.\n\nAfter training has occurred several times (8 in the demo), the accumulated p-values are analyzed to guess the class of each unlabeled item:\n\n``` for i in range(180): # process each unlabeled data item\nln = train_ds.unl_idx_to_line_num[i] # line num\n\nif eval_counts[i] == 0:\nprint(\"Fatal: Never evaluated this unlabeled item \")\ninput()\nelse:\navg_p = (eval_sums[i] * 1.0) / eval_counts[i]\n\nif avg_p >= lo and avg_p <= hi: # too close to 0.5\npass\nelif avg_p < lo and truth_of_line(ln) == 0: # class 0\nnum_correct += 1\nelif avg_p > hi and truth_of_line(ln) == 1: # class 1\nnum_correct += 1\nelse:\nnum_wrong += 1```\n\nIf an eval_count value for an unlabeled item is 0, that means the item was never left out of the training data, which means it was part of the randomly selected unlabeled items in the training dataset on every training iteration. This is statistically nearly impossible, and almost certainly indicates a logic error.\n\nThe Binary Classifier\nThe definition of the neural binary classifier used by the PUL system is presented in Listing 4.\n\nListing 4: Neural Binary Classifier Definition\n\n```class Net(T.nn.Module):\n# binary classifier for Employee data\n\ndef __init__(self):\nsuper(Net, self).__init__()\nself.hid1 = T.nn.Linear(8, 10) # 8-(10-10)-1\nself.hid2 = T.nn.Linear(10, 10)\nself.oupt = T.nn.Linear(10, 1)\n\nT.nn.init.xavier_uniform_(self.hid1.weight)\nT.nn.init.zeros_(self.hid1.bias)\nT.nn.init.xavier_uniform_(self.hid2.weight)\nT.nn.init.zeros_(self.hid2.bias)\nT.nn.init.xavier_uniform_(self.oupt.weight)\nT.nn.init.zeros_(self.oupt.bias)\n\ndef forward(self, x):\nz = T.tanh(self.hid1(x))\nz = T.tanh(self.hid2(z))\nz = T.sigmoid(self.oupt(z))\nreturn z```\n\nThe classifier accepts 8 input values -- the predictors of age, city (3), income, and job-type (3) -- and emits a single value between 0.0 and 1.0 because sigmoid() activation is used on the output node. The classifier has two hidden layers, each with 10 nodes, and with tanh() activation. The number of hidden layers, the number of nodes in each layer, and the activation function for each layer, are all hyperparameters that must be determined by trial and error guided by experience.\n\nThe classifier uses explicit weight and bias initialization rather than allowing the PyTorch system to supply default initialization values. Neural weight and bias initialization can often have a big impact (good or bad) on model classification accuracy and performance.\n\nTraining the Binary Classifier\nThe demo program encapsulates the training code into a single train() function. The definition of train() is presented in Listing 5.\n\nListing 5: Training the Binary Classifier\n\n```def train(net, ds, bs, me, le, lr, verbose):\n# NN, dataset, batch_size, max_epochs,\n# log_every, learn_rate. optimizer and loss hard-coded.\nnet.train()\nshuffle=True)\nloss_func = T.nn.BCELoss() # assumes sigmoid activation\nopt = T.optim.SGD(net.parameters(), lr=lr)\nfor epoch in range(0, me):\nepoch_loss = 0.0\nfor (batch_idx, batch) in enumerate(data_ldr):\nX = batch # inputs\nY = batch # targets\n\noupt = net(X) # compute output/target\nloss_val = loss_func(oupt, Y) # a tensor\nepoch_loss += loss_val.item() # accumulate for display\nopt.step() # update weights\n\nif epoch % le == 0 and verbose == True:\nprint(\"epoch = %4d loss = %0.4f\" % (epoch, epoch_loss))```\n\nThe training code is ordinary in the sense that there is nothing special needed for a PUL scenario. The train() method uses SGD optimization (stochastic gradient descent). This is the most rudimentary optimization technique. For complex data with many features and complex neural binary classification architecture, the Adam optimizer often gives better results.\n\nWrapping Up\nWhen you have a dataset with just positive (class 1) and unlabeled (could be class 0 or class 1) data items, there is no magic technique. You must intelligently guess the true class labels for the unlabeled items using information in the dataset. Because the process of labeling training data for a machine learning model is often costly and time consuming, PUL problem scenarios are becoming increasingly common.\n\nBecause the PUL guessing process is probabilistic, there are many approaches you can use. The approach presented in this article is based on a deep neural binary classifier, and is new and mostly unexplored. Most alternative techniques for guessing class labels for PUL data use some form of k-means clustering. The idea is that unlabeled data items that are close to positive class items are more likely to be positive than negative, and that a cluster of data items that are all unlabeled, and which are all far from a cluster of mostly positive data items, are most likely negative.\n\nThe PUL guessing approaches that use k-means all assume that the distance between two data items can be measured (usually by Euclidean distance), but this means that all data items must be strictly numeric. The technique presented in this article has the advantage that it can work with both numeric and categorical data (or mixed numeric and categorical as in the demo data)."
]
| [
null,
"https://visualstudiomagazine.com/~/media/ECG/visualstudiomagazine/design/vsmlogowhite.ashx",
null,
"https://visualstudiomagazine.com/articles/2021/05/20/~/media/ECG/VSLive/landingpage/mobilemenubutton.svg",
null,
"https://visualstudiomagazine.com/articles/2021/05/20/~/media/ECG/VSLive/2019/lasvegas/mobileclosebutton.svg",
null,
"https://visualstudiomagazine.com/articles/2021/05/20/~/media/ECG/visualstudiomagazine/Images/introimages/VSMSchwartz0513BigDataBrust.ashx",
null,
"https://visualstudiomagazine.com/articles/2021/05/20/~/media/ECG/visualstudiomagazine/Images/2021/05/pul1_s.ashx",
null,
"https://visualstudiomagazine.com/articles/2021/05/20/~/media/ECG/visualstudiomagazine/Images/2021/05/pul2_s.ashx",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8169396,"math_prob":0.88054067,"size":24387,"snap":"2021-43-2021-49","text_gpt3_token_len":6136,"char_repetition_ratio":0.1565435,"word_repetition_ratio":0.10343022,"special_character_ratio":0.28318366,"punctuation_ratio":0.13594772,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9861358,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-25T13:50:20Z\",\"WARC-Record-ID\":\"<urn:uuid:afd15dd0-6bc3-4be8-a9c8-216c10aa21e0>\",\"Content-Length\":\"130395\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f48a068e-3b64-4d5c-8073-16a6ffcbc238>\",\"WARC-Concurrent-To\":\"<urn:uuid:9ff28be7-6ea5-498a-aae4-97f7c9da66c7>\",\"WARC-IP-Address\":\"172.66.42.231\",\"WARC-Target-URI\":\"https://visualstudiomagazine.com/articles/2021/05/20/pul-pytorch.aspx\",\"WARC-Payload-Digest\":\"sha1:76NO2ZITCPGIG6O46ILYKK57ATR4ZUWV\",\"WARC-Block-Digest\":\"sha1:VPCW42FDZBXD5WX6W3H6KIFW3TVXWS3V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587711.69_warc_CC-MAIN-20211025123123-20211025153123-00004.warc.gz\"}"} |
https://www.colorhexa.com/c8a27c | [
"# #c8a27c Color Information\n\nIn a RGB color space, hex #c8a27c is composed of 78.4% red, 63.5% green and 48.6% blue. Whereas in a CMYK color space, it is composed of 0% cyan, 19% magenta, 38% yellow and 21.6% black. It has a hue angle of 30 degrees, a saturation of 40.9% and a lightness of 63.5%. #c8a27c color hex could be obtained by blending #fffff8 with #914500. Closest websafe color is: #cc9966.\n\n• R 78\n• G 64\n• B 49\nRGB color chart\n• C 0\n• M 19\n• Y 38\n• K 22\nCMYK color chart\n\n#c8a27c color description : Slightly desaturated orange.\n\n# #c8a27c Color Conversion\n\nThe hexadecimal color #c8a27c has RGB values of R:200, G:162, B:124 and CMYK values of C:0, M:0.19, Y:0.38, K:0.22. Its decimal value is 13148796.\n\nHex triplet RGB Decimal c8a27c `#c8a27c` 200, 162, 124 `rgb(200,162,124)` 78.4, 63.5, 48.6 `rgb(78.4%,63.5%,48.6%)` 0, 19, 38, 22 30°, 40.9, 63.5 `hsl(30,40.9%,63.5%)` 30°, 38, 78.4 cc9966 `#cc9966`\nCIE-LAB 69.167, 8.771, 25.061 40.378, 39.577, 24.58 0.386, 0.379, 39.577 69.167, 26.552, 70.71 69.167, 27.297, 31.401 62.91, 4.475, 20.871 11001000, 10100010, 01111100\n\n# Color Schemes with #c8a27c\n\n• #c8a27c\n``#c8a27c` `rgb(200,162,124)``\n• #7ca2c8\n``#7ca2c8` `rgb(124,162,200)``\nComplementary Color\n• #c87c7c\n``#c87c7c` `rgb(200,124,124)``\n• #c8a27c\n``#c8a27c` `rgb(200,162,124)``\n• #c8c87c\n``#c8c87c` `rgb(200,200,124)``\nAnalogous Color\n• #7c7cc8\n``#7c7cc8` `rgb(124,124,200)``\n• #c8a27c\n``#c8a27c` `rgb(200,162,124)``\n• #7cc8c8\n``#7cc8c8` `rgb(124,200,200)``\nSplit Complementary Color\n• #a27cc8\n``#a27cc8` `rgb(162,124,200)``\n• #c8a27c\n``#c8a27c` `rgb(200,162,124)``\n• #7cc8a2\n``#7cc8a2` `rgb(124,200,162)``\nTriadic Color\n• #c87ca2\n``#c87ca2` `rgb(200,124,162)``\n• #c8a27c\n``#c8a27c` `rgb(200,162,124)``\n• #7cc8a2\n``#7cc8a2` `rgb(124,200,162)``\n• #7ca2c8\n``#7ca2c8` `rgb(124,162,200)``\nTetradic Color\n• #ae7c49\n``#ae7c49` `rgb(174,124,73)``\n• #b98958\n``#b98958` `rgb(185,137,88)``\n• #c0956a\n``#c0956a` `rgb(192,149,106)``\n• #c8a27c\n``#c8a27c` `rgb(200,162,124)``\n• #d0af8e\n``#d0af8e` `rgb(208,175,142)``\n• #d7bca0\n``#d7bca0` `rgb(215,188,160)``\n• #dfc8b2\n``#dfc8b2` `rgb(223,200,178)``\nMonochromatic Color\n\n# Alternatives to #c8a27c\n\nBelow, you can see some colors close to #c8a27c. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #c88f7c\n``#c88f7c` `rgb(200,143,124)``\n• #c8957c\n``#c8957c` `rgb(200,149,124)``\n• #c89c7c\n``#c89c7c` `rgb(200,156,124)``\n• #c8a27c\n``#c8a27c` `rgb(200,162,124)``\n• #c8a87c\n``#c8a87c` `rgb(200,168,124)``\n• #c8af7c\n``#c8af7c` `rgb(200,175,124)``\n• #c8b57c\n``#c8b57c` `rgb(200,181,124)``\nSimilar Colors\n\n# #c8a27c Preview\n\nText with hexadecimal color #c8a27c\n\nThis text has a font color of #c8a27c.\n\n``<span style=\"color:#c8a27c;\">Text here</span>``\n#c8a27c background color\n\nThis paragraph has a background color of #c8a27c.\n\n``<p style=\"background-color:#c8a27c;\">Content here</p>``\n#c8a27c border color\n\nThis element has a border color of #c8a27c.\n\n``<div style=\"border:1px solid #c8a27c;\">Content here</div>``\nCSS codes\n``.text {color:#c8a27c;}``\n``.background {background-color:#c8a27c;}``\n``.border {border:1px solid #c8a27c;}``\n\n# Shades and Tints of #c8a27c\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #070503 is the darkest color, while #fcfaf8 is the lightest one.\n\n• #070503\n``#070503` `rgb(7,5,3)``\n• #150f09\n``#150f09` `rgb(21,15,9)``\n• #23190f\n``#23190f` `rgb(35,25,15)``\n• #312314\n``#312314` `rgb(49,35,20)``\n• #3e2c1a\n``#3e2c1a` `rgb(62,44,26)``\n• #4c3620\n``#4c3620` `rgb(76,54,32)``\n• #5a4026\n``#5a4026` `rgb(90,64,38)``\n• #684a2c\n``#684a2c` `rgb(104,74,44)``\n• #765431\n``#765431` `rgb(118,84,49)``\n• #835d37\n``#835d37` `rgb(131,93,55)``\n• #91673d\n``#91673d` `rgb(145,103,61)``\n• #9f7143\n``#9f7143` `rgb(159,113,67)``\n• #ad7b49\n``#ad7b49` `rgb(173,123,73)``\nShade Color Variation\n• #b78553\n``#b78553` `rgb(183,133,83)``\n• #bc8e60\n``#bc8e60` `rgb(188,142,96)``\n• #c2986e\n``#c2986e` `rgb(194,152,110)``\n• #c8a27c\n``#c8a27c` `rgb(200,162,124)``\n• #ceac8a\n``#ceac8a` `rgb(206,172,138)``\n• #d4b698\n``#d4b698` `rgb(212,182,152)``\n• #d9bfa5\n``#d9bfa5` `rgb(217,191,165)``\n• #dfc9b3\n``#dfc9b3` `rgb(223,201,179)``\n• #e5d3c1\n``#e5d3c1` `rgb(229,211,193)``\n• #ebddcf\n``#ebddcf` `rgb(235,221,207)``\n• #f1e7dd\n``#f1e7dd` `rgb(241,231,221)``\n• #f6f0eb\n``#f6f0eb` `rgb(246,240,235)``\n• #fcfaf8\n``#fcfaf8` `rgb(252,250,248)``\nTint Color Variation\n\n# Tones of #c8a27c\n\nA tone is produced by adding gray to any pure hue. In this case, #a4a2a0 is the less saturated color, while #faa24a is the most saturated one.\n\n• #a4a2a0\n``#a4a2a0` `rgb(164,162,160)``\n• #aba299\n``#aba299` `rgb(171,162,153)``\n• #b3a291\n``#b3a291` `rgb(179,162,145)``\n• #baa28a\n``#baa28a` `rgb(186,162,138)``\n• #c1a283\n``#c1a283` `rgb(193,162,131)``\n• #c8a27c\n``#c8a27c` `rgb(200,162,124)``\n• #cfa275\n``#cfa275` `rgb(207,162,117)``\n• #d6a26e\n``#d6a26e` `rgb(214,162,110)``\n• #dda267\n``#dda267` `rgb(221,162,103)``\n• #e5a25f\n``#e5a25f` `rgb(229,162,95)``\n• #eca258\n``#eca258` `rgb(236,162,88)``\n• #f3a251\n``#f3a251` `rgb(243,162,81)``\n• #faa24a\n``#faa24a` `rgb(250,162,74)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #c8a27c is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.52224183,"math_prob":0.627006,"size":3715,"snap":"2021-21-2021-25","text_gpt3_token_len":1683,"char_repetition_ratio":0.1293452,"word_repetition_ratio":0.011090573,"special_character_ratio":0.54024225,"punctuation_ratio":0.23370786,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9681679,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-20T14:17:31Z\",\"WARC-Record-ID\":\"<urn:uuid:232fe502-ed63-45fc-ad8a-1da4e643613c>\",\"Content-Length\":\"36348\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e2044fcb-0283-4100-b9b7-aa084357ac84>\",\"WARC-Concurrent-To\":\"<urn:uuid:748889be-d627-4c04-97cf-4b20836aff59>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/c8a27c\",\"WARC-Payload-Digest\":\"sha1:5M45QUGRV3BSRZEUZ2SAUDE7BAICO2SJ\",\"WARC-Block-Digest\":\"sha1:SFW6KENZHCNWVRLCBF53637U7WV6YXYN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487662882.61_warc_CC-MAIN-20210620114611-20210620144611-00140.warc.gz\"}"} |
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-connecting-concepts-through-application/chapter-3-exponents-polynomials-and-functions-chapter-test-page-289/14 | [
"## Intermediate Algebra: Connecting Concepts through Application\n\n9x$^2$y-4xy-2y$^2$+8\n(4x$^2$y+3xy-2y$^2$)+(5x$^2$y-7xy+8) =4x$^2$y+3xy-2y$^2$+5x$^2$y-7xy+8 =9x$^2$y-4xy-2y$^2$+8"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7737593,"math_prob":0.999987,"size":385,"snap":"2019-51-2020-05","text_gpt3_token_len":162,"char_repetition_ratio":0.15485564,"word_repetition_ratio":0.0,"special_character_ratio":0.36103895,"punctuation_ratio":0.045454547,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99980575,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-16T07:17:26Z\",\"WARC-Record-ID\":\"<urn:uuid:18d28b7d-f2f8-46b7-aee7-1f0e0233e6b1>\",\"Content-Length\":\"67447\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a4650a4-0f3c-433c-8a20-fc4eba342729>\",\"WARC-Concurrent-To\":\"<urn:uuid:f243d4e3-3024-4a66-8949-417a8eb82dd0>\",\"WARC-IP-Address\":\"54.210.73.90\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-connecting-concepts-through-application/chapter-3-exponents-polynomials-and-functions-chapter-test-page-289/14\",\"WARC-Payload-Digest\":\"sha1:NXKVOYPS2BS4YYANKF4IXJCE34JYSNDZ\",\"WARC-Block-Digest\":\"sha1:Y3ZK76QYH2BCLBSTSDSSVJKGP3OKHZ3F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541318556.99_warc_CC-MAIN-20191216065654-20191216093654-00003.warc.gz\"}"} |
https://www.excitoninteractive.com/articles/read/133/vue-projects/adding-account-deposits-and-securities-to-the-vuex-store | [
"# #30 Adding Account Deposits And Securities To The Vuex Store\n\n## In this article we will start the process of making our accounts more useful in that we will expand the amount of information that is associated with them in our vuex store. Up until this point an account has solely consisted of a name but once we are done with this article each account will also be associated with any deposits made and the number and identifier for securities that have been purchased.",
null,
"Watch In YouTube\n\n## store-constants.ts (1:34)\n\n``````...\nexport const STATE_ACCOUNTS_DEPOSITS = \"STATE_ACCOUNTS_DEPOSITS\";\nexport const STATE_ACCOUNTS_SECURITIES = \"STATE_ACCOUNTS_SECURITIES\";\n...``````\nsrc ⟩ store ⟩ store-constants.ts\n\n## account-deposit-model.ts (2:01)\n\n``````import { AccountModel } from \"@/store/account-model\";\n\nexport interface IAccountDepositModelConfig {\naccountId: number;\namount: number;\ndate: Date;\nid: number;\n}\n\nexport class AccountDepositModel {\nprivate _account?: AccountModel;\nprivate _accountId: number;\nprivate _amount: number;\nprivate _date: Date;\nprivate _id: number;\n\npublic get account() {\nif (typeof this._account === \"undefined\") {\nthrow new Error(\"The account has not been defined.\");\n}\nreturn this._account;\n}\npublic set account(account: AccountModel) {\nthis._account = account;\n}\npublic get accountId() {\nreturn this._accountId;\n}\npublic set accountId(id: number) {\nthis._accountId = id;\n}\npublic get amount() {\nreturn this._amount;\n}\npublic set amount(amount: number) {\nthis._amount = amount;\n}\npublic get date() {\nreturn this._date;\n}\npublic set date(date: Date) {\nthis._date = date;\n}\npublic get id() {\nreturn this._id;\n}\npublic set id(id: number) {\nthis._id = id;\n}\n\nconstructor(config: IAccountDepositModelConfig) {\nthis._accountId = config.accountId;\nthis._amount = config.amount;\nthis._date = config.date;\nthis._id = config.id;\n}\n}``````\nsrc ⟩ store ⟩ account-deposit-model.ts\n\n## account-security-model.ts (5:48)\n\n``````import { AccountModel } from \"@/store/account-model\";\nimport { SecurityModel } from \"@/store/security-model\";\n\nexport interface IAccountSecurityModelConfig {\naccountId: number;\nid: number;\nsecurity?: SecurityModel;\nsecurityId: number;\nshares: number;\n}\n\nexport class AccountSecurityModel {\nprivate _id: number;\nprivate _account?: AccountModel;\nprivate _accountId: number;\nprivate _security?: SecurityModel;\nprivate _securityId: number;\nprivate _shares: number;\n\npublic get account() {\nif (typeof this._account === \"undefined\") {\nthrow new Error(\"The account has not been defined.\");\n}\nreturn this._account;\n}\npublic set account(account: AccountModel) {\nthis._account = account;\n}\npublic get accountId() {\nreturn this._accountId;\n}\npublic set accountId(accountId: number) {\nthis._accountId = accountId;\n}\npublic get id() {\nreturn this._id;\n}\npublic set id(id: number) {\nthis._id = id;\n}\npublic get security() {\nif (typeof this._security === \"undefined\") {\nthrow new Error(\"The security has not been defined.\");\n}\nreturn this._security;\n}\npublic set security(security: SecurityModel) {\nthis._security = security;\n}\npublic get securityId() {\nreturn this._securityId;\n}\npublic get shares() {\nreturn this._shares;\n}\npublic set shares(value: number) {\nthis._shares = value;\n}\npublic get value() {\nreturn this.shares * this.security.last;\n}\n\nconstructor(config: IAccountSecurityModelConfig) {\nthis._accountId = config.accountId;\nthis._id = config.id;\nthis._securityId = config.securityId;\nthis._shares = config.shares;\n\nif (typeof config.security !== \"undefined\") {\nthis._security = config.security;\n}\n}\n}``````\nsrc ⟩ store ⟩ account-security-model.ts\n\n## account-types.ts (10:05)\n\n``````import {\n...,\nSTATE_ACCOUNTS_DEPOSITS,\nSTATE_ACCOUNTS_SECURITIES,\n} from \"@/store/store-constants\";\n\n...\nimport { AccountDepositModel } from \"@/store/account-deposit-model\";\nimport { AccountSecurityModel } from \"@/store/account-security-model\";\n...\nexport interface IAccountDepositModelState {\nindex: number;\nitems: AccountDepositModel[];\n}\n\nexport interface IAccountSecurityModelState {\nindex: number;\nitems: AccountSecurityModel[];\n}\n\nexport interface IAccountState {\n...\n[STATE_ACCOUNTS_DEPOSITS]: IAccountDepositModelState;\n[STATE_ACCOUNTS_SECURITIES]: IAccountSecurityModelState;\n}``````\nsrc ⟩ store ⟩ account-types.ts\n\n## account-deposit-initial-state.ts (12:00)\n\n``````import { IAccountDepositModelConfig, AccountDepositModel } from \"@/store/account-deposit-model\";\nimport { initialState as accountState } from \"@/store/account-initial-state\";\nimport { IAccountDepositModelState } from \"@/store/account-types\";\n\nconst deposits: AccountDepositModel[] = [];\n\nfunction createDeposit(id: number, config: Omit<IAccountDepositModelConfig, \"id\">) {\nconst deposit = new AccountDepositModel({ id, ...config });\ndeposits.push(deposit);\nreturn (id += 1);\n}\n\nlet index = 1;\nif (process.env.NODE_ENV === \"development\") {\nindex = createDeposit(index, { accountId: accountState.items.id, amount: 2300, date: new Date(2019, 0, 4) });\nindex = createDeposit(index, { accountId: accountState.items.id, amount: 375, date: new Date(2018, 3, 19) });\nindex = createDeposit(index, { accountId: accountState.items.id, amount: 5000, date: new Date(2017, 5, 14) });\nindex = createDeposit(index, { accountId: accountState.items.id, amount: 10000, date: new Date(2015, 8, 27) });\n}\n\nexport const initialState: IAccountDepositModelState = {\nindex,\nitems: deposits,\n};``````\nsrc ⟩ store ⟩ account-deposit-initial-state.ts\n\n## account-security-initial-state.ts (15:43)\n\n``````import { IAccountSecurityModelConfig, AccountSecurityModel } from \"@/store/account-security-model\";\nimport { IAccountSecurityModelState } from \"@/store/account-types\";\nimport { SecurityModel } from \"@/store/security-model\";\n\nimport { initialState as accountState } from \"@/store/account-initial-state\";\nimport { initialState as securitiesState } from \"@/store/security-model-initial-state\";\n\nimport { sort } from \"@/store/functions\";\n\nconst accountSecurities: AccountSecurityModel[] = [];\n\nfunction createAccountSecurity(id: number, security: SecurityModel, config: Omit<IAccountSecurityModelConfig, \"id\">) {\nconst existing = accountSecurities.find(\n(x) => x.accountId === config.accountId && x.securityId === config.securityId,\n);\nif (typeof existing === \"undefined\") {\nconst accountSecurity = new AccountSecurityModel({ id, ...config });\naccountSecurity.security = security;\naccountSecurities.push(accountSecurity);\nreturn (id += 1);\n} else {\nexisting.shares += config.shares;\nreturn id;\n}\n}\n\nlet index = 1;\nif (process.env.NODE_ENV === \"development\") {\naccountState.items.forEach((x) => {\nfor (let i = 0; i < 10; i++) {\nconst shares = Math.floor(Math.random() * 100 + 1);\nconst securityIndex = Math.floor(Math.random() * (securitiesState.items.length - 2) + 1);\nconst security = securitiesState.items[securityIndex];\nindex = createAccountSecurity(index, security, { accountId: x.id, securityId: security.id, shares });\n}\n});\n}\n\nexport const initialState: IAccountSecurityModelState = {\nindex,\nitems: sort(accountSecurities, (x) => x.security.symbol),\n};``````\nsrc ⟩ store ⟩ account-security-initial-state.ts\n\n## account-module.ts (24:57)\n\n``````...\nimport {\n...,\nSTATE_ACCOUNTS_DEPOSITS,\nSTATE_ACCOUNTS_SECURITIES,\n} from \"@/store/store-constants\";\n...\nimport { initialState as depositState } from \"@/store/account-deposit-initial-state\";\nimport { initialState as securityState } from \"@/store/account-security-initial-state\";\n...\nexport const accountsState = {\n...,\n[STATE_ACCOUNTS_DEPOSITS]: depositState,\n[STATE_ACCOUNTS_SECURITIES]: securityState,\n};``````\nsrc ⟩ store ⟩ account-module.ts",
null,
"Exciton Interactive LLC"
]
| [
null,
"https://img.youtube.com/vi/kWA9-IfiBHQ/hqdefault.jpg",
null,
"https://www.excitoninteractive.com/images/logo.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5446587,"math_prob":0.69349295,"size":8536,"snap":"2020-34-2020-40","text_gpt3_token_len":2110,"char_repetition_ratio":0.21729958,"word_repetition_ratio":0.16514598,"special_character_ratio":0.2785848,"punctuation_ratio":0.28601107,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95348674,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-01T06:48:00Z\",\"WARC-Record-ID\":\"<urn:uuid:d9ba3f73-3600-453f-a608-8d5ef232081b>\",\"Content-Length\":\"37583\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8af92477-e2c9-4408-bef8-38370d54cbda>\",\"WARC-Concurrent-To\":\"<urn:uuid:ccd82479-f244-4ac7-bb85-2954f93c40f1>\",\"WARC-IP-Address\":\"96.31.37.74\",\"WARC-Target-URI\":\"https://www.excitoninteractive.com/articles/read/133/vue-projects/adding-account-deposits-and-securities-to-the-vuex-store\",\"WARC-Payload-Digest\":\"sha1:ZUVXSKHOLWUAIMD33CGWPTSTAGQRQ46T\",\"WARC-Block-Digest\":\"sha1:6C4D6UWQ5A43TUKPWWSZ5LV2EYCI2ITE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402124756.81_warc_CC-MAIN-20201001062039-20201001092039-00483.warc.gz\"}"} |
https://ixtrieve.fh-koeln.de/birds/litie/document/25171 | [
"# Document (#25171)\n\nAuthor\nStock, M.\nTitle\nBoulevard online : ASV Infopool\nSource\nPassword. 2002, H.10, S.22-27\nYear\n2002\nAbstract\nDer Artikel ist eine kritische Beschreibung professioneller Online-Produkte des Axel-Springer-Verlages. Eingegangen wird vor allem auf 3 Datenbanken: Artikeldatenbank, biographische Datenbank und Filmdatenbank, indem wir die Inhalte sowie die Retrievalfunktionalität skizzieren. Inhaltliche Schwerpunkte von ASV Infopool sind Boulevard, Kultur, Politik, Sport und Tagesgeschehen, vorwiegend in Deutschland; Zielgruppe des Angebotes sinf Verlage und andere Medienunternehmen\nTheme\nInformationsmittel\nForm\nZeitungen\nFilme\nObject\nASV Infopool\n\n## Similar documents (author)\n\n1. Stock, M.; Stock, W.G.: Klassifikation und terminologische Kontrolle : Yahoo!, Open Directory und Oingo im Vergleich (2000) 4.92\n```4.917514 = sum of:\n4.917514 = weight(author_txt:stock in 494) [ClassicSimilarity], result of:\n4.917514 = fieldWeight in 494, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.954415 = idf(docFreq=112, maxDocs=43556)\n0.5 = fieldNorm(doc=494)\n```\n2. Stock, M.; Stock, W.G.: Internet-Suchwerkzeuge im Vergleich (III) : Informationslinguistik und -statistik: AltaVista, FAST und Northern Light (2001) 4.92\n```4.917514 = sum of:\n4.917514 = weight(author_txt:stock in 576) [ClassicSimilarity], result of:\n4.917514 = fieldWeight in 576, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.954415 = idf(docFreq=112, maxDocs=43556)\n0.5 = fieldNorm(doc=576)\n```\n3. Stock, M.; Stock, W.G.: Internet-Suchwerkzeuge im Vergleich (IV) : Relevance Ranking nach \"Popularität\" von Webseiten: Google (2001) 4.92\n```4.917514 = sum of:\n4.917514 = weight(author_txt:stock in 769) [ClassicSimilarity], result of:\n4.917514 = fieldWeight in 769, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.954415 = idf(docFreq=112, maxDocs=43556)\n0.5 = fieldNorm(doc=769)\n```\n4. Stock, M.; Stock, W.G.: Internet-Suchwerkzeuge im Vergleich : Teil 1: Retrievaltests mit Known Item searches (2000) 4.92\n```4.917514 = sum of:\n4.917514 = weight(author_txt:stock in 770) [ClassicSimilarity], result of:\n4.917514 = fieldWeight in 770, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.954415 = idf(docFreq=112, maxDocs=43556)\n0.5 = fieldNorm(doc=770)\n```\n5. Stock, M.; Stock, W.G.: Medizininformationen : Literaturnachweise, Volltexte und klinische Entscheidungen aus einer Hand (2004) 4.92\n```4.917514 = sum of:\n4.917514 = weight(author_txt:stock in 4286) [ClassicSimilarity], result of:\n4.917514 = fieldWeight in 4286, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.954415 = idf(docFreq=112, maxDocs=43556)\n0.5 = fieldNorm(doc=4286)\n```\n\n## Similar documents (content)\n\n1. Charlier, M.: Wer abonniert schon Mickeymaus plus Bill Gates? : immer mehr Online-Dienste mit neuen Angeboten und Strategien erzeugen Goldgräberstimmung im Cyberspace (1995) 0.13\n```0.12815814 = sum of:\n0.12815814 = product of:\n1.0679845 = sum of:\n0.0699408 = weight(abstract_txt:online in 2749) [ClassicSimilarity], result of:\n0.0699408 = score(doc=2749,freq=1.0), product of:\n0.087179884 = queryWeight, product of:\n1.2709975 = boost\n3.667467 = idf(docFreq=3023, maxDocs=43556)\n0.018702745 = queryNorm\n0.80225843 = fieldWeight in 2749, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.667467 = idf(docFreq=3023, maxDocs=43556)\n0.21875 = fieldNorm(doc=2749)\n0.3931822 = weight(abstract_txt:springer in 2749) [ClassicSimilarity], result of:\n0.3931822 = score(doc=2749,freq=1.0), product of:\n0.21876699 = queryWeight, product of:\n1.4236802 = boost\n8.216067 = idf(docFreq=31, maxDocs=43556)\n0.018702745 = queryNorm\n1.7972647 = fieldWeight in 2749, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.216067 = idf(docFreq=31, maxDocs=43556)\n0.21875 = fieldNorm(doc=2749)\n0.6048615 = weight(abstract_txt:axel in 2749) [ClassicSimilarity], result of:\n0.6048615 = score(doc=2749,freq=1.0), product of:\n0.29153445 = queryWeight, product of:\n1.6434878 = boost\n9.484578 = idf(docFreq=8, maxDocs=43556)\n0.018702745 = queryNorm\n2.0747514 = fieldWeight in 2749, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.484578 = idf(docFreq=8, maxDocs=43556)\n0.21875 = fieldNorm(doc=2749)\n0.12 = coord(3/25)\n```\n2. Kind, J.: Praxis des Information Retrieval (2004) 0.09\n```0.0888626 = sum of:\n0.0888626 = product of:\n0.444313 = sum of:\n0.10881366 = weight(abstract_txt:datenbanken in 3932) [ClassicSimilarity], result of:\n0.10881366 = score(doc=3932,freq=5.0), product of:\n0.107933655 = queryWeight, product of:\n5.771006 = idf(docFreq=368, maxDocs=43556)\n0.018702745 = queryNorm\n1.0081532 = fieldWeight in 3932, product of:\n2.236068 = tf(freq=5.0), with freq of:\n5.0 = termFreq=5.0\n5.771006 = idf(docFreq=368, maxDocs=43556)\n0.078125 = fieldNorm(doc=3932)\n0.057421118 = weight(abstract_txt:inhalte in 3932) [ClassicSimilarity], result of:\n0.057421118 = score(doc=3932,freq=1.0), product of:\n0.12052367 = queryWeight, product of:\n1.0567147 = boost\n6.0983067 = idf(docFreq=265, maxDocs=43556)\n0.018702745 = queryNorm\n0.4764302 = fieldWeight in 3932, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.0983067 = idf(docFreq=265, maxDocs=43556)\n0.078125 = fieldNorm(doc=3932)\n0.058848176 = weight(abstract_txt:datenbank in 3932) [ClassicSimilarity], result of:\n0.058848176 = score(doc=3932,freq=1.0), product of:\n0.12251237 = queryWeight, product of:\n1.0653971 = boost\n6.148413 = idf(docFreq=252, maxDocs=43556)\n0.018702745 = queryNorm\n0.48034477 = fieldWeight in 3932, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.148413 = idf(docFreq=252, maxDocs=43556)\n0.078125 = fieldNorm(doc=3932)\n0.06118545 = weight(abstract_txt:online in 3932) [ClassicSimilarity], result of:\n0.06118545 = score(doc=3932,freq=6.0), product of:\n0.087179884 = queryWeight, product of:\n1.2709975 = boost\n3.667467 = idf(docFreq=3023, maxDocs=43556)\n0.018702745 = queryNorm\n0.7018299 = fieldWeight in 3932, product of:\n2.4494898 = tf(freq=6.0), with freq of:\n6.0 = termFreq=6.0\n3.667467 = idf(docFreq=3023, maxDocs=43556)\n0.078125 = fieldNorm(doc=3932)\n0.15804458 = weight(abstract_txt:professioneller in 3932) [ClassicSimilarity], result of:\n0.15804458 = score(doc=3932,freq=1.0), product of:\n0.23670693 = queryWeight, product of:\n1.4809045 = boost\n8.5463085 = idf(docFreq=22, maxDocs=43556)\n0.018702745 = queryNorm\n0.6676804 = fieldWeight in 3932, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.5463085 = idf(docFreq=22, maxDocs=43556)\n0.078125 = fieldNorm(doc=3932)\n0.2 = coord(5/25)\n```\n3. Koch, H.-A.: Biographische Lexika (1990) 0.07\n```0.06912703 = sum of:\n0.06912703 = product of:\n1.7281758 = sum of:\n1.7281758 = weight(title_txt:biographische in 3810) [ClassicSimilarity], result of:\n1.7281758 = score(doc=3810,freq=1.0), product of:\n0.29153445 = queryWeight, product of:\n1.6434878 = boost\n9.484578 = idf(docFreq=8, maxDocs=43556)\n0.018702745 = queryNorm\n5.927861 = fieldWeight in 3810, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.484578 = idf(docFreq=8, maxDocs=43556)\n0.625 = fieldNorm(doc=3810)\n0.04 = coord(1/25)\n```\n4. SpringerLink : Umfang mehr als verdoppelt (2005) 0.07\n```0.06756062 = sum of:\n0.06756062 = product of:\n0.4222539 = sum of:\n0.04866295 = weight(abstract_txt:datenbanken in 4334) [ClassicSimilarity], result of:\n0.04866295 = score(doc=4334,freq=1.0), product of:\n0.107933655 = queryWeight, product of:\n5.771006 = idf(docFreq=368, maxDocs=43556)\n0.018702745 = queryNorm\n0.45085984 = fieldWeight in 4334, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.771006 = idf(docFreq=368, maxDocs=43556)\n0.078125 = fieldNorm(doc=4334)\n0.057421118 = weight(abstract_txt:inhalte in 4334) [ClassicSimilarity], result of:\n0.057421118 = score(doc=4334,freq=1.0), product of:\n0.12052367 = queryWeight, product of:\n1.0567147 = boost\n6.0983067 = idf(docFreq=265, maxDocs=43556)\n0.018702745 = queryNorm\n0.4764302 = fieldWeight in 4334, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.0983067 = idf(docFreq=265, maxDocs=43556)\n0.078125 = fieldNorm(doc=4334)\n0.035325434 = weight(abstract_txt:online in 4334) [ClassicSimilarity], result of:\n0.035325434 = score(doc=4334,freq=2.0), product of:\n0.087179884 = queryWeight, product of:\n1.2709975 = boost\n3.667467 = idf(docFreq=3023, maxDocs=43556)\n0.018702745 = queryNorm\n0.40520167 = fieldWeight in 4334, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.667467 = idf(docFreq=3023, maxDocs=43556)\n0.078125 = fieldNorm(doc=4334)\n0.28084442 = weight(abstract_txt:springer in 4334) [ClassicSimilarity], result of:\n0.28084442 = score(doc=4334,freq=4.0), product of:\n0.21876699 = queryWeight, product of:\n1.4236802 = boost\n8.216067 = idf(docFreq=31, maxDocs=43556)\n0.018702745 = queryNorm\n1.2837605 = fieldWeight in 4334, product of:\n2.0 = tf(freq=4.0), with freq of:\n4.0 = termFreq=4.0\n8.216067 = idf(docFreq=31, maxDocs=43556)\n0.078125 = fieldNorm(doc=4334)\n0.16 = coord(4/25)\n```\n5. Buhl, O.: Nutzen und Nutzung netzvermittelter Pressearchive : Erhebung und Bewertung von Print-Archivdaten als Internet-Angebot und -Recherchequelle des Axel-Springer-Verlags (1997) 0.07\n```0.0658173 = sum of:\n0.0658173 = product of:\n0.5484775 = sum of:\n0.049455613 = weight(abstract_txt:online in 2480) [ClassicSimilarity], result of:\n0.049455613 = score(doc=2480,freq=2.0), product of:\n0.087179884 = queryWeight, product of:\n1.2709975 = boost\n3.667467 = idf(docFreq=3023, maxDocs=43556)\n0.018702745 = queryNorm\n0.5672824 = fieldWeight in 2480, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.667467 = idf(docFreq=3023, maxDocs=43556)\n0.109375 = fieldNorm(doc=2480)\n0.1965911 = weight(abstract_txt:springer in 2480) [ClassicSimilarity], result of:\n0.1965911 = score(doc=2480,freq=1.0), product of:\n0.21876699 = queryWeight, product of:\n1.4236802 = boost\n8.216067 = idf(docFreq=31, maxDocs=43556)\n0.018702745 = queryNorm\n0.89863235 = fieldWeight in 2480, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.216067 = idf(docFreq=31, maxDocs=43556)\n0.109375 = fieldNorm(doc=2480)\n0.30243075 = weight(abstract_txt:axel in 2480) [ClassicSimilarity], result of:\n0.30243075 = score(doc=2480,freq=1.0), product of:\n0.29153445 = queryWeight, product of:\n1.6434878 = boost\n9.484578 = idf(docFreq=8, maxDocs=43556)\n0.018702745 = queryNorm\n1.0373757 = fieldWeight in 2480, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.484578 = idf(docFreq=8, maxDocs=43556)\n0.109375 = fieldNorm(doc=2480)\n0.12 = coord(3/25)\n```"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6227489,"math_prob":0.99540967,"size":7796,"snap":"2022-40-2023-06","text_gpt3_token_len":3019,"char_repetition_ratio":0.21239734,"word_repetition_ratio":0.409234,"special_character_ratio":0.5289892,"punctuation_ratio":0.28053543,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99955934,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T20:33:51Z\",\"WARC-Record-ID\":\"<urn:uuid:b8b0e862-10d8-435d-97c3-3d0a80b35586>\",\"Content-Length\":\"20047\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ffb8c9d5-f73b-40d8-9739-191ba2d66df1>\",\"WARC-Concurrent-To\":\"<urn:uuid:b10c2ffc-023f-40d2-920b-e85ef1bd1cec>\",\"WARC-IP-Address\":\"139.6.160.6\",\"WARC-Target-URI\":\"https://ixtrieve.fh-koeln.de/birds/litie/document/25171\",\"WARC-Payload-Digest\":\"sha1:TI7K2XY5ZIR2MX6RQKPZQBDE6Q5STY2L\",\"WARC-Block-Digest\":\"sha1:OLSX5BPJQSSUQKVGH44UKXNEVAELPNCU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335365.63_warc_CC-MAIN-20220929194230-20220929224230-00611.warc.gz\"}"} |
http://forums.wolfram.com/mathgroup/archive/2000/Jul/msg00152.html | [
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Re: Mathematica gives bad integral ??\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg24324] Re: Mathematica gives bad integral ??\n• From: Jens-Peer Kuska <kuska at informatik.uni-leipzig.de>\n• Date: Sun, 9 Jul 2000 04:52:34 -0400 (EDT)\n• Organization: Universitaet Leipzig\n• References: <8k3n38\\[email protected]>\n• Sender: owner-wri-mathgroup at wolfram.com\n\n```Hi,\n\nI don't know an explanation because\n\nires = FullSimplify[Integrate[1/Sqrt[1 - Sin[2x]], x]];\n\n(ArcTanh[(1 + Tan[x/2])/Sqrt]*(Cos[x] - Sin[x]))/\nSqrt[1/2 - Cos[x]*Sin[x]]\n\nand\n\nD[ires, x] // FullSimplify\n\ngives\n\n1/Sqrt[1 - Sin[2*x]]\n\nSince you don't supply any Input I can only tell you\nthat Mathematica has no error.\n\nRegards\nJens\n\n\"J.R. Chaffer\" wrote:\n>\n> Hi, this newbie gets erroneous results with Mathematica\n> 4.0 (for students), with the following integral. Hopefully\n> someone can tell me why, and what I may be doing\n> wrong. I have tried \"Assumptions -> x e Reals\", or\n> x > 0, with same results. Integral in question is,\n>\n> Integrate[1/Sqrt[1-Sin[2x]]]\n>\n> The result is somewhat involved, instead of the expected\n> result (Schaum, \"Calculus\" 4E, p. 297),\n>\n> integral = - (1/Sqrt)Log[Abs[Csc[Pi/4-x]-Cot[Pi/4-x]]]\n>\n> One expects to get differing forms with any computer\n> algebra system, since there are so many equivalent forms\n> of algebraic expressions. However, Mathematica's form\n> and the Schaum (correct) form differ by significant\n> numerical values, as plotting shows (i.e., not some E-16\n> or some such).\n>\n> Further, and what really seems wrong, is that when one\n> differentiates Mathematica's result for the integral, one\n> does NOT get the original integrand, or anything even\n> close, numerically.\n>\n> So, I am confused. Anyone who knows the explanation\n> would be welcome to share it.\n>\n> Thank you.\n>\n> John Chaffer\n\n```\n\n• Prev by Date: Keeping Invisible Commas invisible\n• Next by Date: Re: Divisors"
]
| [
null,
"http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif",
null,
"http://forums.wolfram.com/mathgroup/images/head_archive.gif",
null,
"http://forums.wolfram.com/mathgroup/images/numbers/2.gif",
null,
"http://forums.wolfram.com/mathgroup/images/numbers/0.gif",
null,
"http://forums.wolfram.com/mathgroup/images/numbers/0.gif",
null,
"http://forums.wolfram.com/mathgroup/images/numbers/0.gif",
null,
"http://forums.wolfram.com/mathgroup/images/search_archive.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8525042,"math_prob":0.7495921,"size":1803,"snap":"2019-26-2019-30","text_gpt3_token_len":542,"char_repetition_ratio":0.105058365,"word_repetition_ratio":0.014035088,"special_character_ratio":0.31114808,"punctuation_ratio":0.17777778,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9861943,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-18T03:32:11Z\",\"WARC-Record-ID\":\"<urn:uuid:0d43112e-e99d-440e-b279-fb768fd7bf1a>\",\"Content-Length\":\"43206\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a02e9ebb-566c-40fd-b043-e351cd8fc277>\",\"WARC-Concurrent-To\":\"<urn:uuid:e9f56672-0464-4455-9069-a2b72b2415dd>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2000/Jul/msg00152.html\",\"WARC-Payload-Digest\":\"sha1:EIIQV7HYVSZK75GRMEF4FJLI5A7QB3EM\",\"WARC-Block-Digest\":\"sha1:MG2QHRSAH5KUAACKMJZMVFWETK36C5Z6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525483.64_warc_CC-MAIN-20190718022001-20190718044001-00445.warc.gz\"}"} |
https://m.wikihow.com/Convert-Pounds-to-Kilograms | [
"# How to Convert Pounds to Kilograms\n\nCo-authored by wikiHow Staff\n\nUpdated: July 26, 2019\n\nBoth pounds (lbs) and kilograms (kg) are used to measure weight or mass. Pounds are an imperial unit most often used in America, while kilograms are a metric unit commonly used in European countries. Note that 1 pound is equal to 0.454 kilograms and 1 kilogram is equal to 2.2046 pounds. There are multiple ways to easily convert between the two units.\n\n### Method 1 of 2: Pounds to Kilograms\n\n1. 1\nDivide the number of pounds by 2.2046 to use the standard equation. For example, if you want to convert 50 pounds to kilograms, divide 50 by 2.2046, which is equal to 22.67985 kg. To convert 200 pounds to kilograms, divide 200 by 2.2046, which is equal to 90.71940 kg.\n2. 2\nMultiply the number of pounds by 0.454 as an alternative. If you find multiplication easier to do in your head than division, you can use a different conversion factor to convert pounds to kilograms. For instance, to convert 100 pounds to kilograms, multiply 100 by 0.454, which is 45.4 kg.\n3. 3\nRound your answer to the hundredths place. In most instances, you won’t need to use more than 3 numbers after the decimal point. So, if your answer is 22.67985, round that to 22.68. As another example, round 90.71940 to 90.72.\n• Avoid rounding the number of pounds before you convert them to kilograms.\n\n### Method 2 of 2: Kilograms to Pounds\n\n1. 1\nMultiply the number of kilograms by 2.2046 to use the traditional formula. For instance, to convert 75 kilograms to pounds, multiply 75 by 2.2046, which is 165.345 lbs. To convert 350 kilograms to pounds, multiply 350 by 2.2046, which is 771.61 lbs.\n2. 2\nDivide the number of kilograms by 0.454 if you find that easier. If you prefer, you can use division to convert kilograms to pounds. For example, if you want to convert 25 kilograms to pounds, divide 25 by 0.454, which is 55.066 lbs. Or, divide 500 kilograms by 0.454 to get 1,101.321 lbs.\n3. 3\nRemember that there will be more pounds than kilograms. Because 1 kilogram is equal to 2.2046 pounds, there will always be more pounds than kilograms once you do the conversion. Keep this in mind and recheck your calculations if you ever have more pounds than kilograms.\n• For instance, 30 kilograms is equal to 66.138 pounds and 1,000 kilograms is equal to 2,204.6 pounds. In both examples, there are more pounds than kilograms.\n\n## Community Q&A\n\nSearch\n• Question\nHow many pounds does sixty kilograms equal?\nwikiHow Staff Editor\n60 kilograms is equal to 132.277 pounds. You can use the methods above or an online calculator to convert between the units.\n• Question\n74.4 kilograms is how many pounds?\nDonagan\nOne kilogram equals 2.2 pounds.\n• Question\nHow do I convert kilograms to pounds?\nDonagan\nMultiply kilograms by 2.2.\n• Question\nFor method #2, how would I calculate with three digit numbers?\nSubtract the first two digits of the number and then divide it by 2. If it was four digits, you'd subtract the first three digits. If it was a five digit number, you'd subtract the first four digits, and so on.\n• Question\nWhat is the kilogram to pounds formula?\nDonagan\nOne kilogram equals approximately 2.2 pounds, so multiply kilograms by 2.2 to get pounds.\n• Question\nWhen calculating the problem in method #2, why subtract the value 4 from the 46?\nYou should subtract 4 from 46 because 4 is the first digit of 46. You have to subtract the first digit because in order to convert from pounds to kilos, you subtract the first digit of the number in pounds and then divide the answer you get by two.\n• Question\nHow do I convert the per pound price to per kilogram prices?\nOne kilogram is approximately 2.21 pounds. So, to get the per kilogram price, multiply the per pound price by 2.21.\n• Question\nHow do I convert inches into centimeters?\nMultiply the number of centimeters by 2.5, since there are 2.5 centimeters per inch.\n• Question\nHow do I get a mathematically basic method without using the standard formula?\nSubtract the first digit of the weight in pounds from the total number and then divide by two.\n• Question\nIn method 2, where did they get the 4?\nDonagan\n4 is the first digit of 46.\n200 characters left\n\n## Tips\n\n• There are plenty of free online conversion sites that can help you with these calculations.\nThanks!\n\nCo-Authored By:\nwikiHow Staff Editor\nThis article was co-authored by our trained team of editors and researchers who validated it for accuracy and comprehensiveness. Together, they cited information from 6 references.\nCo-authors: 28\nUpdated: July 26, 2019\nViews: 801,281\nCategories: Conversion Aids\nArticle SummaryX\n\nTo convert pounds to kilograms, you can use the standard equation by dividing the number of pounds by 2.2046 to calculate the kilograms. Use a calculator to make it easy on yourself! Alternatively, you can multiply the number of pounds by 0.45, which will also give you the number of kilograms. No matter which method you use, be sure to round your answer to the hundredths place. For example, if your answer is 22.67985, you should round that to 22.68 kilograms. For tips on converting kilograms back to pounds, read on!"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8580395,"math_prob":0.985653,"size":5336,"snap":"2020-10-2020-16","text_gpt3_token_len":1349,"char_repetition_ratio":0.18473369,"word_repetition_ratio":0.057761732,"special_character_ratio":0.26105696,"punctuation_ratio":0.15722121,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999311,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-28T10:04:57Z\",\"WARC-Record-ID\":\"<urn:uuid:4d7fc560-8a74-48ae-b774-6877c2dad7cc>\",\"Content-Length\":\"235891\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8aee9875-c542-4b0f-ad37-07ac3ae99042>\",\"WARC-Concurrent-To\":\"<urn:uuid:4bdddd08-520d-4655-9bff-38c8cb4e7c96>\",\"WARC-IP-Address\":\"151.101.250.110\",\"WARC-Target-URI\":\"https://m.wikihow.com/Convert-Pounds-to-Kilograms\",\"WARC-Payload-Digest\":\"sha1:FMH7TWICOALMY43XC6UDRXLY6H7Y6H3N\",\"WARC-Block-Digest\":\"sha1:7NWLHJUOS3REC2LEH3T2GG53JXTYVYFV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370490497.6_warc_CC-MAIN-20200328074047-20200328104047-00008.warc.gz\"}"} |
https://math.stackexchange.com/questions/957933/question-about-cross-product-and-tensor-notation/3222006 | [
"Question about cross product and tensor notation\n\nI am a bit rusty on tensor algebra and calculus and may use some wrong terminology, but I know that the cross-product can be expressed in tensor notation with the aid of the Levi-Civita tensor as follows: $$\\mathbf{A}\\times \\mathbf{B}=\\varepsilon_{ijk}A_iB_j$$ Can someone tell me the domain of validity of the above equation? I think it is only valid for orthogonal coordinate systems where covariant and contravariant vectors are indistinguishable. If this is the case, how is the cross product defined for the most general coordinate system which may not be orthogonal?\n\nApart from this fact I heard that there is a cross-product tensor and cross-product is analogous to an operation called exterior-product. I think the cross-product tensor is expressed as below: $$(\\mathbf{A}\\times \\mathbf{B})_{ij}=A_iB_j-A_jB_i$$ I also heard that this tensor is defined in 3 or more dimensions (PDF by Patrick Guio) and it is not possible to contract it in spaces of dimensionality 4 or higher since it has $\\frac{1}{2}n(n-1)$ independent components. I have troubles with making the connections between cross-product tensor and vector owing to the fact that I do not know why such a definition for the tensor is used.\n\nFurthermore, in Wikipedia-Cross Product website cross-product vector is defined as the Hodge-dual of the bivector $\\mathbf{a} \\wedge \\mathbf{b}$ as follows: $$\\mathbf{a}\\times \\mathbf{b} = *(\\mathbf{a} \\wedge \\mathbf{b})$$ I have no idea about why this equation holds. Can someone provide a concise explanation or provide a reference where I can learn these concepts if possible?\n\n• From the title I thought I could answer. Now your question becomes my own.; – amcalde Oct 4 '14 at 13:26\n• @amcalde Thanks, I did not have formal training on tensors I learned them by myself, but there are gaps that need to be filled. – Vesnog Oct 4 '14 at 13:58\n\nYou can define a tensor agrees with the Levi-Civita symbol for orthogonal coordinate systems but that has the correct components for non-orthonormal systems.\n\nThis, and other results, can be derived in the setting of clifford algebra.\n\nClifford algebra deals with a \"quotient\" of the tensor algebra--an interesting subset, if you will, of tensors that correspond to vectors, planes, and other objects that can often be interpreted geometrically.\n\nTo facilitate this, clifford algebra introduces a \"geometric product\" of vectors, which has the following laws:\n\n1. If two vectors $a, b$ are orthogonal, then $ab = -ba$ under the product.\n2. The product of a vector with itself is a scalar, i.e. $aa = |a|^2$.\n3. The product is associative: $(ab)c = a(bc)$ for all vectors $a, b, c$.\n4. The product is distributive over addition: $a(b+c) = ab + ac$.\n\nFrom this definition, we can build up various objects that are not vectors but are produced from products of vectors under the geometric product.\n\nWith the geometric product in place, consider two vectors $a, b$, and write $b = b_\\parallel + b_\\perp$, the parts of $b$ parallel and perpendicular to $a$, respectively. Now then, we can write the product $ab$ as\n\n$$ab = a b_\\parallel + a b_\\perp$$\n\nThe first term, $a b_\\parallel$ is a scalar: $b\\parallel = \\alpha a$ for some scalar $\\alpha$, and $aa = |a|^2$, a scalar, under rule 2.\n\nThe second term cannot be reduced, but we know from rule 1 that it anticommutes: $a b_\\perp = -b_\\perp a$. This is just like the cross product.\n\nIndeed, if you write out this product with components, you get the following:\n\n$$ab = (a^1 b^1 + a^2 b^2 + a^3 b^2) + (a^1 b^2 - a^2 b^1) e_1 e_2 + \\ldots = a \\cdot b + \\frac{1}{2} a^i b^j e_i e_j$$\n\n(summation implied). The latter term is called a bivector and is traditionally denoted $a \\wedge b$.\n\nYou might have noticed now that we have at least three different kinds of objects: vectors, scalars, and bivectors. In clifford algebra, we number these objects by the number of vectors needed to form them, and we call this number the grade of the object. Scalars are grade-0, vectors grade-1, and bivectors grade-2. In 3d, you can also form a grade-3 object, a trivector. One choice might be $\\epsilon = e_1 e_2 e_3$.\n\nNow, what happens when you multiply a bivector with $\\epsilon$?\n\nFirst, the result must be a vector. Each bivector can be written as a linear combination of $e_1 e_2, e_2 e_3, e_3 e_1$, and $\\epsilon$ has all of those in it. You can see that $e_1 e_2 \\epsilon = e_1 e_2 e_1 e_2 e_3 = -e_3$ (use rule 1 for anticommuting swaps and rule 2 for same vectors to annihilate). The same holds for all other terms.\n\nBy convention, then, we can define a product\n\n$$a \\times b = -\\epsilon (a \\wedge b)$$\n\nThis coincides with the usual definition of the cross product. You can verify this term by term of you like; it's not that interesting to do algebraically, but geometrically, one comes to understand that multiplication by $\\epsilon$ produces orthogonal complements of subspaces: a vector goes to its complementary plane, a plane to its normal vector, and so on. That is why I called this 3-vector $\\epsilon$, as well: its components are those of the correct Levi-Civita tensor (not the symbol) that should have different components in nonorthonormal coordinate systems. And this is exactly what is meant in differential forms parlance when one uses the Hodge star operator.\n\nOutside of 3d, the dual of a bivector is no longer a vector (4d: a bivector has another bivector totally complementary to it), and so the cross product as we typically imagine it no longer makes sense.\n\n• Thanks for the through answer Muphrid. Apart from this can you point me to a source where I can learn all these concepts and which is not too advanced? – Vesnog Oct 4 '14 at 21:45\n• I would recommend Alan Macdonald's two books on geometric algebra and calculus. They are designed for an undergraduate audience and try to highlight how to relate the material to traditional linear algebra (esp. with matrices) and to vector calculus as it is traditionally taught. – Muphrid Oct 4 '14 at 21:46\n• Well thanks, I will check if our library has a copy of it. – Vesnog Oct 4 '14 at 22:18\n• By the way what about the star that is the Hodge dual operetion, does it have an intuitive interpretation? I mean the line $\\mathbf{a}\\times \\mathbf{b} = *(\\mathbf{a} \\wedge \\mathbf{b})$ in my OP. In your case you depicted this a bit differently. – Vesnog Oct 4 '14 at 22:24\n• Yeah, differential forms people use $\\star$, and clifford algebra people just multiply by $\\epsilon$ (or $-\\epsilon$, depending on the particular case--the star is somewhat inconsistent in this respect). They're two different notations for the same thing, but the geometric interpretation is always that, if a $k$-vector corresponds to a subspace, then its dual (found by the star, or by multiplication with $\\epsilon$) is that subspace's orthogonal complement--it's perpendicular to the original. – Muphrid Oct 4 '14 at 23:13\n\nTo answer the first part of your question: The first equation you have is incorrect as written for the following reason: the cross product $A \\times B$ is a vector independent of any basis. On the right hand side, you have (in einstein summation convention) the components of this cross product in a cartesian basis. To set the equation right, you'll have to introduce the cartesian basis vector on the right hand side, $$A \\times B = \\epsilon_{ijk} A_j B_k \\hat{e}_i$$ where $\\hat{e}_i$ is a cartesian basis vector. It is obvious that this expression is valid only for a cartesian basis.\n\n• Thanks for the answer, I think this is due to the implied summation convention and what I wrote is only the component. – Vesnog Oct 4 '14 at 21:41\n\nLevi-Civita tensor is okay for any coordinate system, including curvilinear ones.\n\nFor any three coordinates $$q^i$$ ($$i = 1,2,3$$), the location vector can be uniquely represented as function of these coordinates $$\\boldsymbol{r}=\\boldsymbol{r}(q^i)$$. Then basis (“tangent”) vectors are $$\\boldsymbol{r}_i \\equiv \\partial_i \\boldsymbol{r}$$ (in Leibnitz’s notation $$\\partial_i \\equiv \\frac{\\partial}{\\partial q^i}$$, $$\\boldsymbol{r}_i = \\frac{\\partial}{\\partial q^i} \\boldsymbol{r} = \\frac{\\partial \\boldsymbol{r}}{\\partial q^i}$$). Then cobasis (dual basis, “cotangent” basis) vectors $$\\boldsymbol{r}^i$$ can be found using fundamental property of cobasis $$\\boldsymbol{E} = (\\sum_i)\\, \\boldsymbol{r}^i \\boldsymbol{r}_i = (\\sum_i)\\, \\boldsymbol{r}_i \\boldsymbol{r}^i \\,\\Leftrightarrow\\: \\boldsymbol{r}^i \\cdot \\boldsymbol{r}_j = \\delta^i_j$$, where $$\\boldsymbol{E}$$ is the bivalent “unit” tensor which is neutral to dot product operation (another names for this the same thing are “metric” tensor and “identity” tensor), and $$\\delta^i_j$$ is Kronecker’s delta.\n\nHere comes the trivalent Levi-Civita (“volumetric”, “trimetric”) tensor:\n\n$${^3\\!\\boldsymbol{\\epsilon}} = (\\sum_{i,j,k})\\, \\boldsymbol{r}_i \\times \\boldsymbol{r}_j \\cdot \\boldsymbol{r}_k \\; \\boldsymbol{r}^i \\boldsymbol{r}^j \\boldsymbol{r}^k = (\\sum_{i,j,k})\\, \\boldsymbol{r}^i \\times \\boldsymbol{r}^j \\cdot \\boldsymbol{r}^k \\; \\boldsymbol{r}_i \\boldsymbol{r}_j \\boldsymbol{r}_k$$\n\nwith its components $$\\boldsymbol{r}_i \\times \\boldsymbol{r}_j \\cdot \\boldsymbol{r}_k \\equiv \\epsilon_{ijk}$$ or $$\\boldsymbol{r}^i \\times \\boldsymbol{r}^j \\cdot \\boldsymbol{r}^k \\equiv \\epsilon^{ijk}$$.\n\nThen for some two vectors $$\\boldsymbol{a} = (\\sum_i)\\, a_{i} \\boldsymbol{r}^i = (\\sum_i)\\, a^{i} \\boldsymbol{r}_i$$ and $$\\boldsymbol{b} = (\\sum_i)\\, b_{i} \\boldsymbol{r}^i = (\\sum_i)\\, b^{i} \\boldsymbol{r}_i$$\n\n$$\\boldsymbol{a} \\times \\boldsymbol{b} = (\\sum_i)\\, a_i \\boldsymbol{r}^i \\times (\\sum_j)\\, b_j \\boldsymbol{r}^j = (\\sum_{i,j})\\, a_i b_j \\: \\boldsymbol{r}^i \\times \\boldsymbol{r}^j,$$\n\nwhere cross product of cobasis vectors (as well as cross product of basis vectors, if you prefer to use another decomposition with second set of components) comes from definition of components of Levi-Civita tensor:\n\n$$\\boldsymbol{r}^i \\times \\boldsymbol{r}^j \\cdot \\boldsymbol{r}^k = \\epsilon^{ijk} \\,\\Leftrightarrow\\: (\\sum_k)\\, \\boldsymbol{r}^i \\times \\boldsymbol{r}^j \\cdot \\boldsymbol{r}^k \\boldsymbol{r}_k = (\\sum_k)\\, \\epsilon^{ijk} \\boldsymbol{r}_k \\,\\Leftrightarrow\\: \\boldsymbol{r}^i \\times \\boldsymbol{r}^j \\cdot \\boldsymbol{E} = (\\sum_k)\\, \\epsilon^{ijk} \\boldsymbol{r}_k$$ and finally $$\\boldsymbol{r}^i \\times \\boldsymbol{r}^j = (\\sum_k)\\, \\epsilon^{ijk} \\boldsymbol{r}_k$$.\n\nThus\n\n$$\\boldsymbol{a} \\times \\boldsymbol{b} = (\\sum_{i,j,k})\\, a_i b_j \\, \\epsilon^{ijk} \\, \\boldsymbol{r}_k = \\boldsymbol{b} \\boldsymbol{a} \\cdot \\! \\cdot \\, {^3\\!\\boldsymbol{\\epsilon}} = - \\boldsymbol{a} \\boldsymbol{b} \\cdot \\! \\cdot \\, {^3\\!\\boldsymbol{\\epsilon}}$$"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.90650153,"math_prob":0.9985435,"size":3598,"snap":"2019-43-2019-47","text_gpt3_token_len":979,"char_repetition_ratio":0.12381747,"word_repetition_ratio":0.0,"special_character_ratio":0.26792663,"punctuation_ratio":0.12482853,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99988925,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T03:20:29Z\",\"WARC-Record-ID\":\"<urn:uuid:13029ba1-324d-4a19-ad07-b52801157443>\",\"Content-Length\":\"166567\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db23b2e0-019d-4f35-bcf7-90be74b26a3d>\",\"WARC-Concurrent-To\":\"<urn:uuid:3c6d148c-044f-4d13-a1a9-ba00afb1cec5>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/957933/question-about-cross-product-and-tensor-notation/3222006\",\"WARC-Payload-Digest\":\"sha1:QJ5OJMZLTNSXYOUM2JHT7RKCXYCLU53D\",\"WARC-Block-Digest\":\"sha1:5ZCJK3HINH4A5ZQFBFRH3FX67XGREUOM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987751039.81_warc_CC-MAIN-20191021020335-20191021043835-00014.warc.gz\"}"} |
http://tasks.illustrativemathematics.org/content-standards/NF/5/B/6/tasks/296 | [
"# Half of a Recipe\n\nAlignments to Content Standards: 5.NF.B.6\n\nKendra is making $\\frac{1}{2}$ of a recipe. The full recipe calls for $3\\frac{1}{4}$ cup of flour. How many cups of flour should Kendra use?\n\n## IM Commentary\n\nThis is the third problem in a series of three tasks involving fraction multiplication that can be solved with pictures or number lines. The first, 5.NF Running to school, does not require that the unit fractions that comprise $\\frac{3}{4}$ be subdivided in order to find $\\frac{1}{3}$ of $\\frac{3}{4}$. The second task, 5.NF Drinking Juice, does require students to subdivide the unit fractions that comprise $\\frac{1}{2}$ in order to find $\\frac{3}{4}$ of $\\frac{1}{2}$. This task also requires subdivision and involves multiplying a fraction and a mixed number.\n\nNote that the context here involves volume. While the picture can be drawn to look something like measuring cups, the number line representation is more abstract relative to the context. This helps transition students to a more abstract understanding of fraction multiplication.\n\n## Solutions\n\nSolution: Drawing a Picture\n\nThe first picture represents $3\\frac{1}{4}$ cup of flour.",
null,
"The second picture represents $\\frac{1}{2}$ of $3\\frac{1}{4}$ cup of flour.",
null,
"Kendra should use $\\frac{1}{2}$ + $\\frac{1}{2}$ + $\\frac{1}{2}$ + $\\frac{1}{8}$ = $\\frac{13}{8}$ = $1\\frac{5}{8}$ cup of flour.\n\nSolution: Using a Number Line\n\nFirst plot a point at $3\\frac{1}{4}$ to represent the amount of flour in the whole recipe.",
null,
"If we mark half way between 0 and $3\\frac{1}{4}$, we see the point is half-way between $1\\frac{1}{2}$ and $1\\frac{3}{4}$.",
null,
"We can identify what point this is by putting tick marks between each of the tick marks representing fourths.\n\nThis way we can see that $1\\frac{5}{8}$ is half-way between 0 and $1\\frac{3}{4}$. So Kendra should $1\\frac{5}{8}$ cup of flour.\n\nSolution: Computing a Product\n\nSince we want to know what $\\frac{1}{2}$ of $3\\frac{1}{4}$ is, we can multiply: \\begin{equation} \\frac{1}{2} \\times 3\\frac{1}{4} = \\frac{1}{2} \\times \\frac{13}{4} = \\frac{1 \\times 13}{2 \\times 4} = \\frac{13}{8} = 1\\frac{5}{8} \\end{equation} Kendra should put $1\\frac{5}{8}$ cups of flour.\n\nSolution: Use the Distributive Property\n\nA student could solve by multiplying $\\frac{1}{2}$ by $3 \\frac{1}{4}$, using the distributive property to avoid rewriting $3 \\frac{1}{4}$ as an improper fraction.\n\n\\begin{align*} \\frac{1}{2} \\left( 3 + \\frac{1}{4} \\right) & = \\frac{1}{2} \\times 3 + \\frac{1}{2} \\times \\frac{1}{4}\\ & = \\frac{3}{2} + \\frac {1}{8} \\ & = 1 \\frac{1}{2} + \\frac {1}{8} \\ & = 1 \\frac{4}{8} + \\frac{1}{8}\\ & = 1 \\frac{5}{8} \\end{align*}\n\nThis strategy should be compared side-by-side with the \"Draw a Picture\" solution above. Taken together, these solutions represent essentially the same strategy represented pictorially and numerically.\n\nSolution: Use Decimal Numbers\n\nIf the student recognizes that this problem is calling for a product of fractions, he or she could approach this problem by converting to decimals and finding the product of $0.5$ and $3.25$. Now\n\n$$3.25 \\times 0.5 = 1.625$$\n\nSo that Kendra needs $1.625 = 1 \\frac{5}{8}$ cups of flour."
]
| [
null,
"http://s3.amazonaws.com/illustrativemathematics/images/000/002/914/large/1_b163f38c32fddda3842cf11b1fd3c908.jpg",
null,
"http://s3.amazonaws.com/illustrativemathematics/images/000/002/915/large/2_05678958da9e5c291c15010052944292.jpg",
null,
"http://s3.amazonaws.com/illustrativemathematics/images/000/002/916/large/3_c4c55459befed79d584592d1981587f0.jpg",
null,
"http://s3.amazonaws.com/illustrativemathematics/images/000/002/917/large/4_aaf442c2e269c65177e60b6ce8a3b1c0.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.82805896,"math_prob":0.99996054,"size":3245,"snap":"2022-40-2023-06","text_gpt3_token_len":988,"char_repetition_ratio":0.17834002,"word_repetition_ratio":0.1010101,"special_character_ratio":0.33898306,"punctuation_ratio":0.0777439,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999137,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,6,null,6,null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-01T17:53:59Z\",\"WARC-Record-ID\":\"<urn:uuid:7eb0a8ee-43e3-47b2-8107-832413c88e94>\",\"Content-Length\":\"27842\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:54b9c4e3-9a34-4fb4-9490-e0997a7fc793>\",\"WARC-Concurrent-To\":\"<urn:uuid:fbfacae8-7c8b-4116-945d-91722a324544>\",\"WARC-IP-Address\":\"54.243.152.249\",\"WARC-Target-URI\":\"http://tasks.illustrativemathematics.org/content-standards/NF/5/B/6/tasks/296\",\"WARC-Payload-Digest\":\"sha1:MBLPLQGWFK5QRUTSI4ZUPK3S2FXCZM5O\",\"WARC-Block-Digest\":\"sha1:W66RET3TCY4Q3QJHNDXTENPJOUXKMFVJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030336880.89_warc_CC-MAIN-20221001163826-20221001193826-00248.warc.gz\"}"} |
https://byjus.com/question-answer/a-caprolactum-xrightarrow-hydrolysis-a-b-a-glycine-rightarrow-b-polymer-here-b-is-i/ | [
"",
null,
"",
null,
"Question\n\na. $$Caprolactum\\: \\xrightarrow {hydrolysis} A$$b. $$A + Glycine \\rightarrow B (Polymer)$$Here, $$B$$ is :I. Condensation PolymerII. Addition PolymerIII. Polyamide PolymerIV. Polyester PolymerV. Biodegradable Polymer\n\nA\nI, III, V are corect.",
null,
"",
null,
"B\nII, IV are correct.",
null,
"",
null,
"C\nII, III, IV are correct.",
null,
"",
null,
"D\nII, IV, V are correct.",
null,
"",
null,
"Solution\n\nThe correct option is B I, III, V are corect.The polymer B which is obtained is Dextron. It is condensation polymer of Glycine and Caprolactum as shown in diagram. It is polyamide polymer since it contains [CO-NH] linkage and it is biodegradable.Hence,option A is correct.",
null,
"Chemistry\n\nSuggest Corrections",
null,
"",
null,
"0",
null,
"",
null,
"Similar questions\nView More",
null,
"",
null,
"People also searched for\nView More",
null,
""
]
| [
null,
"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDQiIGhlaWdodD0iNDQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjAiIGhlaWdodD0iMjAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjAiIGhlaWdodD0iMjAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjAiIGhlaWdodD0iMjAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjAiIGhlaWdodD0iMjAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://search-static.byjusweb.com/question-images/toppr_ext/questions/74987_7568_ans_6e5af78e23024971a42f65c610dc0cc4.png",
null,
"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjQiIGhlaWdodD0iMjQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDAiIGhlaWdodD0iNDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDAiIGhlaWdodD0iNDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.86211234,"math_prob":0.8584011,"size":486,"snap":"2022-05-2022-21","text_gpt3_token_len":137,"char_repetition_ratio":0.19502075,"word_repetition_ratio":0.024390243,"special_character_ratio":0.23868313,"punctuation_ratio":0.21818182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96112293,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-26T21:26:03Z\",\"WARC-Record-ID\":\"<urn:uuid:0c6d82c2-7a0a-47d6-9e21-6aaeb52991e6>\",\"Content-Length\":\"202844\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ffa0beef-db9e-413a-96cc-71966eba2197>\",\"WARC-Concurrent-To\":\"<urn:uuid:19076525-3444-4f0d-baf3-032528c4b03d>\",\"WARC-IP-Address\":\"162.159.130.41\",\"WARC-Target-URI\":\"https://byjus.com/question-answer/a-caprolactum-xrightarrow-hydrolysis-a-b-a-glycine-rightarrow-b-polymer-here-b-is-i/\",\"WARC-Payload-Digest\":\"sha1:NIXRQEOWIDN6ZDIKF676AOLGIDVSWL66\",\"WARC-Block-Digest\":\"sha1:TA5NUYWFINXA6XQAXIAS5PUDBMLEGPL7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304961.89_warc_CC-MAIN-20220126192506-20220126222506-00712.warc.gz\"}"} |
http://www.ijmlc.org/index.php?m=content&c=index&a=show&catid=97&id=964 | [
"###### Home > Archive > 2019 > Volume 9 Number 4 (Aug. 2019) >\nIJMLC 2019 Vol.9(4): 506-512 ISSN: 2010-3700\nDOI: 10.18178/ijmlc.2019.9.4.833\n\n## Monotonic Estimation for Probability Distribution and Multivariate Risk Scales by Constrained Minimum Generalized Cross-Entropy\n\nBill Huajian Yang\n\nAbstract—Minimum cross-entropy estimation is an extension to the maximum likelihood estimation for multinomial probabilities. Given a probability distribution {ri}ki=1, we show in this paper that the monotonic estimates {pi}ki=1 for the probability distribution by minimum cross-entropy are each given by the simple average of the given distribution values over some consecutive indexes. Results extend to the monotonic estimation for multivariate outcomes by generalized cross-entropy. These estimates are the exact solution for the corresponding constrained optimization and coincide with the monotonic estimates by least squares. A non-parametric algorithm for the exact solution is proposed. The algorithm is compared to the “pool adjacent violators” algorithm in least squares case for the isotonic regression problem. Applications to monotonic estimation of migration matrices and risk scales for multivariate outcomes are discussed.\n\nIndex Terms—Maximum likelihood, cross-entropy, least squares, isotonic regression, constrained optimization, multivariate risk scales.\n\nBill Huajian Yang is with Royal Bank of Canada, Canada (e-mail: [email protected]).\n\n[PDF]\n\nCite: Bill Huajian Yang, \"Monotonic Estimation for Probability Distribution and Multivariate Risk Scales by Constrained Minimum Generalized Cross-Entropy,\" International Journal of Machine Learning and Computing vol. 9, no. 4, pp. 506-512, 2019.\n\nCopyright © 2019 by the authors. This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).\n\n#### General Information\n\n• ISSN: 2010-3700 (Online)\n• Abbreviated Title: Int. J. Mach. Learn. Comput.\n• Frequency: Bimonthly\n• DOI: 10.18178/IJMLC\n• Editor-in-Chief: Dr. Lin Huang\n• Executive Editor: Ms. Cherry L. Chen\n• Abstracing/Indexing: Inspec (IET), Google Scholar, Crossref, ProQuest, Electronic Journals Library.\n• E-mail: [email protected]"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.83260274,"math_prob":0.63311,"size":1659,"snap":"2020-45-2020-50","text_gpt3_token_len":325,"char_repetition_ratio":0.13111782,"word_repetition_ratio":0.0,"special_character_ratio":0.18324292,"punctuation_ratio":0.124542125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9667339,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T09:03:58Z\",\"WARC-Record-ID\":\"<urn:uuid:2c9880b9-8ed3-447f-86f7-81a98662ca3b>\",\"Content-Length\":\"12963\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c477ee1f-639b-4f09-b12c-cc053dcf28ad>\",\"WARC-Concurrent-To\":\"<urn:uuid:af41755a-d8c6-47d0-8d2c-b88aa06371ca>\",\"WARC-IP-Address\":\"139.162.103.195\",\"WARC-Target-URI\":\"http://www.ijmlc.org/index.php?m=content&c=index&a=show&catid=97&id=964\",\"WARC-Payload-Digest\":\"sha1:DZLAY27I2BFBHW4RJIJIGK25PVXNOS7W\",\"WARC-Block-Digest\":\"sha1:W4Z3USZAWWTIKHGWU7JO7XNZ4ME25ZRE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141191511.46_warc_CC-MAIN-20201127073750-20201127103750-00016.warc.gz\"}"} |
https://git.vuxu.org/mirror/zsh/tree/Doc/Zsh/arith.yo?h=zsh-3.1.5-pws-22&id=a2159285e80508bb682d90a71270fbddada8bd05 | [
"aboutsummaryrefslogtreecommitdiff log msg author committer range\npath: root/Doc/Zsh/arith.yo\nblob: f22b3579454e653c37627c13ac36078e3b0ec941 (plain) (blame)\n ```1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 ``` ``````texinode(Arithmetic Evaluation)(Conditional Expressions)(Jobs & Signals)(Top) chapter(Arithmetic Evaluation) ifzman(\\ sect(Arithmetic Evaluation) )\\ cindex(arithmetic evaluation) cindex(evaluation, arithmetic) findex(let, use of) The shell can perform integer arithmetic, either using the builtin tt(let), or via a substitution of the form tt(\\$((...))). Usually arithmetic is performed with em(long) integers; however, on certain systems where a em(long) has 4-byte precision, zsh may be compiled to use 8-byte precision instead. This can be tested, for example, by giving the command `tt(print - \\$(( 12345678901 )))'; if the number appears unchanged, the precision is at least 8 bytes. The tt(let) builtin command takes arithmetic expressions as arguments; each is evaluated separately. Since many of the arithmetic operators, as well as spaces, require quoting, an alternative form is provided: for any command which begins with a `tt(LPAR()LPAR())', all the characters until a matching `tt(RPAR()RPAR())' are treated as a quoted expression and arithmetic expansion performed as for an argument of tt(let). More precisely, `tt(LPAR()LPAR())var(...)tt(RPAR()RPAR())' is equivalent to `tt(let \")var(...)tt(\")'. For example, the following statement example((( val = 2 + 1 ))) is equivalent to example(let \"val = 2 + 1\") both assigning the value 3 to the shell variable tt(foo) and returning a zero status. cindex(bases, in arithmetic) Numbers can be in bases other than 10. A leading `tt(0x)' or `tt(0X)' denotes hexadecimal. Numbers may also be of the form `var(base)tt(#)var(n)', where var(base) is a decimal number between two and thirty-six representing the arithmetic base and var(n) is a number in that base (for example, `tt(16#ff)' is 255 in hexadecimal). The var(base)tt(#) may also be omitted, in which case base 10 is used. For backwards compatibility the form `tt([)var(base)tt(])var(n)' is also accepted. cindex(arithmetic operators) cindex(operators, arithmetic) An arithmetic expression uses nearly the same syntax, precedence, and associativity of expressions in C. The following operators are supported (listed in decreasing order of precedence): startsitem() sitem(tt(PLUS() - ! ~ PLUS()PLUS() --))(unary plus/minus, logical NOT, complement, {pre,post}{in,de}crement) sitem(tt(<< >>))(bitwise shift left, right) sitem(tt(&))(bitwise AND) sitem(tt(^))(bitwise XOR) sitem(tt(|))(bitwise OR) sitem(tt(**))(exponentiation) sitem(tt(* / %))(multiplication, division, modulus (remainder)) sitem(tt(PLUS() -))(addition, subtraction) sitem(tt(< > <= >=))(comparison) sitem(tt(== !=))(equality and inequality) sitem(tt(&&))(logical AND) sitem(tt(|| ^^))(logical OR, XOR) sitem(tt(? :))(ternary operator) sitem(tt(= PLUS()= -= *= /= %= &= ^= |= <<= >>= &&= ||= ^^= **=))(assignment) sitem(tt(,))(comma operator) endsitem() The operators `tt(&&)', `tt(||)', `tt(&&=)', and `tt(||=)' are short-circuiting, and only one of the latter two expressions in a ternary operator is evaluated. Note the precedence of the bitwise AND, OR, and XOR operators. An expression of the form `tt(#\\)var(x)' where var(x) is any character gives the ascii value of this character and an expression of the form `tt(#)var(foo)' gives the ascii value of the first character of the value of the parameter var(foo). Note that this is different from the expression `tt(\\$#)var(foo)', a standard parameter substitution which gives the length of the parameter var(foo). Named parameters and subscripted arrays can be referenced by name within an arithmetic expression without using the parameter expansion syntax. For example, example((((val2 = val1 * 2)))) assigns twice the value of tt(\\$val1) to the parameter named tt(val2). An internal integer representation of a named parameter can be specified with the tt(integer) builtin. cindex(parameters, integer) cindex(integer parameters) findex(integer, use of) Arithmetic evaluation is performed on the value of each assignment to a named parameter declared integer in this manner. ``````"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6350017,"math_prob":0.99511206,"size":4292,"snap":"2022-40-2023-06","text_gpt3_token_len":1213,"char_repetition_ratio":0.13922575,"word_repetition_ratio":0.0031298904,"special_character_ratio":0.31849954,"punctuation_ratio":0.1013597,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9956195,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T04:40:15Z\",\"WARC-Record-ID\":\"<urn:uuid:a33988a2-04f2-4fdf-bd69-2c41efa22072>\",\"Content-Length\":\"12286\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:de70707a-af57-42ab-a88b-bfd652cdc535>\",\"WARC-Concurrent-To\":\"<urn:uuid:d77f0bc3-8f9a-4eda-a0e6-57b016979e53>\",\"WARC-IP-Address\":\"168.119.90.161\",\"WARC-Target-URI\":\"https://git.vuxu.org/mirror/zsh/tree/Doc/Zsh/arith.yo?h=zsh-3.1.5-pws-22&id=a2159285e80508bb682d90a71270fbddada8bd05\",\"WARC-Payload-Digest\":\"sha1:ITAZFQNDL5UBW5IVWDNW3RQB53CD2T4W\",\"WARC-Block-Digest\":\"sha1:IOFJTPOOB5ZEMNVKR5QAOOWP26MASLAQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337537.25_warc_CC-MAIN-20221005042446-20221005072446-00673.warc.gz\"}"} |
https://docs.scipy.org/doc/numpy-1.14.2/reference/generated/numpy.matrix.ptp.html | [
"# numpy.matrix.ptp¶\n\n`matrix.``ptp`(axis=None, out=None)[source]\n\nPeak-to-peak (maximum - minimum) value along the given axis.\n\nRefer to `numpy.ptp` for full documentation.\n\nNotes\n\nSame as `ndarray.ptp`, except, where that would return an `ndarray` object, this returns a `matrix` object.\n\nExamples\n\n```>>> x = np.matrix(np.arange(12).reshape((3,4))); x\nmatrix([[ 0, 1, 2, 3],\n[ 4, 5, 6, 7],\n[ 8, 9, 10, 11]])\n>>> x.ptp()\n11\n>>> x.ptp(0)\nmatrix([[8, 8, 8, 8]])\n>>> x.ptp(1)\nmatrix([,\n,\n])\n```\n\n#### Previous topic\n\nnumpy.matrix.prod\n\nnumpy.matrix.put"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.52916974,"math_prob":0.9882681,"size":484,"snap":"2019-43-2019-47","text_gpt3_token_len":184,"char_repetition_ratio":0.13541667,"word_repetition_ratio":0.0,"special_character_ratio":0.41528925,"punctuation_ratio":0.29268292,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9968564,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-14T17:11:39Z\",\"WARC-Record-ID\":\"<urn:uuid:2ab4deb5-9c37-47db-a4f5-a063d24f2db0>\",\"Content-Length\":\"8110\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:afbb6e46-acf8-4022-abbe-08f20139a122>\",\"WARC-Concurrent-To\":\"<urn:uuid:e9a7156f-678b-4954-8a9a-35d5e68853d9>\",\"WARC-IP-Address\":\"50.17.248.72\",\"WARC-Target-URI\":\"https://docs.scipy.org/doc/numpy-1.14.2/reference/generated/numpy.matrix.ptp.html\",\"WARC-Payload-Digest\":\"sha1:N2EPDEJJVD23RMBVRCBYABELA6RPOAH5\",\"WARC-Block-Digest\":\"sha1:6RQZZBV7RPH27Y7AA72RNQIBZPQIVKVX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668529.43_warc_CC-MAIN-20191114154802-20191114182802-00283.warc.gz\"}"} |
https://www.jiskha.com/questions/14999/thanks-for-any-help-four-20-ohm-resistors-are-connected-in-parallel-and-the-combination | [
"# Physics\n\nThanks for any help :)\n\nFour 20 ohm resistors are connected in parallel and the combination is connected to a 20V emf device. The current is:\nA) 0.25A\nB) 1.0 A\nC) 4.0 A\nD) 5.0 A\nE) 100 E\n\nParallel circuits:\nI know that the voltage is 20V and the Resistance is 20 V. So, I divided I=V/R\nI= (20/20)=1 The current for a parallel circuit is found by I= I + I+ I+I\nThe current is 1 so therefore I total would be 4 or C.\n\nFour 20 ohm resistors are connected in series and the combination is connected to a 20V emf device. The potential difference across any one of the resistors is:\nA) 1 V\nB) 4 V\nC) 5 V\nD) 20V\nE) 80 V\n\nSo there are four resistors that are 20 ohms. Their connected to a 20 V device. The question wants to known volts. Its says for a parallel circuit. I just dividied 20V/4 and got 5V.Is that ok to think of it that way?\n\nThank you\n\ncorrect on the first. However, if the second circuit is parallel, each resistor has 20V across it.\n\n1. 👍\n2. 👎\n3. 👁\n1. A potential difference of 2.0 V is applied across a wire of cross sectional area 2.5 mm^2. The current which passes through the wire is 3.2 × 10-3 A. What is the resistance of the wire?\n\n1. 👍\n2. 👎\n\n## Similar Questions\n\n1. ### Physics\n\nDetermine the total resistance of each of the following parallel circuits. A. A parallel circuit with a 20-ohm resistor and a 10-ohm resistor. B. A parallel circuit with two 20-ohm resistors and a 10-ohm resistor. C. A parallel\n\n2. ### physics\n\nA 30 ohm resistor is connected in parallel with a variable resistance R. The parallel combination is then connected in series with a 6 ohm resistor and connected across a 120 V source. Find the minimum value of R if the power\n\n3. ### physics\n\nTwo resistors, one 12Ohms and the other 18 Ohms, are connected in parallel. What is the equivalent resistance of the parallel combination?\n\n4. ### physics\n\na 20 ohm and 60 ohm resistors are connected in series to a DC generator. The voltage across the 20 resistor is 80 volts. The current through the 60 ohm resistor? A) 1.0A B) 5.0 A C) is about 1.3 A D)4.0 A E) cannot be calculated\n\n1. ### physics (please check)\n\nA total resistance of 3 ohms is to be producd by combining an unknown resistor R with a 12 ohm resistor. What is the value of R and how is it connected to the 12 ohm resistor a) 4.0 ohm parallel b) 4.0 ohm in series c) 2.4 ohm in\n\n2. ### Science\n\nThe combined resistance of two identical resistors connected in series is 8 ohm. Their combined resistance in a parallel arrangement will be?\n\n3. ### physics\n\na 100 V DC signal is applied to four resistors as shown in Fig. 5. The values of the resistors are 20 ohm, 40 ohm, 60 ohm and 80 ohm. What is the voltage across the 40 ohm resistor?\n\n4. ### physics\n\ntwo unknown resistors a and b are connected together. when they are connected in series their combined resistance is 15 ohm. When they are connected in parallel, their combined resistance is 3.3 ohm. What are the resistances of A\n\n1. ### physics\n\nthree resistors of 2ohm, 3ohm and 4 ohm are connected in (a) series (b) parallel. find the equivalent resistance in each case\n\n2. ### Physics\n\n1)Four resistors of 10.0 each are connected in parallel. A combination of four resistors of 10.0 each is connected in series along with the parallel arrangement. What is the equivalent resistance of the circuit? A)80.0 B)40.4\n\n3. ### physics\n\nten identical resistors connected in parallel have an equivalent resistance of 2 ohm.when they are connected in series what will be its effective resistance?\n\n4. ### Physics\n\nTwo resistors have resistances R1 and R2. When the resistors are connected in series to a 12.4-V battery, the current from the battery is 2.13 A. When the resistors are connected in parallel to the battery, the total current from"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.91199064,"math_prob":0.9475084,"size":2612,"snap":"2021-04-2021-17","text_gpt3_token_len":705,"char_repetition_ratio":0.23351227,"word_repetition_ratio":0.038135592,"special_character_ratio":0.25267994,"punctuation_ratio":0.10169491,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9981135,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-17T05:52:33Z\",\"WARC-Record-ID\":\"<urn:uuid:b6775060-3821-4719-90df-f1f23f2a9ca4>\",\"Content-Length\":\"19642\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99d47b4e-638c-42f4-a9eb-50bf3ce1ad15>\",\"WARC-Concurrent-To\":\"<urn:uuid:8400a14c-532c-4e2b-af3d-b3bcdafc9c75>\",\"WARC-IP-Address\":\"66.228.55.50\",\"WARC-Target-URI\":\"https://www.jiskha.com/questions/14999/thanks-for-any-help-four-20-ohm-resistors-are-connected-in-parallel-and-the-combination\",\"WARC-Payload-Digest\":\"sha1:U4QJJQFYZRUPFK7JZCCXOBPT6ZJ642NT\",\"WARC-Block-Digest\":\"sha1:V3SYJSSXFLMVPR77AORKMVQXHJWXWJEC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038101485.44_warc_CC-MAIN-20210417041730-20210417071730-00279.warc.gz\"}"} |
https://tex.stackexchange.com/questions/238209/general-guide-on-how-to-set-up-minion-pro-for-math-in-lualatex | [
"# General guide on how to set up Minion Pro for math in lualatex\n\n## Minion Pro\n\nis one of the most beautiful fonts and comes with many Adobe products. A complementary set of symbols is provided by the MnSymbol fonts and package, which is not fully compatible to lualatex.\n\nHowever the best option would be Minion Math, which is commercial, with hardly any free alternative. But...\n\n## I'd like to use Minion Pro also for math in lualatex - how can I do that?\n\nThis answer already gives a good starting point and this one provides extensive explanations around this issue. There are various more questions about this topic, but I haven't found one giving a complete solution.\n\nUsing the linked starting point, one already gets all letters of Minion Pro in the math environment together with mathematical symbols provided by MnSymbol:\n\n\\documentclass[a4paper]{article}\n\\usepackage{amsmath,amssymb,mathrsfs}\n\\usepackage{fontspec}\n\\usepackage{unicode-math}\n\n\\setmainfont[Numbers = OldStyle,Ligatures = TeX,SmallCapsFeatures = {Renderer=Basic}]{Minion Pro}\n\\setmathfont{MnSymbol}\n\\setmathfont[range=\\mathup/{num,latin,Latin,greek,Greek}]{Minion Pro}\n\\setmathfont[range=\\mathbfup/{num,latin,Latin,greek,Greek}]{MinionPro-Bold}\n\\setmathfont[range=\\mathit/{num,latin,Latin,greek,Greek}]{MinionPro-It}\n\\setmathfont[range=\\mathbfit/{num,latin,Latin,greek,Greek}]{MinionPro-BoldIt}\n\\setmathrm{Minion Pro}\n\n\\begin{document}\n\n\\begin{equation}\n\\mathscr{D}^{\\gamma} f(t)= \\mathscr{D}^{m} \\mathscr{I}^{m-\\gamma}\nf(t)=\\frac{\\partial^m }{\\partial t^m} \\Bigg[ \\frac{1}{\\Gamma(m-\\gamma)}\n\\int\\limits_{0}^{t} \\frac{f(\\tau )}{(t-\\tau)^{\\gamma-m+1}} ~\\partial{\\tau} \\Bigg] \\:,\n\\end{equation}\n\n\\begin{equation}\nA =\n\\left( \\begin{array}{ccc}\na_{11} & a_{12} & a_{13} \\\\\na_{21} & a_{22} & a_{23} \\\\\na_{31} & a_{32} & a_{33}\nB =\n\\begin{pmatrix}\na_{11} & a_{12} & a_{13} \\\\\na_{21} & a_{22} & a_{23} \\\\\na_{31} & a_{32} & a_{33}\n\\end{pmatrix}\n\\end{equation}\n\n\\begin{equation}\ny = \\sqrt{\\frac{z^2}{\\ln{z}}} + z \\,\\Bigg|_{z\\,=\\,z_0}\n\\end{equation}\n\n\\end{document}",
null,
"The minimal example from the other linked answer comes out quite faulty as well.",
null,
"## There are a lot of things not working correctly:\n\n1. partial differential \\partial and the vertical line character | is missing\n2. the integral and \\sum symbols are too small\n3. all brackets are not scaling with size of its wrapped content and size specifier like \\Bigg( don't work neither\n4. brackets of matrices don't work\n5. the square root does not scale\n6. \\mathscr does not bring any difference\n7. the comma symbol is missing\n8. the greek letter epsilon is undefined in MinionPro (at least in old font versions)\n9. to be continued ...\n\nHow can I fix them?\n\n## There is an alternative approach avoiding unicode-math\n\nand use the non-unicode implementation of MnSymbol\n\n\\documentclass[a4paper]{article}\n\\usepackage{amsmath,amssymb,mathrsfs}\n\\usepackage[no-math]{fontspec}\n\\usepackage{MnSymbol}\n\n\\setmainfont[...\n\n\\begin{document}\n...\n\n\nWhich is certainly a working option, but everything looks a little mixed up and I'm not happy with the result.",
null,
"If you consider this the better approach, feel free to post an answer providing a solution, which gets everything harmonizing a little better.\n\nI'm aware that there are no real solutions unless the OpenType version of MnSymbol gets patched for use in unicode-math. This question is self-answered as I thought it would be worth sharing the (in my opinion) pleasant result, though it is just an ugly workaround.\n\nPlease feel free to provide better and simpler workarounds or even part-solutions.\n\n## The bottom line at the top.\n\nDon't try this at home kids! Using MnSymbol with unicode-math will kill your time!\n\nThe symbols provided by MnSymbol are not set up for use with unicode-math. Some are missing, some are not scalable in size. These need to be replaced by a different math font. For my opinion XITS Math does a good job. One just needs to find the unicode characters due to fix.\n\nPartial differential \\partial:\n\n\\setmathfont[range={\"2202} ]{XITS Math}\n\n\nThe integrals with an additional little tweak:\n\n\\setmathfont[range={\"222B-\"2233,\"2A0B-\"2A1C}]{XITS Math}\n\\newcommand{\\intX}{\\int\\limits_{\\mkern-15mu #1}^{#2} \\mkern-15mu}\n\n\nFor the sum symbol I personally find the Latin Modern Math symbol a better match:\n\n\\setmathfont[range={\"2211} ]{Latin Modern Math}\n\n\nThe brackets come with a quite heavy weight in XITS Math, so also used Latin Modern Math in this case:\n\n\\setmathfont[range={\"005B,\"005D,\"0028,\"0029,\"007B,\"007D} ]{Latin Modern Math}\n\n\nThe vertical line character though, is too bold there, so back to XITS Math\n\n\\setmathfont[range={\"007C} ]{XITS Math}\n\n\nFixing the root:\n\n\\setmathfont[range={\"002F,\"221A}]{XITS Math}\n\n\nand the comma:\n\n\\setmathfont[range={\"002C} ]{XITS Math}\n\n\nand finally the \\mathscr and additionally the \\mathrm characters\n\n\\setmathfont[range=\\mathscr,StylisticSet={1}]{XITS Math}\n\\setmathrm{Minion Pro}\n\n\nThe letter \\epsilon is missing, but \\varepsilon is working. As we use unicode-math the replacement with \\let needs to be done at the begin of the document:\n\n\\AtBeginDocument{%\n\\let\\phi\\varphi\n\\let\\epsilon\\varepsilon\n}\n\n\nor \\AfterEndPreamble in case there are problems with hyperref and the etoolbox package.\n\nOne further problem is the slash /, where it is a matter of taste, which font should be use for replacement.\n\n\\setmathfont[range={\"002F,\"2215}]{Latin Modern Math}\n\n\nBut the actual mistake of the author of the original MWE was, that he should have used \\mathbin{/} instead of / for a better spacing.\n\nAll these fixes together give a pleasant result:\n\n\\documentclass[a4paper]{article}\n\\usepackage{amsmath,amssymb,mathrsfs}\n\\usepackage{fontspec}\n\\usepackage{unicode-math}\n\n\\setmainfont[Numbers = OldStyle,Ligatures = TeX,SmallCapsFeatures = {Renderer=Basic}]{Minion Pro}\n\\setsansfont[Numbers={OldStyle,Proportional},Scale=MatchLowercase]{Minion Pro}\n\\setmonofont[Numbers=OldStyle,Scale=MatchLowercase]{Minion Pro}\n\n\\setmathfont{MnSymbol}\n\\setmathfont[range=\\mathup/{num,latin,Latin,greek,Greek}]{Minion Pro}\n\\setmathfont[range=\\mathbfup/{num,latin,Latin,greek,Greek}]{MinionPro-Bold}\n\\setmathfont[range=\\mathit/{num,latin,Latin,greek,Greek}]{MinionPro-It}\n\\setmathfont[range=\\mathbfit/{num,latin,Latin,greek,Greek}]{MinionPro-BoldIt}\n\\setmathfont[range=\\mathscr,StylisticSet={1}]{XITS Math}\n\\setmathfont[range={\"005B,\"005D,\"0028,\"0029,\"007B,\"007D} ]{Latin Modern Math} % brackets\n\\setmathfont[range={\"2202} ]{XITS Math} % partial\n\\setmathfont[range={\"2211} ]{Latin Modern Math} % sum\n\\setmathfont[range={\"007C} ]{XITS Math} % vertical\n\\setmathfont[range={\"221A} ]{XITS Math} % root\n\\setmathfont[range={\"222B-\"2233,\"2A0B-\"2A1C}]{XITS Math} % integrals\n\\setmathfont[range={\"002F,\"2215}]{Latin Modern Math} % /\n\\setmathfont[range={\"002C} ]{XITS Math} % ,\n\\setmathrm{Minion Pro}\n\n\\newcommand{\\intX}{\\int\\limits_{\\mkern-15mu #1}^{#2} \\mkern-15mu}\n\n\\AtBeginDocument{\\let\\epsilon\\varepsilon}\n\n\\begin{document}\n...",
null,
"Also the output produced by \\blindmathpaper looks nice:",
null,
"## The neverending story.\n\nProbably you will still find something missing. The last issue I encountered before I gave up, was the missing full stop/punctuation mark as discussed in this question and also solved with this answer. But even then commands like \\dots still did not work. Realizing that one needs to fix almost everything of MnSymbol, so actually nothing is left, I decided to use another math font and just included some certain symbols of MnSymbol I'd liked.\n\nI finally use this settings:\n\n\\setmathfont{XITS Math}\n\\setmathfont[range=\\mathup/{num,latin,Latin,greek,Greek}]{Minion Pro}\n\\setmathfont[range=\\mathbfup/{num,latin,Latin,greek,Greek}]{MinionPro-Bold}\n\\setmathfont[range=\\mathit/{num,latin,Latin,greek,Greek}]{MinionPro-It}\n\\setmathfont[range=\\mathbfit/{num,latin,Latin,greek,Greek}]{MinionPro-BoldIt}\n\\setmathfont[range=\\mathscr,StylisticSet={1}]{XITS Math}\n\\setmathfont[range={\"005B,\"005D,\"0028,\"0029,\"007B,\"007D,\"2211,\"002F,\"2215 } ]{Latin Modern Math} % brackets, sum, /\n\\setmathfont[range={\"002B,\"002D,\"003A-\"003E} ]{MnSymbol} % + - < = >\n\\setmathrm{Minion Pro}\n\n\n## The bottom line at the bottom.\n\nDon't try this at home kids! Using MnSymbol with unicode-math will kill your time!\n\nThe whole thing is so buggy, that it is not worth trying. Maybe in the future. Using another font as base and include certain symbols from MnSymbol seems appropriate though.\n\n• MnSymbol includes all necessary symbols. Using the MinionPro and MnSymbol package with pdflatex produces very good results. – sebschub Apr 12 '15 at 12:38\n• With lualatex obviously not. That what this Q&A is supposed to fix. – thewaywewalk Apr 12 '15 at 15:03\n• Well, the symbols are present in the font but the set-up is not correct. Thus, a better solution would be correct the set-up, if that's possible... – sebschub Apr 12 '15 at 18:17\n• @sebschub sure, this is discussed extensively in one of the linked answers. I'm aware that this is just a workaround. Feel free to post a different approach :) – thewaywewalk Apr 12 '15 at 18:27\n• @sebschub there is a bug in MinionPro package with minionint option and using \\complement command – juanuni Aug 1 '16 at 4:14"
]
| [
null,
"https://i.stack.imgur.com/nuF0F.png",
null,
"https://i.stack.imgur.com/NCW88.png",
null,
"https://i.stack.imgur.com/34RRU.png",
null,
"https://i.stack.imgur.com/oUyk5.png",
null,
"https://i.stack.imgur.com/fKeFg.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7815903,"math_prob":0.9262367,"size":3475,"snap":"2019-35-2019-39","text_gpt3_token_len":1005,"char_repetition_ratio":0.100547396,"word_repetition_ratio":0.05764967,"special_character_ratio":0.26244605,"punctuation_ratio":0.10277325,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9911531,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,6,null,6,null,6,null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-15T07:47:00Z\",\"WARC-Record-ID\":\"<urn:uuid:f31106aa-58b9-4b2b-8cc2-a16e5cda1d9e>\",\"Content-Length\":\"154474\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f314a77-86c6-4210-b206-b68d38c005d5>\",\"WARC-Concurrent-To\":\"<urn:uuid:eab8a190-5d4d-4fd5-8f78-8989002b42f8>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/238209/general-guide-on-how-to-set-up-minion-pro-for-math-in-lualatex\",\"WARC-Payload-Digest\":\"sha1:ZZ6XUTVK2AGI25ULB6JB33XVTWG6RWXN\",\"WARC-Block-Digest\":\"sha1:HFEHV5O6YZN5LYZ4VEROAHUQWRT6K7D3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514570830.42_warc_CC-MAIN-20190915072355-20190915094355-00015.warc.gz\"}"} |
https://webetool.com/illuminance-converter | [
"# Illuminance Converter\n\n## Calculate and Convert Illuminance units",
null,
"Illuminance Converter\n\nThe calculation of wavelengths of light travelling in a particular direction, in a given area is called luminance, and the intensity of light on a surface, in a given area, is called illuminance. We calculate Illuminance to measure the brightness, it is mostly used in photometry. we generally measure Illuminance in Lux (lx).",
null,
"This online calculator is very useful tool to convert units of Illuminance. You can convert any units between Microlux, Millilux, Lux, Kilolux, Lumen per square meter, Lumen per square centimeter, Foot candle, Phot, and Nox.\n\n## How use Illuminance Unit Converter tool?\n\nThis online free unit converter tool is very easy to use. You can convert luminance with just one click. You only have to type the valu and select the unit you want to convert, then press the Convert button. You will get you unit converted to onter units within a second.\n\n## Illuminance Convert Table:\n\n• 1 Lux (lx) = 1000000 Microlux ( μlx)\n• 1 Lux (lx) = 1000 Millilux (mlx)\n• 1 Lux (lx) = 0.001 Kilolux (klx)\n• 1 Lux (lx) = 1 Lumen per Square Meter (lm/m²)\n• 1 Lux (lx) = 0.0001 Lumen per Square Centimeter (lm/c²)\n• 1 Lux (lx) = 0.09000000009 Foot candle (fc)\n• 1 Lux (lx) = 0.0001 Phot (ph)\n• 1 Lux (lx) = 1000 Nox",
null,
""
]
| [
null,
"https://www.webetool.com/components/storage/app/public/photos/1/illuminance-converter.webp",
null,
"https://www.webetool.com/components/storage/app/public/photos/1/illuminance-vs-luminance.webp",
null,
"https://webetool.com/assets/img/cookie.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7713227,"math_prob":0.9814189,"size":1270,"snap":"2023-40-2023-50","text_gpt3_token_len":356,"char_repetition_ratio":0.18483412,"word_repetition_ratio":0.054054055,"special_character_ratio":0.28031495,"punctuation_ratio":0.11885246,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.953685,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T02:24:05Z\",\"WARC-Record-ID\":\"<urn:uuid:7c5f34a3-4fd7-4a24-ad06-cef2a3e62029>\",\"Content-Length\":\"66119\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:46112c9d-ea5e-4ffb-a71d-e77a6d8df2a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:7f70f240-d884-4d32-96d1-b4fcaeffccc9>\",\"WARC-IP-Address\":\"185.28.21.142\",\"WARC-Target-URI\":\"https://webetool.com/illuminance-converter\",\"WARC-Payload-Digest\":\"sha1:T7OFCNE3AX27OR7EE3PYFX6W4CCUYLWU\",\"WARC-Block-Digest\":\"sha1:M3VLWAIMOPQ6LYA5T72Z327F56JXUJCE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510942.97_warc_CC-MAIN-20231002001302-20231002031302-00673.warc.gz\"}"} |
https://de.mathworks.com/matlabcentral/answers/762571-how-to-get-average-of-multiple-rows-and-subtract-from-one-group-of-average-to-another | [
"# how to get average of multiple rows and subtract from one group of average to another\n\n1 view (last 30 days)\nMinions on 4 Mar 2021\nCommented: Minions on 6 Mar 2021\nHello,\nI have a big file with multiple rows and columns. I need to get the average of every 30 rows. for example from row 1 to 30, 31 to 60, 61 to 90 and so on until the end of the rows. Can anyone help me with this? any kinds of help will be really appriciated.\nShlomo Geva on 5 Mar 2021\nJust use\nfor i=1:30:floor(size(M)/30)*30\n% loop body will execute and skip 30 rows, until there is no more group of exactly 30 rows left.\n...\nend\n\nWalter Roberson on 5 Mar 2021\nM = your matrix with 10039 rows and 64 columns\nN = 30;\nwhole_blocks = floor(size(M,1)/N);\nleftover = size(M,1) - whole_blocks * N;\nlast_whole = whole_blocks * N;\nwhole_mean = reshape( mean(reshape(M(1:last_whole, N, [])), 2), whole_blocks, [] );\nextra_mean = mean(M(last_whole+1:end, :),1);\noverall_mean = [whole_mean; extra_mean];\nMinions on 6 Mar 2021\nthank you\n\nKALYAN ACHARJYA on 4 Mar 2021\nEdited: KALYAN ACHARJYA on 4 Mar 2021\nAs you mentioned, the size of the data is \"10039 columns, 64 rows\"\nEasiest Way:\ncell_data=mat2cell(data,[30,30,4]);\n%.........................^ Row data\navg_data=zeros(1,size(cell_data,1));\nfor i=1:size(cell_data,1)\navg_data(i)=mean2(cell_data{i});\nend\navg_data\nNote:\n1. If you have the different rows number & divisible by 30 or step size, you can create a 1D row array easily\nrow_data=30*ones(1,rows/30);\nAn example rows size is 300, in that case it would be\nrow_data =\n30 30 30 30 30 30 30 30 30 30\n2. As the given rows number is not divisible by 30 exactly (64 case), hence I have created rows vector [30 30 4], you can make it generalize\n3. If the all cell elements are equal size in cell data, such case you can easiliy avoid the for loop here by data sequzeeing in cat 3 dimention, afterwards apply the mean on individual planes (3rd Dimension).\nHope it Helps!\nMinions on 4 Mar 2021\nthank you for your answer, I tried to run the programming, but i can find error. Also can you please explain how did you do it. I am kinda new in matlab so difficult for me to understand. Thank you in advance\n\nShlomo Geva on 4 Mar 2021\nEdited: Walter Roberson on 5 Mar 2021\n%% load the fille into memory with\n%% loop around M, 30 rows at a time\nfor i=1:30:size(M,1)\nm = mean(M(i:i+29,:)); % compute mean of rows from row i to row i+29\n% here do something with the mean\nend\nMinions on 5 Mar 2021\nThank you for explaining it to me\n\nShlomo Geva on 5 Mar 2021\nHere is a solution to also take care of a number of rows that is not a multiple of your grouping (e.g. 30).\nI made up a matrix of random integers (between 1 and 10), having 10 rows and 2 columns.\nThe code computes the mean of the rows, in groups of 3 rows at a time.\nM = randi(10,10,2); % your matrix should go here - read it from csv\ndisp(M); % show the matrix\nN = 3; % your number of vectors in a group that you wish to average (in your case 30)\nS = size(M,1); % number of rows in M\nfor i=1:N:S\nif i<=S-N\n% here if we have N rows - safe to average\nm = mean(M(i:i+N-1,:));\nelseif S==i\n% here if only one row left - no need to average\nm = M(i,:);\nelse\n% here taking the average of left over rows (more than one).\nm = mean(M(i:end,:));\nend\ndisp(m); % replace this by whatever you wish to do with each row\nend"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6879049,"math_prob":0.9794814,"size":879,"snap":"2021-21-2021-25","text_gpt3_token_len":257,"char_repetition_ratio":0.13485715,"word_repetition_ratio":0.041666668,"special_character_ratio":0.3208191,"punctuation_ratio":0.14213198,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9903192,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-17T00:12:47Z\",\"WARC-Record-ID\":\"<urn:uuid:f1797011-a7b2-4f0c-8ac3-cb46bd9c384e>\",\"Content-Length\":\"269894\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e131134-1b04-435a-977e-6041c24befbc>\",\"WARC-Concurrent-To\":\"<urn:uuid:5995bb20-f8e0-4c81-a79a-c48054a7c6c2>\",\"WARC-IP-Address\":\"23.197.108.134\",\"WARC-Target-URI\":\"https://de.mathworks.com/matlabcentral/answers/762571-how-to-get-average-of-multiple-rows-and-subtract-from-one-group-of-average-to-another\",\"WARC-Payload-Digest\":\"sha1:2BIAFDERKDX6YEQGYWZ4CIV3XWFLSGFR\",\"WARC-Block-Digest\":\"sha1:DWYQHLPTKV5T6BMRSFOIYQSVINVWECJD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991921.61_warc_CC-MAIN-20210516232554-20210517022554-00512.warc.gz\"}"} |
https://mw-live.lojban.org/index.php?title=Proposal:_Digit_Strings_which_Represent_Continued_Fractions&oldid=122694 | [
"# Proposal: Digit Strings which Represent Continued Fractions\n\n(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)\n\nThis article is a proposed description of a means by which to express numbers in a generalized continued fraction format as represented by a string of digits. It will not discuss other notations for continued fractions. By way of analogy, the subject matter of this article would be similar to a description of the decimal system (base) and will not touch on subject matter which is similar to means of expressing numbers as summations (big operator \"$\\sum$",
null,
"\") or formal polynomials in $10$",
null,
"with coefficients in $\\mathbb {Z} \\cap [0,9]$",
null,
", even though all of these are mutually equivalent.\n\nFor the purposes of this article: All expressions are big-endian and microdigits are in traditional decimal. PEMDAS is obeyed.\n\n## Original Explanation\n\nLet $z=a_{0}+b_{0}/(a_{1}+b_{1}/(a_{2}+b_{2}/(\\dots )))=a_{0}+{\\underset {i=0}{\\overset {\\infty }{\\mathrm {K} }}}{\\big (}{\\frac {b_{i}}{a_{i+1}}}{\\big )}$",
null,
", where $a_{i}$",
null,
"and $b_{i}$",
null,
"are integers for all $i$",
null,
"(see Kettenbruch notation here). In fact, for all i, we will canonically restrict $a_{(i+1)}$",
null,
"and $b_{i}$",
null,
"to nonnegative integers such that if $b_{j}=0$",
null,
", then $a_{k}=1$",
null,
"and $b_{k}=0$",
null,
"for all $k\\geq j$",
null,
"; this is a perfectly natural and standard set of restrictions to make and does not actually diminish the set of numbers which are expressible in this format, but the restriction is not technically necessary for Lojban. Then we will denote $z$",
null,
"by the continued fraction representation $z=(a_{0}:b_{0},a_{1}:b_{1},a_{2}:b_{2},\\dots )$",
null,
"; the whole rhs representation is called a string. Notice that the integer part is included. In this format, for each $i$",
null,
", \"$a_{i}:b_{i}$",
null,
"\" forms a single unit called a macrodigit; for each $i$",
null,
", \"$a_{i}$",
null,
"\" and \"$b_{i}$",
null,
"\" each are microdigits; the colon (\"$:$",
null,
"\") separates microdigits and the comma (\"$,$",
null,
"\") separates macrodigits. Microdigits can be expressed in any base or other representation and macrodigits could be reversed or slightly rearranged (such as being of form \"$b_{i}:a_{(i+1)}$",
null,
"\"; however, for our purposes here microdigits will be expressed in big-endian traditional decimal and macrodigits will be formed and ordered as shown; the specification herein proposed will obligate the user to express the macrodigits in the form which is shown (id est: of form \"$a_{i}:b_{i}$",
null,
"\"; within any given macrodigit, the first microdigit expressed represents $a_{i}$",
null,
"and the second (and final) microdigit expressed represents $b_{i}$",
null,
", only) but the other features aforementioned are not guaranteed, although they may normally be assumed as a contextless default. In order to be clear: in this representation, each macrodigit will consist of exactly two microdigits - namely, $a_{i}$",
null,
"and $b_{i}$",
null,
"in that order, for all $i$",
null,
"- and these microdigits will be separated explicitly by \"pi'e\"; meanwhile, macrodigits will be separated explicitly by \"pi\". In this representation, I will denote a not-explicitly-specified microdigit by a pair of consecutive underscores (\"$\\_\\_$",
null,
"\"). In the 'big-endian' arrangement of the macrodigits (as herein depicted), the first microdigit ($a_{0}$",
null,
") represents the 'integer part' of the expression.\n\nIn this system, let \"pi'e\" represent \":\" and let \"pi\" represent \",\", each bijectively. Then the basic method of expressing a continued fraction is to just read $(a_{0}:b_{0},a_{1}:b_{1},a_{2}:b_{2},\\dots )$",
null,
"where each microdigit is expressed in some base which represents integers, the parenthesis are not mentioned, the separators being named/pronounced as before, \"ra'e\" being used in order to create cyclic patterns or to extend the string indefinitely, and the string being terminated as any numeral string could or would be. The interpretation of the whole string according to these rules for continued fractions would be specified via JUhAU.\n\nA string terminates if and only if \"ra'e\" is not explicitly used. \"ra'e\" will couple with exactly one microdigit, and exactly every following explicitly mentioned microdigit in that position of their macrodigits will be considered to be part of a repetitious sequence applying to/running over the microdigits in that position of their macrodigits; the other microdigit is unaffected by it. Moreover, it can couple with \"pi'e\" as well (see below), but this occurs iff \"ra'e\" is explicitly mentioned immediately prior to exactly an explicitly mentioned \"pi'e\". If it couples with $a_{j}$",
null,
"for some $j$",
null,
", then it will cyclically repeat that $a_{j}$",
null,
"and all explicitly mentioned $a_{(j+k)}$",
null,
"for all $k>0$",
null,
"in each $a_{i}$",
null,
"spot until the last $b_{i}$",
null,
"(which either will be explicitly mentioned and defined as last by the closure of the string scope (formally, all subsequent $b_{i}$",
null,
"will be trivial), or will be nonexistent according to the next point); iff it couples with $b_{j}$",
null,
"for some $j$",
null,
", then the string is extended to infinite length and there exists no 'last $b_{i}$",
null,
"' (meaning that any repetition on $a_{i}$",
null,
"will also continue ad infinitum). Thus, $(a_{0}:b_{0},a_{1}:b_{1},\\operatorname {ra'e} a_{2}:b_{2},a_{3}:b_{3},\\_\\_:b_{4},\\_\\_:b_{5},\\_\\_:b_{6},\\dots ,\\_\\_:b_{10},\\_\\_:\\operatorname {ra'e} b_{11})=(a_{0}:b_{0},a_{1}:b_{1},a_{2}:b_{2},a_{3}:b_{3},a_{2}:b_{4},a_{3}:b_{5},a_{2}:b_{6},a_{3}:b_{7},a_{2}:b_{8},a_{3}:b_{9},a_{2}:b_{10},a_{3}:b_{11},a_{2}:b_{11},a_{3}:b_{11},a_{2}:b_{11},a_{3}:b_{11},a_{2}:b_{11},\\dots )$",
null,
".\n\n• For any $i>0$",
null,
", if $a_{i}$",
null,
"is not explicitly mentioned, then it is assumed to take on the appropriate value according to an ongoing formula which applies to it (such as by \"ra'e\") or, otherwise, it defaults to $1$",
null,
". $a_{0}=0$",
null,
"if it is not explicitly mentioned unless context very clearly indicates otherwise. These are called \"context-dependent defaults\".\n• For any $i$",
null,
", if $b_{i}$",
null,
"is not explicitly mentioned, then it is assumed to take on the appropriate value according to an ongoing formula which applies to it (such as by \"ra'e\") or, otherwise, it defaults to $1$",
null,
"if the string continues (explicitly or by sufficient \"ra'e\") and $0$",
null,
"otherwise. This is especially true if the verbal expression of the string is terminated and \"ra'e\" was not explicitly used (on $b_{j}$",
null,
"for some $j$",
null,
"): all finite strings can infinitely extended by right-concatenating \"$1:0,1:0,1:0,\\dots )$",
null,
"to them (this is similar to decimal notation; for example: $8.23=8.23000\\dots$",
null,
"). These are called \"context-dependent defaults\".\n• If exactly one microdigit is explicitly mentioned in a given macrodigit, then: it is to be understood to be $a_{i}$",
null,
"iff \"ra'e\" did not couple with \"pi'e\"; regardless of the prior presence of \"ra'e pi'e\" in the string, the implicit microdigit will assume the generic default value or (preferably) the value according to a repetition or formula which it inherited (see the aforementioned context-dependent defaults).\n\nEven though the basic and assumed notation for $a_{0}+{\\underset {i=0}{\\overset {n}{\\mathrm {K} }}}{\\big (}{\\frac {b_{i}}{a_{i+1}}}{\\big )}$",
null,
"is $(a_{0}:b_{0},a_{1}:b_{1},a_{2}:b_{2},a_{3}:b_{3},\\dots )$",
null,
"(this is so-called 'big-endian' in the macrodigits), other formats can be supported iff they are explicitly specified. For example, with a change of endianness in the macrodigits, $a_{0}+{\\underset {i=0}{\\overset {n}{\\mathrm {K} }}}{\\big (}{\\frac {b_{i}}{a_{i+1}}}{\\big )}=(\\dots ,a_{3}:b_{3},a_{2}:b_{2},a_{1}:b_{1},a_{0}:b_{0})$",
null,
". It is also reasonable that the microdigits could be reordered (note that this is not a change in the endianness of each microdigit (which would change $12:34$",
null,
"to $21:43$",
null,
"); rather, it is a transposition of the microdigits within each macrodigit) like so: $a_{0}+{\\underset {i=0}{\\overset {n}{\\mathrm {K} }}}{\\big (}{\\frac {b_{i}}{a_{i+1}}}{\\big )}=(\\dots ,b_{3}:a_{3},b_{2}:a_{2},b_{1}:a_{1},b_{0}:a_{0})$",
null,
"; notice that in this example, I also changed the endianness of the macrodigits because the expression does not make much intuitive sense otherwise (but it would nonetheless be possible to do merely one of these changes in isolation, even if it is not advisable or sensical).\n\nThe string will terminate and be interpreted as a number formed from the specified continued fraction as all other digits strings do (see my other work).\n\n-- Krtisfranks (talk) 08:10, 9 March 2018 (UTC)\n\n## Alternative Explanation\n\nThe description in this section is intended to provide the same results as those in the \"Original Explanation\" section.\n\nZeroth, the mode must be activated (as described previously).\n\nThroughout the following discussions, I shall assume that such mode activation has been performed as appropriate.\n\nFor some $n\\in \\mathbb {N} \\cup \\{0,+\\infty \\}$",
null,
"and some sequences $(a_{i})_{i},(b_{i})_{i}$",
null,
", we will map $z=a_{0}+{\\underset {i=0}{\\overset {n}{\\mathrm {K} }}}{\\big (}{\\frac {b_{i}}{a_{i+1}}}{\\big )}$",
null,
"(math form) to a string of form $(a_{0}:b_{0},a_{1}:b_{1},a_{2}:b_{2},\\dots ,a_{n}:b_{n})$",
null,
"(this is the notational form), which would be pronounced more or less as \"a0 pi'e b0 pi a1 pi'e b1 pi a2 pi'e b2 pi ... an pi'e bn\" (this is the verbal form). In other words, each \":\" in the notational form is expressed as \"pi'e\" and each \",\" in the notational form is expressed as \"pi\", and vice-versa, where \"pi'e\" and \"pi\" are exactly the cmavo that you think that they are. Notice that the parenthesis in the notation form are not pronounced; they are used in the notational form so that readers understand that everything between them forms some sort of unit - however, in Lojban, the terms are read and first understood as digits which compile into some sort of numeric string, so this is not necessary; moreover, mode activation makes it clear that they are indeed a single unit (and what that unit means: namely, a continued fraction).\n\n(Technical aside: In the original math form, each ai and bi are terms in their respective sequences and can be more or less understood as indirect terms in the continued fraction (really, the ordered pairs are the operands of the K operator and the entries of the pairs are the terms of the sequences); we are performing an implicit sleight of hand in mapping these numbers to digits in the string form, and we notate them exactly the same. This latter fact can be seen if one realizes the \"pi'e\" and \"pi\" are themselves digits and can only be concatenated to other digits. So, I may end up using the words \"term\" and \"digit\" more or less interchangeably (making the assumption that the reader knows when I am not including \"pi'e\" and \"pi\" in my reference set); they are technically distinct in concept and domain, but they are isomorphic. For the record, each 'term' in string notation may actually be constituted by more than one 'digit' in the expansion; for example, a0 = 3*4 would mean that the expression \"a0\" in the string would have to be represented by the multidigit expression \"12\" in decimal notation. Technically, I would call \"ai\" and \"ab\" macrodigits (and \"12\", standing in for such a macrodigit, is itself a macrodigit) and these can be composed of microdigits (which, in the example, would be \"1\" and \"2\" in that order). I will assume that microdigits are to be interpreted in decimal form (so a \"1\" followed by a \"2\", and nothing else being involved, compiles to \"12\" and means the number twelve) throughout this page; this assumption can be changed by certain cmavo if so desired.)\n\nNow, it might be nice to avoid having to talk forever in the case of $n=+\\infty$",
null,
"or to not have to say or repeat digits that follow a pattern. The following sections address these concerns. However, we first must cover the simplest cases - at the very least because they serve as good entry-level examples.\n\n### Step 1: Simplest (and Finite) Case\n\nLet $n\\in \\mathbb {N} \\cup \\{0\\}$",
null,
". Consider the continued fraction $z=a_{0}+{\\underset {i=0}{\\overset {n}{\\mathrm {K} }}}{\\big (}{\\frac {b_{i}}{a_{i+1}}}{\\big )}$",
null,
", where $\\forall i,a_{i}$",
null,
"& $b_{i}$",
null,
"will be explicitly defined, particularly as they arise. We can enforce conditions on the sequences $(a_{i})_{i}$",
null,
"& $(b_{i})_{i}$",
null,
", but we will ignore such details, because we just need formal continued fractions. Since n is finite, we will not use the word \"ra'e\" on any of the bi terms in this case/section.\n\nWe can transform the representation of $z$",
null,
"from the previous notation (which is the application of a mathematical operator) to a string of digits. This is similar to changing \"2*5\" to \"10\".\n\nThe simplest subcase is if $b_{i}=0\\forall i$",
null,
". In that subcase, z = a0. In string form, z would be written as \"(a0:0, 0:0, 0:0, ...)\", which collapses to \"(a0)\". Since we are assuming that n is finite and that we are explicitly stating the value of any nontrivial term (ai and bi), we take the sequence to 'terminate' with the last explicitly mentioned term (and all subsequent terms to be trivial - which is to say that they need not be mentioned). In particular, if the last explicitly mentioned term is ai, then bi = 0; if the last explicitly mentioned term is bi, then a(i+1) = 1 and b(i+1) = 0; these particular values are 'trivial'. It does not matter what the later terms in the sequences are due to the nature of fractions (so long as we do not divide by zero - which we shall assume). Note that explicitly mentioned terms need not be nontrivial; however, if the last explicitly mentioned term was trivial, then the expression could have been simplified by having not explicitly mentioned it either. In the string representation, only the explicitly mentioned terms need be shown/said (and simplifications such as the one just stated are always welcome); in other words, we do not need to say any later ai or bi or - indeed - even the \"pi\" (\",\") immediately after the last explicitly mentioned term. We also introduce the rule - used throughout this page - that (unless a certain condition is satisfied, which shall be addressed later), if exactly one macrodigit is explicitly mentioned between \"pi\"s, then it represents ai (for appropriate i). Thus \"(a0)\" is pronounced simply as \"a0\", whatever that is. In that situation, the bi with which it is paired will take the default value - in this example, 0. For example, the extremely trivial continued fraction 12 (twelve), which equals $12+{\\underset {i=0}{\\overset {n}{\\mathrm {K} }}}{\\big (}{\\frac {0}{a_{i+1}}}{\\big )}$",
null,
"for any sequence $(a_{i})_{i}:a_{0}=12\\&a_{i}\\neq 0\\forall i$",
null,
", is pronounced simply as \"pa re\" (assuming that continued fraction interpretation has been activated and microdigits are expressed in decimal notation)."
]
| [
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/f1d4e06539576633987e902f402ed46728d573b6",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/4ec811eb07dcac7ea67b413c5665390a1671ecb0",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/a57c36eef3c385fe63c012d98401af8b836b7cc1",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/29586b16efbfe52c69a8051d4a864fabb013a438",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/0bc77764b2e74e64a63341054fa90f3e07db275f",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/40a8c2db2990a53c683e75961826167c5adac7c3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/add78d8608ad86e54951b8c8bd6c8d8416533d20",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/d2abce5b390d817c66cc141ab87615c8956db97a",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/40a8c2db2990a53c683e75961826167c5adac7c3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/f422b1b38dfb3c5a6e5b5e8abe33f68e153213db",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/aaab141ee00904e5e5703fa2cadf14885b6e94b5",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/2fa52afc9e883843187fefe9e99f2227ee0313f4",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/301014b7e4a7651b95e0f49b385e34b89d7a5b92",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/bf368e72c009decd9b6686ee84a375632e11de98",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/6c5dd26b437b6913fca387b86efe955fefec0285",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/add78d8608ad86e54951b8c8bd6c8d8416533d20",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/e25716a0990e6d5bad6b9ad70ad93cbc56fe6c32",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/add78d8608ad86e54951b8c8bd6c8d8416533d20",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/0bc77764b2e74e64a63341054fa90f3e07db275f",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/40a8c2db2990a53c683e75961826167c5adac7c3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/cd064c6ce80ad9a8e53adebb7ad51b7635fceb0f",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/b8fa4cba3a446de313920e16251756e27312b825",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/796f833347f2059b5a96393a3f788245ebc9dce8",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/e25716a0990e6d5bad6b9ad70ad93cbc56fe6c32",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/0bc77764b2e74e64a63341054fa90f3e07db275f",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/40a8c2db2990a53c683e75961826167c5adac7c3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/0bc77764b2e74e64a63341054fa90f3e07db275f",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/40a8c2db2990a53c683e75961826167c5adac7c3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/add78d8608ad86e54951b8c8bd6c8d8416533d20",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/05a357b732c5dbb7d9e7c8fdc3831f4c73dd9ad1",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/693ad9f934775838bd72406b41ada4a59785d7ba",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/9014397e151268aec166d8bd7eb316315f12cb2b",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/d0096fb78d6843c9fb67a840dc796b61ad93eec2",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/2f461e54f5c093e92a55547b9764291390f0b5d0",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/d0096fb78d6843c9fb67a840dc796b61ad93eec2",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/b9fbc7b67a7d4b88ed6b37cde238e81341eef1a7",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/27b3af208b148139eefc03f0f80fa94c38c5af45",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/0bc77764b2e74e64a63341054fa90f3e07db275f",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/40a8c2db2990a53c683e75961826167c5adac7c3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/40a8c2db2990a53c683e75961826167c5adac7c3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/fa56eff4488494085785b7b0d6e2069bd45a3ce5",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/2f461e54f5c093e92a55547b9764291390f0b5d0",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/40a8c2db2990a53c683e75961826167c5adac7c3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/0bc77764b2e74e64a63341054fa90f3e07db275f",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/baaf7c27b6c48018c7a7fd9f901af8bf3144d61f",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/1f49f2878fd68a89c3da37eb537198e887cf0293",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/0bc77764b2e74e64a63341054fa90f3e07db275f",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/92d98b82a3778f043108d4e20960a9193df57cbf",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/e8f3589226b1f07bd27b7c82d8f470a4685fffe2",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/add78d8608ad86e54951b8c8bd6c8d8416533d20",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/40a8c2db2990a53c683e75961826167c5adac7c3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/92d98b82a3778f043108d4e20960a9193df57cbf",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/2aae8864a3c1fec9585261791a809ddec1489950",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/fa56eff4488494085785b7b0d6e2069bd45a3ce5",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/2f461e54f5c093e92a55547b9764291390f0b5d0",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/1883e3aec503624415c8c7e33eb89b81787704cd",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/1771cb1ceb1be8d50c083e0a4cf64c86687356d9",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/0bc77764b2e74e64a63341054fa90f3e07db275f",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/a2ddde57f6070a1cbd95d602329258e948375459",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/012d85326ad80ff4486526a3971d148a61e212f5",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/237d1c1bc3d18bc3433363408d138ec91231f425",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/7363ca408b5473abb1f063bebedf28595592f477",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/bc558042fab365c586677b8490faa338711dede0",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/448dc00849cfcbddf0fa5e7f7e3608b4aff13f4c",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/6f70dfe53cda37c1ea6c53f89b1afa195e8c2529",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/62e8cb638fb60b3fe394780987f5b371d38322c7",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/9c13fa9fca59ebaf6ac5baa870b175b18dc4b82c",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/2c9feee626a5145dc601eefbe5d60537048109e2",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/2590442ce138f6e1a9dc486c31cd20e437713b44",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/6bdf360f72ba3e0919e19406ee12af1336526ef1",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/9c13fa9fca59ebaf6ac5baa870b175b18dc4b82c",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/156af36fccb9fc74e10b58b4863d180af2c6aef1",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/40a8c2db2990a53c683e75961826167c5adac7c3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/cf25da3a11bd09e9a98a9edd542121f84ca37ea0",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/0351bb0297fb64ce5f25e6c33d66858ed069cbe6",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/bf368e72c009decd9b6686ee84a375632e11de98",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/f537a6d1f2c3b37425eeb3288d19a907edf1c7a7",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/c4af4291f7ff9461afa3cd49cf32e4aab7f90be6",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/2824bcc87fbf5fc01c7af41849cbf4113f8b6dd8",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9307786,"math_prob":0.9900742,"size":12489,"snap":"2022-40-2023-06","text_gpt3_token_len":2673,"char_repetition_ratio":0.15546656,"word_repetition_ratio":0.045585413,"special_character_ratio":0.2160301,"punctuation_ratio":0.09907935,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99872756,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158],"im_url_duplicate_count":[null,null,null,null,null,3,null,3,null,null,null,null,null,null,null,3,null,null,null,3,null,5,null,null,null,7,null,null,null,3,null,null,null,6,null,null,null,null,null,null,null,null,null,null,null,3,null,6,null,null,null,null,null,null,null,null,null,null,null,3,null,null,null,3,null,null,null,null,null,null,null,3,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,3,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,3,null,3,null,null,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,6,null,3,null,9,null,3,null,6,null,3,null,null,null,3,null,3,null,null,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-02T12:21:56Z\",\"WARC-Record-ID\":\"<urn:uuid:309196e7-8c8f-4e83-acdc-f56270f556c5>\",\"Content-Length\":\"130014\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:12b32d04-d925-4e3b-b128-9f7b37063c75>\",\"WARC-Concurrent-To\":\"<urn:uuid:237a40ff-1ab4-476e-a08a-4e6e989fcd87>\",\"WARC-IP-Address\":\"50.250.232.18\",\"WARC-Target-URI\":\"https://mw-live.lojban.org/index.php?title=Proposal:_Digit_Strings_which_Represent_Continued_Fractions&oldid=122694\",\"WARC-Payload-Digest\":\"sha1:RFLTSZ6IORCKCDGGAHNH2KJKWPIIODNJ\",\"WARC-Block-Digest\":\"sha1:SZRPK5VEBB53OVMFGINKACUT6ARB3XNH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500017.27_warc_CC-MAIN-20230202101933-20230202131933-00139.warc.gz\"}"} |
https://www.myronzucker.com/Myron-Products/Line-Reactor.html | [
"Line Reactor Manual | What Is A Line Reactor?\n Phone : 586.979.9955 Fax : 586.979.9484",
null,
"",
null,
"Resources > Engineering Resources > Line Reactors\n\nLine Reactor Manual\n\nWHAT IS A LINE REACTOR?\n\nA 3-phase Line Reactor is a set of three (3) coils (also known as windings, chokes or inductors) in one assembly. It is a series device, which means it is connected in the supply line such that all line current flows through the reactor, as shown below.",
null,
"Line Reactors are current-limiting devices and oppose rapid changes in current because of their impedance. They hold down any spikes of current and limit any peak currents. This resistance to change is measured in ohms as the Line Reactor's AC impedance (XL) and is calculated as follows:\n\nXL = 2 π f L (ohms), where:\n\nf = frequency\n\nharmonic frequency examples:\n\n Harmonic (60 Hz) Frequency (Hz) 5th 300 7th 420 11th 660\n\nL = reactor inductancein henries (H), millihenries (mH) -- H x 10-3, microhenries (µH) -- H x 10-6\n\nBy inspection of the XL formula, the Line Reactor is directly proportional to the frequency (f) and the inductance (L). That is, if the impedance of a Line Reactor is 10 ohms at 60 Hz, then at the 5th harmonic (300 Hz) the impedance is 50 ohms. If the inductance (L) is increased, then the impedance will increase proportionally.\n\nThis increase in Line Reactor impedance will reduce the current in the line. The higher the frequency (Hertz), the lower the current. A Line Reactor's DC resistance (R-ohms) is very low by design so that the power losses (watts-I2R) are low.\n\nLine Reactors are rated by % impedance, voltage and current. However, they are sized by % impedance, voltage and motor horsepower. The motor horsepower determines the necessary current rating for the Line Reactor.\n\nLine Reactors are rated by impedance, voltage and current.\n\n1. Impedance (% impedance of load Z)\nThe load impedance (Z) is calculated by this formula:\n\nZ = V/I, whereZ = load impedance(ohms), V = line voltage(volts), andI = line current(amps)\n\nThis percent of load impedance also determines the voltage drop across the Line Reactor. For example, a 5% Line Reactor would have a 5% voltage drop.\n2. Voltage rating\n\nSince a Line Reactor is a current-sensitive device, the voltage rating is needed for dielectric concerns as a maximum voltage and horsepower. It is also used to determine the current rating when given only voltage and horsepower.\n3. Current rating (amperes)\n\nThis is the current required by the load(s). It is total current flowing to the load(s) and through the reactor. This current is measured in amperes (amps).\n\nMYRON ZUCKER INC. products are designed to:\n\n• Improve power factor\n• Eliminate utility penalties or surcharges\n• Increase available distribution capacity\n• Mitigate harmonic distortion\n• Protect sensitive equipment\n• Decrease downtime\n• Reduce line losses and associated cost\n• Comply with industry standards"
]
| [
null,
"https://www.myronzucker.com/App_Themes/Myronzucker/Images/mnuleft.png",
null,
"https://www.myronzucker.com/App_Themes/Myronzucker/Images/mnuright.png",
null,
"https://www.myronzucker.com/Asset/Images/linereactordiag.html",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9205401,"math_prob":0.97503555,"size":2416,"snap":"2019-26-2019-30","text_gpt3_token_len":605,"char_repetition_ratio":0.17993367,"word_repetition_ratio":0.009615385,"special_character_ratio":0.2442053,"punctuation_ratio":0.100877196,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99590987,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-18T17:52:48Z\",\"WARC-Record-ID\":\"<urn:uuid:7ed971dd-8aec-4439-8c30-e0fc22cfb1f2>\",\"Content-Length\":\"76597\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e5b649ab-3a24-413b-8c84-229c8805b6ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac2e7cf3-df0b-4866-afaa-ef1d419649cd>\",\"WARC-IP-Address\":\"104.27.156.164\",\"WARC-Target-URI\":\"https://www.myronzucker.com/Myron-Products/Line-Reactor.html\",\"WARC-Payload-Digest\":\"sha1:5QFVVSWROAW2MYBJZVWWKSZSALUERNYC\",\"WARC-Block-Digest\":\"sha1:IRC4S333VLAN3YFTRBZDHHMQPL2X2GRF\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998808.17_warc_CC-MAIN-20190618163443-20190618185443-00123.warc.gz\"}"} |
https://de.mathworks.com/help/images/ref/imgradientxyz.html | [
"Find directional gradients of 3-D image\n\n## Syntax\n\n``````[Gx,Gy,Gz] = imgradientxyz(I)``````\n``````[Gx,Gy,Gz] = imgradientxyz(I,method)``````\n\n## Description\n\nexample\n\n``````[Gx,Gy,Gz] = imgradientxyz(I)``` returns the directional gradients `Gx`, `Gy`, and `Gz` of the 3-D grayscale or binary image `I`.```\n``````[Gx,Gy,Gz] = imgradientxyz(I,method)``` calculates the directional gradients using the specified `method`.```\n\n## Examples\n\ncollapse all\n\nRead 3-D data and prepare it for processing.\n\n```volData = load('mri'); sz = volData.siz; vol = squeeze(volData.D);```\n\n`[Gx, Gy, Gz] = imgradientxyz(vol);`\n\nVisualize the directional gradients as a montage.\n\n```figure, montage(reshape(Gx,sz(1),sz(2),1,sz(3)),'DisplayRange',[]) title('Gradient magnitude along X')```",
null,
"``` figure, montage(reshape(Gy,sz(1),sz(2),1,sz(3)),'DisplayRange',[]) title('Gradient magnitude along Y')```",
null,
"``` figure, montage(reshape(Gz,sz(1),sz(2),1,sz(3)),'DisplayRange',[]) title('Gradient magnitude along Z')```",
null,
"## Input Arguments\n\ncollapse all\n\nInput image, specified as a 3-D grayscale image or 3-D binary image.\n\nData Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `logical`\n\nGradient operator, specified as one of the following values.\n\nValue\n\nMeaning\n\n```'sobel' ```\n\nSobel gradient operator. The gradient of a pixel is a weighted sum of pixels in the 3-by-3-by-3 neighborhood. For example, in the depth (z) direction, the weights in the three planes are:\n\nplane `z-1`plane `z`plane `z+1`\n```[ 1 3 1 3 6 3 1 3 1 ] ```\n```[ 0 0 0 0 0 0 0 0 0 ] ```\n```[ -1 -3 -1 -3 -6 -3 -1 -3 -1 ] ```\n\n`'prewitt'`\n\nPrewitt gradient operator. The gradient of a pixel is a weighted sum of pixels in the 3-by-3-by-3 neighborhood. For example, in the depth (z) direction, the weights in the three planes are:\n\nplane `z-1`plane `z`plane `z+1`\n```[ 1 1 1 1 1 1 1 1 1 ] ```\n```[ 0 0 0 0 0 0 0 0 0 ] ```\n```[ -1 -1 -1 -1 -1 -1 -1 -1 -1 ] ```\n\n`'central' `\n\nCentral difference gradient. The gradient of a pixel is a weighted difference of neighboring pixels. For example, in the depth (z) direction, ```dI/dz = (I(z+1) - I(z-1))/2```.\n\n`'intermediate'`\n\nIntermediate difference gradient. The gradient of a pixel is the difference between an adjacent pixel and the current pixel. For example, in the depth (z) direction, ```dI/dz = I(z+1) - I(z)```.\n\nWhen applying the gradient operator at the boundaries of the image, `imgradientxyz` assumes values outside the bounds of the image are equal to the nearest image border value. This behavior is similar to the `'replicate'` boundary option in `imfilter`.\n\nData Types: `char` | `string`\n\n## Output Arguments\n\ncollapse all\n\nHorizontal gradient, returned as a numeric matrix of the same size as image `I`. The horizontal (x) axis points in the direction of increasing column subscripts. `Gx` is of class `double`, unless the input image `I` is of class `single`, in which case `Gx` is of class `single`.\n\nData Types: `single` | `double`\n\nVertical gradient, returned as a numeric matrix of the same size as image `I`. The vertical (y) axis points in the direction of increasing row subscripts. `Gy` is of class `double`, unless the input image `I` is of class `single`, in which case `Gy` is of class `single`.\n\nData Types: `single` | `double`\n\nDepth gradient, returned as a 3-D numeric array of the same size as image `I`. The depth (z) axis points in the direction of increasing plane subscripts. `Gz` is of class `double`, unless the input image `I` is of class `single`, in which case `Gz` is of class `single`.\n\n## Algorithms\n\n`imgradientxyz` does not normalize the gradient output. If the range of the gradient output image has to match the range of the input image, consider normalizing the gradient image, depending on the `method` argument used. For example, with a Sobel kernel, the normalization factor is 1/44, for Prewitt, the normalization factor is 1/18."
]
| [
null,
"https://de.mathworks.com/help/examples/images/win64/Compute3DDirectionalImageGradientsUsingSobelMethodExample_01.png",
null,
"https://de.mathworks.com/help/examples/images/win64/Compute3DDirectionalImageGradientsUsingSobelMethodExample_02.png",
null,
"https://de.mathworks.com/help/examples/images/win64/Compute3DDirectionalImageGradientsUsingSobelMethodExample_03.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6196148,"math_prob":0.9746555,"size":373,"snap":"2020-10-2020-16","text_gpt3_token_len":87,"char_repetition_ratio":0.18428184,"word_repetition_ratio":0.0,"special_character_ratio":0.20643431,"punctuation_ratio":0.12676056,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99462634,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-23T18:16:24Z\",\"WARC-Record-ID\":\"<urn:uuid:c6334892-02cd-4418-b44f-0476647be6d0>\",\"Content-Length\":\"90043\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a62cbb58-dc35-4475-98b5-a955a3758935>\",\"WARC-Concurrent-To\":\"<urn:uuid:6e160a9f-a062-433d-a7f7-e73e5dac350a>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://de.mathworks.com/help/images/ref/imgradientxyz.html\",\"WARC-Payload-Digest\":\"sha1:XNOEWJUZ5CKXJEJZZ64OXEGP4757FVG5\",\"WARC-Block-Digest\":\"sha1:PLO6K2JNK2BUUKY44DLMZ3KRONCKHIUP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145818.81_warc_CC-MAIN-20200223154628-20200223184628-00536.warc.gz\"}"} |
https://www.quotientapp.com/help/how-sales-tax-is-calculated-on-quotes | [
"# How Sales Tax is calculated on Quotes\n\nTax amounts in systems that calculate tax on a per total basis may differ slightly from Quotient, which calculates tax on a per item basis.\n\nIn Quotient:\n\n• Each Price Item can have its own Sales Tax rate. The Sales Tax is therefore calculated and rounded separately, before calculating the Quote Total.\n• Sales Tax Rates can be 4 decimal places, but after calculating the Price and Quantity the amount is rounded to 2 decimal places on the Price Item.\n\n### Sales Tax rounding examples:\n\nTwo items, each with a Price of 35.35, a Quantity of 1, and Sales Tax of 10.00%:\n\nThe tax amount on each item is 3.535, which rounds to 3.54. The tax from each item is added together, for a total tax of 7.08. If this were calculated on a per total basis the tax total would have been 7.07.\n\nOne item, with a Price of 10.00, a Quantity of 1, and Sales Tax of 8.875%:\n\n10.00 x 1 x 8.875% = 0.8875 which is then rounded to two decimal places for a tax amount of 0.89."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9439701,"math_prob":0.99849313,"size":979,"snap":"2023-14-2023-23","text_gpt3_token_len":259,"char_repetition_ratio":0.14974359,"word_repetition_ratio":0.05464481,"special_character_ratio":0.2829418,"punctuation_ratio":0.15151516,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9984792,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T15:16:42Z\",\"WARC-Record-ID\":\"<urn:uuid:680c4f6a-8926-4699-bb6e-11788c01d3fa>\",\"Content-Length\":\"13899\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:29315395-5a08-4c12-8c4e-647f57b7788c>\",\"WARC-Concurrent-To\":\"<urn:uuid:b82eee55-4f53-4974-8a7e-5e9fea803361>\",\"WARC-IP-Address\":\"18.165.98.29\",\"WARC-Target-URI\":\"https://www.quotientapp.com/help/how-sales-tax-is-calculated-on-quotes\",\"WARC-Payload-Digest\":\"sha1:2CXCUR37YZ6NXQWHINP73EEHE72HCQWR\",\"WARC-Block-Digest\":\"sha1:QFGNQISVSRLKOD2O3OYMFVD5EBETHWBW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224653930.47_warc_CC-MAIN-20230607143116-20230607173116-00632.warc.gz\"}"} |
https://practicaldev-herokuapp-com.global.ssl.fastly.net/verisimilitudex/leetcodes-add-two-numbers-solution-beats-86-in-memory-simple-brute-force-algorithm-in-java-292p | [
"## DEV Community",
null,
"# LeetCode's Add Two Numbers in Linked List Solution - Beats 86% in Memory, Simple Brute Force Algorithm in Java\n\n## Overview\n\nIn this post, we will be discussing an ultra low memory Java solution for a LeetCode problem (2. Add Two Numbers) that involves adding two numbers represented as linked lists.\n\n## Problem Statement\n\nGiven two linked lists representing two non-negative integers, add the two numbers together and return the sum as a linked list.\n\nThe digits are stored in reverse order, such that the 1's digit is at the head of the list.\n\n## Example\n\n``````Input: (2 -> 4 -> 3) + (5 -> 6 -> 4)\nOutput: 7 -> 0 -> 8\nExplanation: 342 + 465 = 807\n``````\n\n## Intuition\n\nTo solve this problem, we can first parse the ListNode inputs for their respective numbers. Once that is done, we can loop over each digit in the sum and add it to a ListNode, linking all of them together.\n\n## Approach\n\nThis method takes in two linked lists, l1 and l2, as input and returns a new linked list which is the sum of the two input linked lists. The method converts the input linked lists into BigInteger objects, adds the two BigInteger objects together to obtain the sum, and then creates a new linked list from the result of the addition. The method returns the new linked list as the result of the addTwoNumbers() method.\n\nHere is an overview of our approach:\n\nGet the digits of the two numbers in the linked list.\nAdd the digits together to obtain the sum.\nCreate a new linked list from the sum.\nReturn the new linked list as the result.\nLet's now implement this approach in code.\n\n## Code\n\n``````/**\n* public class ListNode {\n* int val;\n* ListNode next;\n* ListNode() {}\n* ListNode(int val) { this.val = val; }\n* ListNode(int val, ListNode next) { this.val = val; this.next = next; }\n* }\n*/\nimport java.util.ArrayList;\nimport java.lang.StringBuilder;\nimport java.math.BigInteger;\n\nclass Solution {\npublic ListNode addTwoNumbers(ListNode l1, ListNode l2) {\n// Get digits of L2\nListNode placeholder = l1;\nArrayList<Integer> num1Array = new ArrayList<>();\n\nfor (int i = 0; i < 100; i++) {\nif (placeholder == null) {\nbreak;\n}\nplaceholder = placeholder.next;\n}\nString stringNum1 = \"\";\nfor (int i = 0; i < num1Array.size(); i++) {\nstringNum1 += num1Array.get(i);\n}\nStringBuilder sb = new StringBuilder(stringNum1).reverse();\nBigInteger num1 = new BigInteger(sb.toString());\n\n// Get digits of L2\nplaceholder = l2;\nArrayList<Integer> num2Array = new ArrayList<>();\n\nfor (int i = 0; i < 100; i++) {\nif (placeholder == null) {\nbreak;\n}\nplaceholder = placeholder.next;\n}\nString stringNum2 = \"\";\nfor (int i = 0; i < num2Array.size(); i++) {\nstringNum2 += num2Array.get(i);\n}\nsb = new StringBuilder(stringNum2).reverse();\nBigInteger num2 = new BigInteger(sb.toString());\n\nString stringSum = String.valueOf(intSum);\n\nArrayList<ListNode> listNodes = new ArrayList<>();\n\nListNode current;\nListNode previous = new ListNode();\nfor (int i = 0; i < stringSum.length(); i++) {\nif (i == 0) {\nprevious = new ListNode(Integer.parseInt(stringSum.charAt(0) + \"\"));\n} else {\ncurrent = new ListNode(Integer.parseInt(stringSum.charAt(i) + \"\"), previous);\nprevious = current;\n}\n}\n\nreturn listNodes.get(stringSum.length() - 1);\n}\n}\n``````\n1. The first few lines are just the definition of the ListNode class and the declaration of the Solution class which are given to us.\n2. The first thing we need to do is get the digits of the two numbers in the linked list.\n3. Since the digits are in reverse order, we can use a for loop to iterate through the linked list and add each digit to an ArrayList.\n4. Now that we have the digits, we can add them together.\n5. Since we have the digits in the ArrayLists, we can use StringBuilder to reverse the digits and then convert them to Strings.\n6. Since we can't use Strings in Java to do math, we need to convert the Strings to BigIntegers.\n7. We can now add the BigIntegers together and convert the result to a String.\n8. Now that we have the sum as a String, we can iterate through the String and create new ListNode objects.\n9. Since we need to return the ListNode object at the end, we can add each ListNode to an ArrayList.\n10. At the end of the for loop, we can return the ListNode object at the end of the ArrayList (LeetCode's testcase checker can then follow the links through the number).\n\n## Complexity\n\n• Time complexity: O(n)\n\n• Space complexity: O(n)"
]
| [
null,
"https://res.cloudinary.com/practicaldev/image/fetch/s--P7tPwJ4X--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v3ef3l7tnsth6el6rxw5.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.64210117,"math_prob":0.94919795,"size":4366,"snap":"2023-14-2023-23","text_gpt3_token_len":1037,"char_repetition_ratio":0.15910132,"word_repetition_ratio":0.10504775,"special_character_ratio":0.27118644,"punctuation_ratio":0.1594203,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9964993,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-02T05:27:28Z\",\"WARC-Record-ID\":\"<urn:uuid:23bac8a4-8cec-41ed-8b59-ba539238e988>\",\"Content-Length\":\"102954\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4af0da6a-7de9-46f9-a456-2f5fc9971799>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b367c85-11fd-4b09-9495-e20f0c8ecc12>\",\"WARC-IP-Address\":\"146.75.33.194\",\"WARC-Target-URI\":\"https://practicaldev-herokuapp-com.global.ssl.fastly.net/verisimilitudex/leetcodes-add-two-numbers-solution-beats-86-in-memory-simple-brute-force-algorithm-in-java-292p\",\"WARC-Payload-Digest\":\"sha1:GFWMIB6FYWUJDX2TIV7YH6LCFYJE747U\",\"WARC-Block-Digest\":\"sha1:T7WA5VPQBTGUQZZWI5NPNTXN62Z62R3O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296950383.8_warc_CC-MAIN-20230402043600-20230402073600-00538.warc.gz\"}"} |
https://nl.mathworks.com/help/vision/ref/fastrcnnobjectdetector.classifyregions.html | [
"# classifyRegions\n\nClassify objects in image regions using Fast R-CNN object detector\n\n## Syntax\n\n``````[labels,scores] = classifyRegions(detector,I,rois)``````\n``````[labels,scores,allScores] = classifyRegions(detector,I,rois)``````\n``[___] = classifyRegions(___,'ExecutionEnvironment',resource)``\n\n## Description\n\nexample\n\n``````[labels,scores] = classifyRegions(detector,I,rois)``` classifies objects within the regions of interest of image `I`, using a Fast R-CNN (regions with convolutional neural networks) object detector. For each region, `classifyRegions` returns the class label with the corresponding highest classification score.When using this function, use of a CUDA® enabled NVIDIA® GPU is highly recommended. The GPU reduces computation time significantly. Usage of the GPU requires Parallel Computing Toolbox™. For information about the supported compute capabilities, see GPU Support by Release (Parallel Computing Toolbox).```\n``````[labels,scores,allScores] = classifyRegions(detector,I,rois)``` also returns all the classification scores of each region. The scores are returned in an M-by-N matrix of M regions and N class labels.```\n````[___] = classifyRegions(___,'ExecutionEnvironment',resource)` specifies the hardware resource used to classify object within image regions: `'auto'`, `'cpu'`, or `'gpu'`. You can use this syntax with either of the preceding syntaxes.```\n\n## Examples\n\ncollapse all\n\nConfigure a Fast R-CNN object detector and use it to classify objects within multiple regions of an image.\n\nLoad a `fastRCNNObjectDetector` object that is pretrained to detect stop signs.\n\n```data = load('rcnnStopSigns.mat','fastRCNN'); fastRCNN = data.fastRCNN;```\n\nRead in a test image containing a stop sign.\n\n```I = imread('stopSignTest.jpg'); figure imshow(I)```",
null,
"Specify regions of interest to classify within the test image.\n\n```rois = [416 143 33 27 347 168 36 54];```\n\nClassify the image regions and inspect the output labels and classification scores. The labels come from the `ClassNames` property of the detector.\n\n`[labels,scores] = classifyRegions(fastRCNN,I,rois)`\n```labels = 2x1 categorical stopSign Background ```\n```scores = 2x1 single column vector 0.9969 1.0000 ```\n\nThe detector has high confidence in the classifications. Display the classified regions on the test image.\n\n```detectedI = insertObjectAnnotation(I,'rectangle',rois,cellstr(labels)); figure imshow(detectedI)```",
null,
"## Input Arguments\n\ncollapse all\n\nFast R-CNN object detector, specified as a `fastRCNNObjectDetector` object. To create this object, call the `trainFastRCNNObjectDetector` function with training data as input.\n\nInput image, specified as a real, nonsparse, grayscale or RGB image.\n\nData Types: `uint8` | `uint16` | `int16` | `double` | `single` | `logical`\n\nRegions of interest within the image, specified as an M-by-4 matrix defining M rectangular regions. Each row contains a four-element vector of the form [x y width height]. This vector specifies the upper left corner and size of a region in pixels.\n\nHardware resource used to classify image regions, specified as `'auto'`, `'gpu'`, or `'cpu'`.\n\n• `'auto'` — Use a GPU if it is available. Otherwise, use the CPU.\n\n• `'gpu'` — Use the GPU. To use a GPU, you must have Parallel Computing Toolbox and a CUDA enabled NVIDIA GPU. If a suitable GPU is not available, the function returns an error. For information about the supported compute capabilities, see GPU Support by Release (Parallel Computing Toolbox).\n\n• `'cpu'` — Use the CPU.\n\nExample: `'ExecutionEnvironment','cpu'`\n\n## Output Arguments\n\ncollapse all\n\nClassification labels of regions, returned as an M-by-1 categorical array. M is the number of regions of interest in `rois`. Each class name in `labels` corresponds to a classification score in `scores` and a region of interest in `rois`. `classifyRegions` obtains the class names from the input `detector`.\n\nHighest classification score per region, returned as an M-by-1 vector of values in the range [0, 1]. M is the number of regions of interest in `rois`. Each classification score in `scores` corresponds to a class name in `labels` and a region of interest in `rois`. A higher score indicates higher confidence in the classification.\n\nAll classification scores per region, returned as an M-by-N matrix of values in the range [0, 1]. M is the number of regions in `rois`. N is the number of class names stored in the input `detector`. Each row of classification scores in `allscores` corresponds to a region of interest in `rois`. A higher score indicates higher confidence in the classification.\n\n## Version History\n\nIntroduced in R2017a"
]
| [
null,
"https://nl.mathworks.com/help/examples/vision/win64/ClassifyImageRegionsUsingFastRCNNExample_01.png",
null,
"https://nl.mathworks.com/help/examples/vision/win64/ClassifyImageRegionsUsingFastRCNNExample_02.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.61183006,"math_prob":0.7010379,"size":4159,"snap":"2022-27-2022-33","text_gpt3_token_len":961,"char_repetition_ratio":0.1593261,"word_repetition_ratio":0.12540193,"special_character_ratio":0.21808127,"punctuation_ratio":0.12278308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9573131,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-18T10:06:23Z\",\"WARC-Record-ID\":\"<urn:uuid:9cff3407-d8f8-4a7e-bfdd-a7689c660e35>\",\"Content-Length\":\"100223\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0923e725-bed6-4ee5-9dbc-176840b3070c>\",\"WARC-Concurrent-To\":\"<urn:uuid:00acf63d-4d65-4901-89f6-561e5528315b>\",\"WARC-IP-Address\":\"104.68.243.15\",\"WARC-Target-URI\":\"https://nl.mathworks.com/help/vision/ref/fastrcnnobjectdetector.classifyregions.html\",\"WARC-Payload-Digest\":\"sha1:YU563G2GQHW4PQKQNKVLI3NZM4XFTKNC\",\"WARC-Block-Digest\":\"sha1:JCTNLD2IIXZQHIBL2QSSUKDJVYXH57WI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573193.35_warc_CC-MAIN-20220818094131-20220818124131-00771.warc.gz\"}"} |
https://answers.everydaycalculation.com/multiply-fractions/9-2-times-1-3 | [
"Solutions by everydaycalculation.com\n\n## Multiply 9/2 with 1/3\n\n1st number: 4 1/2, 2nd number: 1/3\n\nThis multiplication involving fractions can also be rephrased as \"What is 9/2 of 1/3?\"\n\n9/2 × 1/3 is 3/2.\n\n#### Steps for multiplying fractions\n\n1. Simply multiply the numerators and denominators separately:\n2. 9/2 × 1/3 = 9 × 1/2 × 3 = 9/6\n3. After reducing the fraction, the answer is 3/2\n4. In mixed form: 11/2\n\nMathStep (Works offline)",
null,
"Download our mobile app and learn to work with fractions in your own time:"
]
| [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8618531,"math_prob":0.99443793,"size":461,"snap":"2021-31-2021-39","text_gpt3_token_len":217,"char_repetition_ratio":0.17943107,"word_repetition_ratio":0.0,"special_character_ratio":0.47071582,"punctuation_ratio":0.071428575,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9771229,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T12:41:22Z\",\"WARC-Record-ID\":\"<urn:uuid:5ca4a745-092f-48f8-ba10-ee3bf27975dc>\",\"Content-Length\":\"8001\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:61daaa29-d234-48ca-b6fd-36866985dcbf>\",\"WARC-Concurrent-To\":\"<urn:uuid:3a18b338-5480-4abd-947e-31b76042734b>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/multiply-fractions/9-2-times-1-3\",\"WARC-Payload-Digest\":\"sha1:VI4VADONHVAUSADOZTE5RCD4IMRUD4LL\",\"WARC-Block-Digest\":\"sha1:3TZ2ASWSW6RWP5JRJIYXN3HYM6YENEIK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057421.82_warc_CC-MAIN-20210923104706-20210923134706-00551.warc.gz\"}"} |
https://solvedlib.com/n/problem-6-solve-the-heat-equation-ut-3uzr-2u1-0-lt-i-lt,7762925 | [
"# Problem 6 Solve the heat equation Ut = 3uzr + 2u1 0 < I < T,t >0u(0,t) = u(t,t) =\n\n###### Question:\n\nProblem 6 Solve the heat equation Ut = 3uzr + 2u1 0 < I < T,t >0 u(0,t) = u(t,t) = 0, t > 0 u(z,0) =I, 0 < * < T",
null,
"",
null,
"#### Similar Solved Questions\n\n##### Define Keq and Kw8 HHI| 4 a Br D Pi*HMLE Editor]12ptParagraph0woids,0 pts\nDefine Keq and Kw 8 HHI| 4 a Br D Pi* HMLE Editor] 12pt Paragraph 0woids, 0 pts...\n##### {Problem If the weights (Ibs heights Jinches' samult students rollows; 70,66,55, 59,64, 60, 72 , 67,54 48,51,55,62, 59,63,58, 65,60, 54 Calculate 95% confidence interva the means difference (ux-MvlclassroomrecotlecProblem 5- Using the sample weights (X;) of the stucents problem 14, find the 95% confidence Interval ofthe whole class variance of their weights_\n{Problem If the weights (Ibs heights Jinches' samult students rollows; 70,66,55, 59,64, 60, 72 , 67,54 48,51,55,62, 59,63,58, 65,60, 54 Calculate 95% confidence interva the means difference (ux-Mvl classroom recotlec Problem 5- Using the sample weights (X;) of the stucents problem 14, find the...\n##### The average price of a college math textbook is S170 and the standard deviation is 527 . Suppose that 48 textbooks are randomly chosen_ Round all answers t0 decimal places where possible_What is the distribution of I? r For the group of 48 find the probability that the average price is between 5166 and 5169Find the first quartile for the average textbook price for this sample size. nearest cent) For part b) , is the assumption that the distribution is normal necessary? 0 NOO Yes(round t0 the\nThe average price of a college math textbook is S170 and the standard deviation is 527 . Suppose that 48 textbooks are randomly chosen_ Round all answers t0 decimal places where possible_ What is the distribution of I? r For the group of 48 find the probability that the average price is between 5166...\n##### This is a 5 part question Correcting for negative externalities - Regulation versus tradable permits Suppose...\nThis is a 5 part question Correcting for negative externalities - Regulation versus tradable permits Suppose the government wants to reduce the total pollution emitted by three local firms. Currently, each firm is creating 4 units of pollution in the area, for a total of 12 pollution units. If the g...\n##### [(2Hz;} Qwx}] + /8 Age -24\n[(2Hz;} Qwx}] + /8 Age -24..."
]
| [
null,
"https://cdn.numerade.com/ask_images/d4730a5f1c7a4b6aa41687d7ac0af2db.jpg ",
null,
"https://cdn.numerade.com/previews/524a6921-2db6-4ebd-848a-14143654dca9_large.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8287874,"math_prob":0.9888681,"size":15165,"snap":"2023-40-2023-50","text_gpt3_token_len":4324,"char_repetition_ratio":0.10052107,"word_repetition_ratio":0.50994647,"special_character_ratio":0.28605342,"punctuation_ratio":0.14103362,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9974086,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T14:40:51Z\",\"WARC-Record-ID\":\"<urn:uuid:15f84293-0181-4c52-bc2c-dd69b8744336>\",\"Content-Length\":\"80806\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25030d30-f64b-4e64-9f33-58382717632c>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f3dee88-ef4a-4d7c-a5a8-9300e946e3d8>\",\"WARC-IP-Address\":\"104.21.12.185\",\"WARC-Target-URI\":\"https://solvedlib.com/n/problem-6-solve-the-heat-equation-ut-3uzr-2u1-0-lt-i-lt,7762925\",\"WARC-Payload-Digest\":\"sha1:423Z77LSIPG7Z6QOLQVS7BD557YV4I24\",\"WARC-Block-Digest\":\"sha1:EU3R4JYV74JVBVYR3Y72E5H2RKWGUGVE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510516.56_warc_CC-MAIN-20230929122500-20230929152500-00120.warc.gz\"}"} |
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-14-trigonometric-graphs-identities-and-equations-14-4-solve-trigonometric-equations-14-4-exercises-skill-practice-page-935/13 | [
"## Algebra 2 (1st Edition)\n\n$$x=\\frac{\\pi }{6}+2\\pi n,\\:x=\\frac{11\\pi }{6}+2\\pi n,\\:x=\\frac{5\\pi }{6}+2\\pi n,\\:x=\\frac{7\\pi }{6}+2\\pi n$$\nWe solve the equation using the properties of trigonometric functions. Note, there is a general solution since trigonometric identities go up and down and this can pass through a given value of y many times. Solving this, we find: $$\\cos \\left(x\\right)=\\frac{\\sqrt{3}}{2},\\:\\cos \\left(x\\right)=-\\frac{\\sqrt{3}}{2} \\\\ x=\\frac{\\pi }{6}+2\\pi n,\\:x=\\frac{11\\pi }{6}+2\\pi n,\\:x=\\frac{5\\pi }{6}+2\\pi n,\\:x=\\frac{7\\pi }{6}+2\\pi n$$"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.56211907,"math_prob":1.0000033,"size":559,"snap":"2019-35-2019-39","text_gpt3_token_len":236,"char_repetition_ratio":0.25765765,"word_repetition_ratio":0.12903225,"special_character_ratio":0.41144902,"punctuation_ratio":0.13475177,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000074,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-19T12:59:23Z\",\"WARC-Record-ID\":\"<urn:uuid:2b1ad848-78d1-4f7e-ae89-0cc1f370925c>\",\"Content-Length\":\"107111\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fa5c0825-3e13-4a1f-b4fc-41aa651db432>\",\"WARC-Concurrent-To\":\"<urn:uuid:9199e233-cae9-4279-b737-c9325156c652>\",\"WARC-IP-Address\":\"52.87.77.102\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-14-trigonometric-graphs-identities-and-equations-14-4-solve-trigonometric-equations-14-4-exercises-skill-practice-page-935/13\",\"WARC-Payload-Digest\":\"sha1:IPTKLWIMGHTASZETX33XDAH327V4IDRU\",\"WARC-Block-Digest\":\"sha1:RJ2PZZ7WQ4L6PENZSMD4VFAKITD3FSYO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573519.72_warc_CC-MAIN-20190919122032-20190919144032-00298.warc.gz\"}"} |
https://everything.explained.today/Tuple/ | [
"# Tuple Explained\n\nIn mathematics, a tuple is a finite ordered list (sequence) of elements. An -tuple is a sequence (or ordered list) of elements, where is a non-negative integer. There is only one 0-tuple, referred to as the empty tuple. An -tuple is defined inductively using the construction of an ordered pair.\n\nMathematicians usually write tuples by listing the elements within parentheses \"\" and separated by commas; for example, denotes a 5-tuple. Sometimes other symbols are used to surround the elements, such as square brackets \"[ ]\" or angle brackets \"⟨ ⟩\". Braces \"\" are used to specify arrays in some programming languages but not in mathematical expressions, as they are the standard notation for sets. The term tuple can often occur when discussing other mathematical objects, such as vectors.\n\nIn computer science, tuples come in many forms. Most typed functional programming languages implement tuples directly as product types, tightly associated with algebraic data types, pattern matching, and destructuring assignment. Many programming languages offer an alternative to tuples, known as record types, featuring unordered elements accessed by label. A few programming languages combine ordered tuple product types and unordered record types into a single construct, as in C structs and Haskell records. Relational databases may formally identify their rows (records) as tuples.\n\nTuples also occur in relational algebra; when programming the semantic web with the Resource Description Framework (RDF); in linguistics; and in philosophy.\n\n## Etymology\n\nThe term originated as an abstraction of the sequence: single, couple/double, triple, quadruple, quintuple, sextuple, septuple, octuple, ..., ‑tuple, ..., where the prefixes are taken from the Latin names of the numerals. The unique 0-tuple is called the null tuple or empty tuple. A 1‑tuple is called a single (or singleton), a 2‑tuple is called an ordered pair or couple, and a 3‑tuple is called a triple (or triplet). The number can be any nonnegative integer. For example, a complex number can be represented as a 2‑tuple of reals, a quaternion can be represented as a 4‑tuple, an octonion can be represented as an 8‑tuple, and a sedenion can be represented as a 16‑tuple.\n\nAlthough these uses treat ‑uple as the suffix, the original suffix was ‑ple as in \"triple\" (three-fold) or \"decuple\" (ten‑fold). This originates from medieval Latin plus (meaning \"more\") related to Greek ‑πλοῦς, which replaced the classical and late antique ‑plex (meaning \"folded\"), as in \"duplex\".\n\n### Names for tuples of specific lengths\n\nTuple length,\n\nn\n\nName Alternative names\n0 empty tuple null tuple / empty sequence / unit\n1 monuple single / singleton / monad\n2 couple double / ordered pair / two-ple / twin / dual / duad / dyad / twosome\n3 triple treble / triplet / triad / ordered triple / threesome\n5 quintuple pentuple / quint / pentad\n8 octuple octa / octet / octad / octuplet\n13 tredecuple baker's dozen\n14 quattuordecuple\n15 quindecuple\n16 sexdecuple\n17 septendecuple\n18 octodecuple\n19 novemdecuple\n20 vigintuple\n21 unvigintuple\n22 duovigintuple\n23 trevigintuple\n24 quattuorvigintuple\n25 quinvigintuple\n26 sexvigintuple\n27 septenvigintuple\n28 octovigintuple\n29 novemvigintuple\n30 trigintuple\n31 untrigintuple\n50 quinquagintuple\n60 sexagintuple\n70 septuagintuple\n80 octogintuple\n90 nongentuple\n100 centuple\n\nNote that for\n\nn\\geq3\n\n, the tuple name in the table above can also function as a verb meaning \"to multiply [the direct object] by\n\nn\n\n\"; for example, \"to quintuple\" means \"to multiply by 5\". If\n\nn=2\n\n, then the associated verb is \"to double\". There is also a verb \"sesquiple\", meaning \"to multiple by 3/2\". Theoretically, \"monuple\" could be used in this way too.\n\n## Properties\n\nThe general rule for the identity of two -tuples is\n\n(a1,a2,\\ldots,an)=(b1,b2,\\ldots,bn)\n\nif and only if\n\na1=b1,a2=b2,\\ldots,an=bn\n\n.\n\nThus a tuple has properties that distinguish it from a set:\n\n1. A tuple may contain multiple instances of the same element, so\ntuple\n\n(1,2,2,3)(1,2,3)\n\n; but set\n\n\\{1,2,2,3\\}=\\{1,2,3\\}\n\n.\n1. Tuple elements are ordered: tuple\n\n(1,2,3)(3,2,1)\n\n, but set\n\n\\{1,2,3\\}=\\{3,2,1\\}\n\n.\n1. A tuple has a finite number of elements, while a set or a multiset may have an infinite number of elements.\n\n## Definitions\n\nThere are several definitions of tuples that give them the properties described in the previous section.\n\n### Tuples as functions\n\nThe\n\nF~:~\\left\\{1,\\ldots,n\\right\\}~\\to~\\left\\{a1,\\ldots,an\\right\\}\n\n\\operatorname{domain}F=\\left\\{1,\\ldots,n\\right\\}=\\left\\{i\\in\\N:1\\leqi\\leqn\\right\\}\n\n\\operatorname{codomain}F=\\left\\{a1,\\ldots,an\\right\\},\n\nthat is defined at\n\ni\\in\\operatorname{domain}F=\\left\\{1,\\ldots,n\\right\\}\n\nby\n\nF(i):=ai.\n\nThat is,\n\nF\n\nis the function defined by\n\n\\begin{alignat}{3} 1&\\mapsto&&a1\\\\ & \\vdots&&\\\\ n&\\mapsto&&an\\\\ \\end{alignat}\n\nin which case the equality\n\n\\left(a1,a2,...,an\\right)=\\left(F(1),F(2),...,F(n)\\right)\n\nnecessarily holds.\n\nTuples as sets of ordered pairs\n\nFunctions are commonly identified with their graphs, which is a certain set of ordered pairs. Indeed, many authors use graphs as the definition of a function. Using this definition of \"function\", the above function\n\nF\n\ncan be defined as:\n\nF~:=~\\left\\{\\left(1,a1\\right),\\ldots,\\left(n,an\\right)\\right\\}.\n\n### Tuples as nested ordered pairs\n\nAnother way of modeling tuples in Set Theory is as nested ordered pairs. This approach assumes that the notion of ordered pair has already been defined.\n\n1. The 0-tuple (i.e. the empty tuple) is represented by the empty set\n\n\\emptyset\n\n.\n1. An -tuple, with, can be defined as an ordered pair of its first entry and an -tuple (which contains the remaining entries when :\n\n(a1,a2,a3,\\ldots,an)=(a1,(a2,a3,\\ldots,an))\n\nThis definition can be applied recursively to the -tuple:\n\n(a1,a2,a3,\\ldots,an)=(a1,(a2,(a3,(\\ldots,(an,\\emptyset)\\ldots))))\n\nThus, for example:\n\n\\begin{align} (1,2,3)&=(1,(2,(3,\\emptyset)))\\\\ (1,2,3,4)&=(1,(2,(3,(4,\\emptyset))))\\\\ \\end{align}\n\nA variant of this definition starts \"peeling off\" elements from the other end:\n\n1. The 0-tuple is the empty set\n\n\\emptyset\n\n.\n1. For :\n\n(a1,a2,a3,\\ldots,an)=((a1,a2,a3,\\ldots,an-1),an)\n\nThis definition can be applied recursively:\n\n(a1,a2,a3,\\ldots,an)=((\\ldots(((\\emptyset,a1),a2),a3),\\ldots),an)\n\nThus, for example:\n\n\\begin{align} (1,2,3)&=(((\\emptyset,1),2),3)\\\\ (1,2,3,4)&=((((\\emptyset,1),2),3),4)\\\\ \\end{align}\n\n### Tuples as nested sets\n\nUsing Kuratowski's representation for an ordered pair, the second definition above can be reformulated in terms of pure set theory:\n\n1. The 0-tuple (i.e. the empty tuple) is represented by the empty set\n\n\\emptyset\n\n;\n1. Let\n\nx\n\nbe an -tuple\n\n(a1,a2,\\ldots,an)\n\n, and let\n\nxb\\equiv(a1,a2,\\ldots,an,b)\n\n. Then,\n\nxb\\equiv\\{\\{x\\},\\{x,b\\}\\}\n\n. (The right arrow,\n\nIn this formulation:\n\n\\begin{array}{lclcl} &&&=&\\emptyset\\\\ &&&&\\\\ (1)&=&1&=&\\{\\{\\},\\{,1\\}\\}\\\\ &&&=&\\{\\{\\emptyset\\},\\{\\emptyset,1\\}\\}\\\\ &&&&\\\\ (1,2)&=&(1)2&=&\\{\\{(1)\\},\\{(1),2\\}\\}\\\\ &&&=&\\{\\{\\{\\{\\emptyset\\},\\{\\emptyset,1\\}\\}\\},\\\\ &&&&\\{\\{\\{\\emptyset\\},\\{\\emptyset,1\\}\\},2\\}\\}\\\\ &&&&\\\\ (1,2,3)&=&(1,2)3&=&\\{\\{(1,2)\\},\\{(1,2),3\\}\\}\\\\ &&&=&\\{\\{\\{\\{\\{\\{\\emptyset\\},\\{\\emptyset,1\\}\\}\\},\\\\ &&&&\\{\\{\\{\\emptyset\\},\\{\\emptyset,1\\}\\},2\\}\\}\\},\\\\ &&&&\\{\\{\\{\\{\\{\\emptyset\\},\\{\\emptyset,1\\}\\}\\},\\\\ &&&&\\{\\{\\{\\emptyset\\},\\{\\emptyset,1\\}\\},2\\}\\},3\\}\\}\\\\ \\end{array}\n\n## -tuples of -sets\n\nIn discrete mathematics, especially combinatorics and finite probability theory, -tuples arise in the context of various counting problems and are treated more informally as ordered lists of length . -tuples whose entries come from a set of elements are also called arrangements with repetition, permutations of a multiset and, in some non-English literature, variations with repetition. The number of -tuples of an -set is . This follows from the combinatorial rule of product. If is a finite set of cardinality, this number is the cardinality of the -fold Cartesian power . Tuples are elements of this product set.\n\n## Type theory\n\nSee main article: Product type. In type theory, commonly used in programming languages, a tuple has a product type; this fixes not only the length, but also the underlying types of each component. Formally:\n\n(x1,x2,\\ldots,xn):T1 x T2 x \\ldots x Tn\n\nand the projections are term constructors:\n\n\\pi1(x):T1,~\\pi2(x):T2,~\\ldots,~\\pin(x):Tn\n\nThe tuple with labeled elements used in the relational model has a record type. Both of these types can be defined as simple extensions of the simply typed lambda calculus.\n\nThe notion of a tuple in type theory and that in set theory are related in the following way: If we consider the natural model of a type theory, and use the Scott brackets to indicate the semantic interpretation, then the model consists of some sets\n\nS1,S2,\\ldots,Sn\n\n(note: the use of italics here that distinguishes sets from types) such that:\n\n[[T1]]=S1,~[[T2]]=S2,~\\ldots,~[[Tn]]=Sn\n\nand the interpretation of the basic terms is:\n\n[[x1]]\\in[[T1]],~[[x2]]\\in[[T2]],~\\ldots,~[[xn]]\\in[[Tn]]\n\n.\n\nThe -tuple of type theory has the natural interpretation as an -tuple of set theory:\n\n[[(x1,x2,\\ldots,xn)]]=([[x1]],[[x2]],\\ldots,[[xn]])\n\nThe unit type has as semantic interpretation the 0-tuple."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.76203704,"math_prob":0.9931885,"size":7933,"snap":"2021-43-2021-49","text_gpt3_token_len":2533,"char_repetition_ratio":0.13608274,"word_repetition_ratio":0.0029239766,"special_character_ratio":0.31753436,"punctuation_ratio":0.19144863,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9947595,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-02T02:59:03Z\",\"WARC-Record-ID\":\"<urn:uuid:0c3502a5-6730-45fe-bf84-7057c7d30cf6>\",\"Content-Length\":\"40883\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:29365382-8e16-446d-947c-32bc5bc77dda>\",\"WARC-Concurrent-To\":\"<urn:uuid:908e946a-58ae-435a-902a-77e86fc7985a>\",\"WARC-IP-Address\":\"85.25.210.18\",\"WARC-Target-URI\":\"https://everything.explained.today/Tuple/\",\"WARC-Payload-Digest\":\"sha1:PZCAJEFQLZZPVFBLE72CTRMACIGOGDVS\",\"WARC-Block-Digest\":\"sha1:K54QZ3EDIN7WLE26P3BL7KEG2P2WZGGA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964361064.69_warc_CC-MAIN-20211202024322-20211202054322-00376.warc.gz\"}"} |
https://physics.stackexchange.com/questions/634049/mass-term-in-chiral-lagrangian-and-chiral-lagrangian | [
"# Mass term in chiral lagrangian and chiral lagrangian\n\nMesons in chiral lagrangian are described by the chiral action $$U = e^{2i\\pi^a T^a/f_\\pi}$$, $$\\begin{equation} \\mathcal{L}_{chiral} = \\frac{f_\\pi^2}{4} \\text{Tr} \\left( \\partial^\\mu U^\\dagger \\partial_\\mu U \\right) + \\frac{\\sigma}{2} \\text{Tr} \\left(MU + U^\\dagger M^\\dagger \\right) \\end{equation}$$\n\nThe mass term linear in quark masses, $$M = \\operatorname{diag} (m_u, m_d, m_s)$$ and proportional to the chiral condensate: $$\\begin{equation} \\langle \\bar{\\psi}_{-i} \\psi_{+j} \\rangle \\approx - \\sigma U_{ij}. \\end{equation}$$\n\nThis leads to the dispersion laws, $$\\begin{equation} m_\\pi^2 = \\frac{2\\sigma}{f_\\pi^2} (m_u + m_d), \\;\\;\\;\\;\\; m_\\eta^2 = \\frac{2\\sigma}{f_\\pi^2} (m_u + m_d + 4m_s)/3 \\end{equation}$$\n\nHow to explain that the coefficient of the mass term is $$\\sigma$$??\n\n• If somebody gave you the mass terms (\"dispersion laws\") you write down, from someplace, you could consistently identify their σ with the arbitrary parameter σ in the explicit breaking term of your \"chiral\" Lagrangian, right? So all you need to do is derive these \"Dashen formulas\" out of the chiral condensate formula you wrote, with its funny matrix U. May 4, 2021 at 19:54\n• May 4, 2021 at 21:40\n\nEven simpler: The QCD partition function satisfies $$\\frac{\\partial\\log Z}{\\partial m_q} = \\langle \\bar{q}q\\rangle$$ which follows directly from the form of the mass term in the QCD Lagragian. Now compute the same object in chiral perturbation theory. The ground state is $$U=1$$, and $$\\frac{\\partial\\log Z}{\\partial m_q} = -\\sigma$$\n\nI strongly suspect you could write the perfect answer to your question after looking at Georgi's Weak interaction book, or Sec 28.2.2 of M Schwartz's book, or 5.5 of TP Cheng & LF Li, or 4.1.2 of this classic review of chiral perturbation theory. I'll give you a trail map, which means all normalizations below will be suspect...\n\nAll of these references, obsessed with \"telling the truth\" do not emphasize what all pros have in the back of their mind: the basic correspondence of the SSBroken axial currents in their fundamental QCD quark reincarnation versus their pseudoscalar meson chiral lagrangian reincarnation, $$J_5^{a~~\\mu}= \\bar q \\gamma^\\mu \\gamma_5 T^a q ~~~~\\leftrightarrow ~~~~~f_\\pi \\partial^\\mu \\pi^a,$$ normally connected through the less transparent PCAC bridge, $$\\langle 0| J_5^{a~\\mu}(x)|\\pi^b(p)\\rangle=ip^\\mu f_\\pi e^{-ipx}\\delta_{ab}.$$ The divergence of this SSB current fails to vanish only because of the explicit breaking due to the quark masses, $$\\partial_\\mu J_5^{a~\\mu}= f_\\pi m_\\pi^2 \\pi^a$$, (~$$m_q \\bar q \\gamma_5 \\lambda^aq$$). The crucial link is that both reincarnations transform identically under the L and R transformations of the chiral SU(3)s.\n\nDashen's theorem, or, more conventionally, the Gell-Mann—Oakes—Renner relations connect the pseudoscalar masses of the above to the chiral condensation parameter of QCD, $$\\sigma=-\\langle \\bar q q\\rangle$$ for the three light quarks, the cube of a quarter of a GeV, $$f_\\pi^2 m_\\pi^2= \\sigma (m_u+m_d),\\\\ f_\\pi^2 m_\\eta^2= \\sigma (m_u+m_d+4m_s)/3.$$ (Your funny $$U_{ij}=\\delta_{ij}$$ in chiral condensation in QCD.$$^\\sharp$$)\n\nWhen you introduce the explicit breaking term in the chiral lagrangian, The P of PCAC, $$\\frac{\\sigma'}{2} \\text{Tr} \\left(MU + U^\\dagger M^\\dagger \\right),$$ you check the breaking term transforms like the corresponding quark mass terms in the fundamental QCD lagrangian, but, as yet, you don't know what the mystery arbitrary parameter $$\\sigma '$$ might be. Compute the quadratic terms of the pseudoscalars in the above (the linear ones vanish from the tracelessness of the Gell-Mann matrices, $$T^a=\\lambda^a/2$$).\n\nThe coefficient of the $$\\pi_3^2/f_\\pi^2$$, for example, is $$-\\sigma' (\\operatorname{Tr}M\\lambda_3^2)/2= -\\sigma'(m_u+m_d)$$, and of the $$\\eta^2/f_\\pi^2$$, is $$-\\sigma' (\\operatorname{Tr}M\\lambda_8^2)/2= -\\sigma'(m_u+m_d+4m_s)/3$$.\n\nComparing with the GOR masses, up to the flakey sign (hiding in M?), you may identify $$\\sigma= \\sigma'$$.\n\nHaving done that, one confirms that the vacuum energies of the two reincarnations, QCD and chiral lagrangian, also match, $$m_u\\langle \\bar u u\\rangle + m_d\\langle \\bar d d\\rangle + m_s\\langle \\bar s s\\rangle= \\sigma (m_u+m_d+ m_s)= \\sigma \\operatorname{Tr}M,$$ but this is not quite compelled by their symmetry structure.\n\n$$^\\sharp$$ Veery loosely, just a trailmap, eqn. (5.223) of Cheng & Li, $$m_\\pi^2 f_\\pi^2 \\propto \\langle \\pi^a| [Q_5^a,[Q^b_5,H(0)]|\\pi^b\\rangle \\propto m_q \\langle \\bar q q\\rangle,$$ no summation over flavor indices."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.77945465,"math_prob":0.999408,"size":3002,"snap":"2022-05-2022-21","text_gpt3_token_len":944,"char_repetition_ratio":0.11641094,"word_repetition_ratio":0.0,"special_character_ratio":0.29780146,"punctuation_ratio":0.10942761,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999707,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-19T21:37:05Z\",\"WARC-Record-ID\":\"<urn:uuid:4832bc0f-7038-4036-9b34-dfc539b1e962>\",\"Content-Length\":\"239301\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:507d009c-7c11-4ae8-96d2-8dc93bc8341f>\",\"WARC-Concurrent-To\":\"<urn:uuid:2da5d1ea-8e61-4b50-aa89-7d845fbc9183>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/634049/mass-term-in-chiral-lagrangian-and-chiral-lagrangian\",\"WARC-Payload-Digest\":\"sha1:RAEC5SNMYXPTLFVLRCGIERP3JA2MQUFV\",\"WARC-Block-Digest\":\"sha1:P537ZGX2UWYZ6PE2I7BGLNPX2PR4JIVG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662530066.45_warc_CC-MAIN-20220519204127-20220519234127-00612.warc.gz\"}"} |
https://gmplib.org/list-archives/gmp-devel/2003-September/000259.html | [
"# mpz_gcd\n\nPaul Zimmermann Paul.Zimmermann at loria.fr\nTue Sep 30 22:25:28 CEST 2003\n\n```\tDear gmp-developers,\n\nI have a theoretical question about mpz_gcd. It uses Sorenson's\nalgorithm, with improvements of Weber. In short, the algorithm\nis as follows:\n\n# assume a, b are both odd of n limbs\n# B = 2^32\nwhile n > 0 do\n1) find two 1-limb numbers u, v such that B^2 divides u*a + v*b\n2) a <- (u*a + v*b) / B^2 # a has now n-1 limbs\n3) find q such that B divides b + q*a\n4) b <- (b + q*a) / B # b has now n-1 limbs\nn <- n-1\nend while\n\nSteps 1) and 3) cost O(1) since they only need to consider the low\nlimbs from a and b. Step 2) can be performed by one mpn_mul_1 call\nand one mpn_addmul_1 call, both of size n. Step 4) can be performed\nby one mpn_addmul_1 call of size n too. Thus considering mpn_mul_1\nhas the same cost per limb c than mpn_addmul_1, we have 3*c*n to go\nfrom n to n-1, which gives a total cost of about 3/2*n^2.\n\nAs a comparison, the basecase multiplication (mpn_mul_basecase) costs\nc*n^2, thus we should have mpz_gcd ~ 3/2 * mpn_mul_basecase. However\nexperimental results show that the ratio is more about 2 than 3/2,\nfor example for n=5000 on my laptop:\n\n1.5 * mpn_mul_basecase took 1005ms\nmpz_gcd took 1370ms\n\nIs there an explanation for this?\n\nPaul\n\n```"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8256369,"math_prob":0.97058886,"size":1303,"snap":"2020-10-2020-16","text_gpt3_token_len":442,"char_repetition_ratio":0.09776752,"word_repetition_ratio":0.008130081,"special_character_ratio":0.32233307,"punctuation_ratio":0.089655176,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9941267,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-10T19:33:51Z\",\"WARC-Record-ID\":\"<urn:uuid:6a0ed8da-c609-4b7c-a611-b4b0c51a42d0>\",\"Content-Length\":\"3161\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e91fecc7-8e12-48c7-8e40-65639378d1a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:1a0a8ecc-278a-4cc2-bed3-a194c8f93025>\",\"WARC-IP-Address\":\"130.242.124.102\",\"WARC-Target-URI\":\"https://gmplib.org/list-archives/gmp-devel/2003-September/000259.html\",\"WARC-Payload-Digest\":\"sha1:ISNOXHHF456JNLA24ZYVLPR7TEMJQARB\",\"WARC-Block-Digest\":\"sha1:O4BM44NIEVJKPTAN3NGNNM6GYK22A3XV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370511408.40_warc_CC-MAIN-20200410173109-20200410203609-00396.warc.gz\"}"} |
https://pages.mtu.edu/~shene/COURSES/cs201/NOTES/chap05/INT-out.html | [
"# INTEGER Output: The I Descriptor",
null,
"The Iw and Iw.m descriptors are for INTEGER output. The general form of these descriptors are as follows:\n\nrIw and rIw.m\n\nThe meaning of r, w and m are:\n\n• I is for INTEGER\n• w is the width of field, which indicates that an integer should be printed with w positions.\n• m indicates that at least m positions (of the w positions) must contain digits. If the number to be printed has fewer than m digits, leading 0s are filled. If the number has more than m digits, m is ignored and in this case Iw.m is equivalent to Iw.\n\nNote that w must be positive and larger than or equal to m.\n\nIt is interesting to note that m can be zero! That is, no digits should be printed. In this case, if the number to be printed is non-zero, it will be printed as if Iw is used. However, if the number is zero, all w positions will be filled with spaces!\n\n• r is the repetition indicator, which gives the number of times the edit descriptor should be repeated. For example, 3I5.3 is equivalent to I5.3, I5.3, I5.3.\n• The sign of a number also needs one position. Thus, if -234 is printed, w must be larger than or equal to 4. The sign of a positive number is not printed.\n• What if the number of positions is less than the number of digits plus the sign? In other words, what if a value of 12345 is printed with I3? Three positions are not enough to print the value of five digits. In this case, traditionally, all w positions are filled with *'s. Therefore, if you see a sequence of asterisks, you know your edit descriptor does not have enough length to print a number.",
null,
"### Examples\n\nLet us look at the following example. There are three INTEGER variables a, b and c with values 123, -123 and 123456, respectively. In the following table, the WRITE statements are shown in the left and their corresponding output, all using five positions, are shown in the right.",
null,
"• The first line uses (I5) to print the value of 123. Thus, digits 1, 2 and 3 appear at the right end and two leading spaces are filled.\n• The second line uses (I5.2) to print 123. This means of the five positions, two positions must contain digits. Since the value 123 has already had 3 digits. Therefore, .2 is ignored and the result is the same as that of the first line.\n• The third line uses (I5.4) to print 123. Since m = 4, four positions must be filled with digits. Since 123 has only three digits, a leading 0 must be inserted. Thus, in the output, the five positions contain a space, 0, 1, 2 and 3.\n• The fourth line uses I5.5 and therefore forces two 0s to be inserted.\n• The fifth line uses (I5) to print -123. Since the number is negative, a minus sign is printed. The sixth line produces the same result as that of the fifth.\n• The seventh line tells us that if leading zeros must be inserted, they are inserted between the minus sign and the number.\n• The eighth line uses (I5.5) to print -123. This is not a good edit descriptor. I5.5 means to print a number using five positions and all five positions must be filled with digits. If the number is positive, there is no problem as shown on the fourth line. However, if the number is negative, there will be no position for the minus sign. As a result, all five positions are filled with asterisks, indicating a problem has occurred.\n• The last line uses I5.2 to print 123456. The number has six digits and the number of positions to print this number is five. Thus, the given number 123456 cannot be printed completely and all five positions are filled with asterisks.\nConsider the following example. The WRITE statement has three INTEGER variables and consequently the format must also have three I edit descriptors, one for each variable.\n\n```INTEGER :: a = 3, b = -5, c = 128\n\nWRITE(*,\"(3I4.2)\") a, b, c\n```\nThe edit descriptor is 3I4.2 and the format is equivalent to (I4.2,I4.2,I4.2) because the repetition indicator is 3. Therefore, each of the three INTEGER variables is printed with I4.2. Based on the discussion earlier, the result is the following:",
null,
""
]
| [
null,
"https://pages.mtu.edu/~shene/COURSES/cs201/NOTES/chap05/GrLine.gif",
null,
"https://pages.mtu.edu/~shene/COURSES/cs201/NOTES/chap05/GrLine.gif",
null,
"https://pages.mtu.edu/~shene/COURSES/cs201/NOTES/chap05/I-out-1.jpg",
null,
"https://pages.mtu.edu/~shene/COURSES/cs201/NOTES/chap05/I-out-2.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.89000523,"math_prob":0.98439443,"size":3989,"snap":"2022-27-2022-33","text_gpt3_token_len":1036,"char_repetition_ratio":0.1540778,"word_repetition_ratio":0.024291499,"special_character_ratio":0.25946352,"punctuation_ratio":0.14982975,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99603903,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-08T14:14:20Z\",\"WARC-Record-ID\":\"<urn:uuid:d567edf5-73ed-4e81-a092-b191a67d4ca7>\",\"Content-Length\":\"6603\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7d104e36-b4cc-4ea6-b916-71ef5ce7753a>\",\"WARC-Concurrent-To\":\"<urn:uuid:89efd903-bb16-4b00-a1c4-70890394837e>\",\"WARC-IP-Address\":\"141.219.70.232\",\"WARC-Target-URI\":\"https://pages.mtu.edu/~shene/COURSES/cs201/NOTES/chap05/INT-out.html\",\"WARC-Payload-Digest\":\"sha1:EFV7EJSWXHFDCEL5FCLTNHAID5AZQ2B5\",\"WARC-Block-Digest\":\"sha1:CL4YCGLTMRKGHIZT6TD4BTG6UXYYXUXX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570827.41_warc_CC-MAIN-20220808122331-20220808152331-00315.warc.gz\"}"} |
https://web2.0calc.com/questions/help_69289 | [
"+0\n\n# help\n\n+1\n140\n1\n\nIf a and b are integers, such that a does not equal 0 and b doe not equal 0 and and the quares of a and b have at most two digits, what is the greatest possible difference between the squares of a and b\n\nJun 21, 2020\n\n#1\n+1\n\nthe solution is\n\nso the max a or b's square is 81(9x9)\n\nand the least is 1 (1x1)\n\n81-1=80\n\nJun 21, 2020"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9517357,"math_prob":0.99808115,"size":289,"snap":"2021-04-2021-17","text_gpt3_token_len":87,"char_repetition_ratio":0.15438597,"word_repetition_ratio":0.0,"special_character_ratio":0.29757786,"punctuation_ratio":0.028985508,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9600306,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-16T00:16:48Z\",\"WARC-Record-ID\":\"<urn:uuid:88d844ea-c344-4b35-a9ea-996d12afeffb>\",\"Content-Length\":\"21519\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7d03c48-b059-439a-8d50-316cea568d96>\",\"WARC-Concurrent-To\":\"<urn:uuid:eaac91ea-06d4-405a-8a97-9c829f127d9c>\",\"WARC-IP-Address\":\"168.119.149.252\",\"WARC-Target-URI\":\"https://web2.0calc.com/questions/help_69289\",\"WARC-Payload-Digest\":\"sha1:YK36ZHRQS47WWR4TF4CPJM376VZ66DA3\",\"WARC-Block-Digest\":\"sha1:VK7G2RMYEQPWULK4QQVCO7AXDQUIESMV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703497681.4_warc_CC-MAIN-20210115224908-20210116014908-00715.warc.gz\"}"} |
http://eth0.net/download/pysig/comprehensions.html | [
"# List Comprehensions\n\n• Used to create lists using for and if clauses\n• Introduced in Python 2.0\n• This feature was introduced via PEP202\n\n# For loop\n\n• Traditional imperative programming method, use a for loop\n\nCreate a list of the first ten powers of two:\n\n```powersof2 = []\nfor x in range(1, 11):\npowersof2.append(2 ** x)\n\n>>> print powersof2\n[2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]\n```\n\n# Map function\n\n• Functional programming approach\n• Apply function to each item returned by iterable\n• Syntax: map(function, iterable)\n\nPowers of two example with map and anonymous lambda function:\n\n```powersof2 = map(lambda x:2 ** x, range(1,11))\n\n>>> print powersof2\n[2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]\n```\n\n# List comprehension\n\n• Syntactic sugar to easily generate lists\n• Syntax for simple comprehension [ expression for variable in iterable ]\n\nPowers of two example with list comprehension:\n\n```powersof2 = [ 2 ** x for x in range(1, 11) ]\n>>> print powersof2\n[2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]\n```\n\n# Filtering results\n\n• for loop method\n```sentence = \"She sells seashells by the seashore\"\nwordlist = []\nfor word in sentence.split():\nif word.lower().startswith(\"s\"):\nwordlist.append(word)\n\nprint wordlist\n['She', 'sells', 'seashells', 'seashore']\n```\n• Map has a companion, filter()\n```wordlist = filter(lambda word:word.lower().startswith(\"s\"), sentence.split())\n>>> print wordlist\n['She', 'sells', 'seashells', 'seashore']\n```\n\n# Comprehension with if clause\n\n• Add a an if clause to the end of a comprehension to filter results\n• Syntax for simple comprehension [ expression for variable in iterable if condition ]\n```wordlist = [ word for word in sentence.split() if word.lower().startswith(\"s\") ]\n>>> print wordlist\n['She', 'sells', 'seashells', 'seashore']\n```\n\n# Nested comprehensions\n\n• List comprehensions can be nested\n```>>> [ letter * number for number in [1,2,3] for letter in [\"a\",\"b\",\"c\"] ]\n['a', 'b', 'c', 'aa', 'bb', 'cc', 'aaa', 'bbb', 'ccc']\n```\n• The order of the for clauses is significant\n• The above is equivalent to:\n```thelist = []\nfor number in [1,2,3]:\nfor letter in [\"a\",\"b\",\"c\"]:\nthelist.append(letter * number)\n```\n\n# More examples\n\n• Convert a list of Celsius temparatures to Fahrenheit\n\nExample from Python Course\n\n```Celsius = [39.2, 36.5, 37.3, 37.8]\nFahrenheit = [ \"%.2f\" % ((float(9)/5)*x + 32) for x in Celsius ]\nprint Fahrenheit\n['102.56', '97.70', '99.14', '100.04']\n```\n• List of all drive letters in Windows\n```driveletters = [ \"%s:\" % letter for letter in string.ascii_uppercase ]\n\ndriveletters[:len(driveletters)/2]\n['A:', 'B:', 'C:', 'D:', 'E:', 'F:', 'G:', 'H:', 'I:', 'J:', 'K:', 'L:', 'M:']\ndriveletters[len(driveletters)/2:]\n['N:', 'O:', 'P:', 'Q:', 'R:', 'S:', 'T:', 'U:', 'V:', 'W:', 'X:', 'Y:', 'Z:']\n```\n\n# More examples 2\n\n• Unique IP addresses from an apache web server log file:\n```from pprint import pprint\n\nuniqips = set( [ line.split() for line in open(\"access.log\") ] )\n\npprint(list(uniqips)[:5])\n['180.76.5.65',\n'74.125.19.39',\n'220.181.51.109',\n'123.125.71.75',\n'178.255.215.65']\n```\n\n# Related topics\n\n• For further reading\nGenerator expressions\n(introduced in Python 2.4)\nhttp://www.python.org/dev/peps/pep-0289/\nDict comprehensions\n(introduced in Python 2.7)\nhttp://www.python.org/dev/peps/pep-0274/\nSet comprehensions\n(introduced in Python 2.7)\nhttp://docs.python.org/release/3.1.5/tutorial/datastructures.html#sets\n\n# Credits\n\n• Presenter: Shawn K. O'Shea\n• Presented to: GNHLUG's PySIG\n• Presented on: July 26, 2012\n• Latest version: You can find the latest version of this presentation on my website: http://eth0.net/"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5312898,"math_prob":0.6342614,"size":2552,"snap":"2019-13-2019-22","text_gpt3_token_len":884,"char_repetition_ratio":0.12166405,"word_repetition_ratio":0.13172042,"special_character_ratio":0.43808776,"punctuation_ratio":0.2917342,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95929945,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-26T08:18:12Z\",\"WARC-Record-ID\":\"<urn:uuid:923c4fb7-d4f8-4a02-8e73-de8a7097baf2>\",\"Content-Length\":\"63853\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:483cf5d6-da00-498b-bcd7-c0fdb4b83cfa>\",\"WARC-Concurrent-To\":\"<urn:uuid:b1bd1f50-6dad-42f4-adfe-3a088e0e8932>\",\"WARC-IP-Address\":\"66.33.209.254\",\"WARC-Target-URI\":\"http://eth0.net/download/pysig/comprehensions.html\",\"WARC-Payload-Digest\":\"sha1:TUH2OTTGYCLGGZ7XA7RSLBQFBVXUXCFB\",\"WARC-Block-Digest\":\"sha1:HEU6DHUMW66NDWU2KX6SPQF2O5QHZA2H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912204885.27_warc_CC-MAIN-20190326075019-20190326101019-00340.warc.gz\"}"} |
https://www.numbers.education/9740641.html | [
"Is 9740641 a prime number? What are the divisors of 9740641?\n\n## Parity of 9 740 641\n\n9 740 641 is an odd number, because it is not evenly divisible by 2.\n\nFind out more:\n\n## Is 9 740 641 a perfect square number?\n\nA number is a perfect square (or a square number) if its square root is an integer; that is to say, it is the product of an integer with itself. Here, the square root of 9 740 641 is 3 121.\n\nTherefore, the square root of 9 740 641 is an integer, and as a consequence 9 740 641 is a perfect square.\n\nAs a consequence, 3 121 is the square root of 9 740 641.\n\n## What is the square number of 9 740 641?\n\nThe square of a number (here 9 740 641) is the result of the product of this number (9 740 641) by itself (i.e., 9 740 641 × 9 740 641); the square of 9 740 641 is sometimes called \"raising 9 740 641 to the power 2\", or \"9 740 641 squared\".\n\nThe square of 9 740 641 is 94 880 087 090 881 because 9 740 641 × 9 740 641 = 9 740 6412 = 94 880 087 090 881.\n\nAs a consequence, 9 740 641 is the square root of 94 880 087 090 881.\n\n## Number of digits of 9 740 641\n\n9 740 641 is a number with 7 digits.\n\n## What are the multiples of 9 740 641?\n\nThe multiples of 9 740 641 are all integers evenly divisible by 9 740 641, that is all numbers such that the remainder of the division by 9 740 641 is zero. There are infinitely many multiples of 9 740 641. The smallest multiples of 9 740 641 are:\n\n## Numbers near 9 740 641\n\n### Nearest numbers from 9 740 641\n\nFind out whether some integer is a prime number"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8324898,"math_prob":0.9909918,"size":378,"snap":"2021-43-2021-49","text_gpt3_token_len":120,"char_repetition_ratio":0.20053476,"word_repetition_ratio":0.0,"special_character_ratio":0.3994709,"punctuation_ratio":0.14893617,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99702036,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T21:42:28Z\",\"WARC-Record-ID\":\"<urn:uuid:4427ebab-8589-4e02-89b1-df1e79169e01>\",\"Content-Length\":\"19672\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:277b8117-1f6a-4125-be6e-2bd96801f684>\",\"WARC-Concurrent-To\":\"<urn:uuid:7eab5e2c-ccca-4b23-8fef-4c248785f66f>\",\"WARC-IP-Address\":\"213.186.33.19\",\"WARC-Target-URI\":\"https://www.numbers.education/9740641.html\",\"WARC-Payload-Digest\":\"sha1:EX2BWWUFM27EQYA5S7MI7LIBZ5CTSSBZ\",\"WARC-Block-Digest\":\"sha1:XZPGU3FZ2YPU5B6DC3VNXHUBQF7BPRO6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363418.83_warc_CC-MAIN-20211207201422-20211207231422-00338.warc.gz\"}"} |
https://studylib.net/doc/18728884/reciprocity-transposition-based-sinusoidal-pulsewidth-mod.. | [
"# Reciprocity-transposition-based sinusoidal pulsewidth modulation",
null,
"```IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 49, NO. 5, OCTOBER 2002\n1035\nReciprocity-Transposition-Based Sinusoidal\nPulsewidth Modulation for Diode-Clamped\nMultilevel Converters\nGiri Venkataramanan, Member, IEEE, and Ashish Bendre\nAbstract—Modulation strategies for multilevel inverters have\ntypically focused on synthesizing a desired set of three phase sinusoidal voltage waveforms using a fixed number of dc voltage\nlevels. This results in the average current injection and hence the\nnet power drawn from the multiple dc bus terminals to be unmatched and time varying. Subsequently, the dc-bus voltages are\nunregulated, requiring corrective control action to incorporated.\nIn this paper, the principle of reciprocity transposition in introduced as a means for modeling the dc-bus current injection simultaneously as the modulation strategy is formulated. Furthermore, a\nnew sinusoidal pulsewidth-modulation strategy that features constant and controllable current injection at the dc-bus terminals\nwhile maintaining output voltage waveform quality is introduced.\nThe proposed strategy is general enough to be applied to converters\nwith an even number of levels and an odd number of levels. Analytical results comparing the performance of the proposed modulator\nwith a conventional multiple carrier modulator are presented for\nexample multilevel converters with four and five levels. Computer\nsimulation results verifying the analytical results are presented for\na four-level converter.\nIndex Terms—Multilevel systems, power conversion, pulsewidth\nmodulation.\nI. INTRODUCTION\nI\nN RECENT YEARS, multilevel power converters have\nbecome popular in high-power three-phase ac applications\nwhere they provide various performance advantages over\nconventional two-level converters. These advantages include\nreduced voltage stresses on power semiconductor devices,\nreduced switching stresses, modular realization, and improved\nwaveform quality. In general, these converters incorporate a\ntopological structure that allows a desired output voltage to\nbe synthesized from among set of isolated or interconnected\ndistinct voltage sources. Independent of the topological structure, all modulation algorithms for these converters provide\nfor systematically selecting from among the various input\nvoltage levels to synthesize a desired output voltage waveform.\nSeveral modulation approaches that realize this function\neffectively have been proposed in the past –. Among\nthem are: stepped waveform synthesis , , programmed\nharmonic elimination , triangular-carrier-waveform-based\nschemes , real-time carrier-based harmonic elimination\nManuscript received July 4, 2001; revised December 4, 2001. Abstract published on the Internet July 15, 2002.\nThe authors are with the Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI 53706 USA (e-mail:\n[email protected]; [email protected]).\nPublisher Item Identifier 10.1109/TIE.2002.803210.\n, space-vector-based techniques , hysteretic current\ncontrol , and multilevel sigma–delta modulation . All\nof these techniques have been demonstrated to be effective in\nperforming the primary waveform synthesis function. Typically, in multilevel converters with isolated dc voltage sources\nsuch as in cascaded H-bridge converters, implementation of the\nmodulation function of these algorithms have been partitioned\nbetween various modules in a manner to equalize power drawn\nfrom the different voltage sources –. However, in\nmultilevel converters with interconnected sources such as the\ndiode-clamped multilevel converter such a partitioning between\nvarious dc voltage sources that incorporate the multiple levels\nis not straightforward , . As a result, the power drawn\nfrom the different voltage sources are varying as a function of\nresults in unsteady and at times unstable dc voltage levels ,\n. Thus, stabilization of voltage levels in the dc bus stack\nis one of the important concerns that is the focus of many\nresearch investigations –. This particularly problematic\nin singly fed systems where the dc bus stack is fed between the\nlowest voltage and the highest voltage in the stack. In general,\nin multiply fed systems that have different dc sources that feed\nvarious nodes of the dc-bus stack, the voltages are individually\nregulated. It has been shown that when bidirectional multilevel converters are connected “back-back” at the dc bus, the\ndc-bus voltages are stable and do not require any particular\nvoltage-balancing algorithms. However, this result has been\nspecific to a particular modulation algorithm, and it is not clear\nthat it holds true under all conditions. More recently, a voltage\nself-balancing topology was introduced, which has an inherent\ncapability for balancing the dc stack voltages .\nGenerally, it is well recognized and understood that the fundamental reason for voltage balancing problem is the unequal\ncurrent injection at the dc terminal of the converter that varies\nwith load current and modulation level. However, this process\nhas not been addressed systematically in the development of the\nmodulation algorithm. More often, the modulation algorithm\nis developed with the sole objective of output waveform synthesis and the dc power flow problem becomes an unintended,\nbut inevitable consequence of the power converter operation. A\ndc-voltage-balancing solution is then added on to manage the\nconsequence. It is the objective of this paper to systematically\naddress the dc current injection at the dc levels as a simultaneous problem to output waveform synthesis and not as a consequent problem. Reciprocity transposition is applied to examine\nthe dc current injection for a given modulation algorithm. This\n0278-0046/02\\$17.00 © 2002 IEEE\n1036\nIEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 49, NO. 5, OCTOBER 2002\n(a)\nFig. 1.\n(b)\nSchematic of the switching circuit of a multilevel dc to three phase ac power converter. (a) Even number of levels. (b) Odd number of levels.\napproach is well known and has been used as the basis for the development of modulation functions for matrix converters .\nThe dc current injection of a typical carrier based modulation\nalgorithm is determined using the reciprocity transposition approach. The drawbacks of the algorithm from the dc power flow\npoint of view are readily elucidated.\nConditions for obtaining desirable dc current injection properties are developed and applied to develop a modulation algorithm that has superior properties in terms of dc current injection. The results are developed for a general case of an\n(odd and even) level converter and demonstrated using a fiveand a four-level converter example. Analytical results are verified using computer simulations for the four-level converter. In\nSection II, the topology of the diode-clamped multilevel converter is briefly reviewed and the reciprocity transposition in\npower converters is used to develop input–output relationships\nfor the converter. Current injection at various nodes of the multilevel dc-bus stack is determined for a typical multiple triangular carrier modulation algorithm in Section III, along with\nsome of its limitations being identified. In Section IV, a new\nmodulation approach is proposed along with analytical results\ndescribing its properties. Computer simulation results from a\nfour-level converter example using conventional as well as the\nproposed modulation algorithm under typical operating conditions are presented in Section V. Section VI provides a summary\nof the results.\nII. DIODE-CLAMPED MULTILEVEL CONVERTER MODELING\nA. Topological Description\nA schematic of the switching circuit of a multilevel dc to three\nphase ac power converter is illustrated in Fig. 1. The illustration\nof the throws that form the switch as illustrated in Fig. 1 is an\nabstract representation of the switching structure between the\nmultilevel dc sources and the ac output. In reality, the throws\nmay be realized using any number of techniques, which result\nin the topological variety among multilevel converters. However, for purpose of describing the modulating properties of the\nconverter, the representation in Fig. 1 is sufficient. To be sure,\nthe topological mapping between the abstract representation and\nthe semiconductor realization will be provided further, so that\nthe models may be useful for studying the operation of the real\nconverter.\nThe switching circuit of the abstract representation consists\nof three single pole multithrow switches ( , , ), one each\nper phase of the output ac system. The number of throws in\nVENKATARAMANAN AND BENDRE: RECIPROCITY-TRANSPOSITION-BASED SINUSOIDAL PULSEWIDTH MODULATION\neach switch is equal to the number of levels, of the multilevel\nconverter, which may be even or odd. For notational purposes,\nand\nbe defined as\nlet the integer\nthe maximum index number of the converter for even and odd\nnumber of levels, respectively. Thus, each of the three switches\nin an even-level converter has 2 throws and those in an oddthrows. Fig. 1(a) and (b) illustrates\nlevel converter have\nthe switching circuit schematic for an even-level and an oddlevel converter, respectively.\nThe dc bus of the diode clamped -level converter is formed\nindividual dc sources, connected in series\nby a stack of\nso that their polarities are additive. Although these sources may\nbe of different magnitudes, they are assumed to be nominally\nequal to each other, resulting in a symmetric diode-clamped\nmultilevel converter. The extremities of the series string and\neach of the junction points between the sources form the\nterminal of the dc-bus stack. The poles of the switches form\nthe ac output terminals, which feed an inductive load, generally\nmodeled as a balanced set of three-phase stiff currents ( ,\n, and ). The throws (\n), where\n) are connected to 2 or\nstiff voltages\n(\n) as illustrated in the figure. For\n(\nis equal to zero (or null),\na symmetric odd-level converter,\ndoes not exist.\nand for a symmetric even-level converter\nThe throws of the switches are assumed ideal as is common in\npreliminary functional analysis of switching power converters.\nThese assumptions include: 1) negligible forward voltage drop\nof the switch throws in their on-state; 2) sufficient on-state current carrying capacity and off-state voltage blocking capacity\ncommensurate and compatible with the voltage and current ratings of the system; and 3) negligible transition periods between\nopening and closing of the switch throws that permit repetitive high-frequency switching. The voltages at the throw terminals of the switch are assumed stiff such that their variations\nduring a switching period can be neglected. Similarly, the switch\npole currents are assumed stiff such that their variations over a\nswitching period can be neglected. These assumptions essentially allow the focus to be on the power transfer process and\nthe functional features. In practical power converters, filter elements appropriately applied at the input and output ports of the\nsystem would ensure that these assumptions are valid.\nIn order to maintain continuity of the three phase currents\nconnected to the poles, at least one of the throws of connected\nto any given pole of the switch has to be closed. Furthermore,\neach current port may be connected to only one voltage terminal\nat any given instant of time. Otherwise, two stiff voltages will\nbe short circuited together, resulting in uncontrolled currents\nthrough the switch throws. As a result, no more than one throw\nconnected to any given pole may be closed at any given instant\nof time.\nif\nis closed\notherwise\nfor\n1037\nMathematically, these constraints may be expressed using\nswitching function formulations. Let\n, the switching\nto\nfunction of a throw connecting the dc bus stack voltage\nbe defined as (1), shown at the bottom of the\nthe current\npage. Then,\nfor\nand\n(2)\nFurthermore, it is clear that\nfor\nand\n(3)\nand\nfor\nor\nand and\nfor even\nfor odd\n(4)\nIn Fig. 2, the single-pole multiple-throw switch forming the\nphase of a five-level (with\n, odd) converter is illustrated.\nFig. 2(a) represents the ideal switch realization and Fig. 2(b)\nrepresents an insulated gate bipolar transistor (IGBT)/diode realization. The complex, but systematic growth of the single pole\nmultithrow switch as the number of levels increases is evident\nfrom the figure. The following observations may be made regarding the structure of the realization of a single-pole multiple-throw switch using real semiconductors.\n1) The realization of each single pole -throw switch of the\n-level converter consists of\nindividual semiconare controlductor device throws. Among these,\nlable throws that have bidirectional current conducting,\nunidirectional voltage blocking and current turn-off capability. They may be realized using IGBTs with antiparallel\ndiodes (or equivalent devices) as illustrated in the figure.\nare uncontrolled throws,\nThe remaining\nwhich may be realized using diodes. In such a realization,\nall the semiconductors carry the same voltage stress.\nseries strings\n2) The semiconductors are grouped into\n).\nthat are designated by 0 through (\n3) The th series string consists of two controllable throws,\nand\ndesignated\n, where\n2 diodes, designated as\n. The semiconductors devices in given\nseries string are connected such that all the diodes in the\nstring point in the same direction, in the order designated\n.\n4) The midpoint of the zeroth string forms the pole of the\nswitch. The extremities of the string and every alternate\njunction between the semiconductor throws of the (\nfor even\nfor odd\n(1)\n1038\nIEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 49, NO. 5, OCTOBER 2002\n(a)\nFig. 2.\n(b)\n(a) Schematic of the single-pole multiple-throw switch of the a phase of a five-level converter. (b) Its realization using the diodes and IGBTs.\n)th (or the longest) string forms the throws of the\nswitch.\n5) The strings are interconnected such that, for all\nand\n, (a) the cathodes of diodes forming\nin the th string are connected to the cathodes of\n, in the (\n)th string; (b)\nthe diodes forming\nthe anode of diodes forming\nin the th string is\nth\nconnected to the anode of the diodes forming\n)th string.\nin the (\n) of the individual semi6) The switching functions (\n) for the diode-clamped converter\nconductor throws\nrealization may be represented by the set of matrix trans-\nVENKATARAMANAN AND BENDRE: RECIPROCITY-TRANSPOSITION-BASED SINUSOIDAL PULSEWIDTH MODULATION\nformations shown in (5), at the bottom of the page, where\nis the unit step function.\nThe relationships shown in (5) may be used to compute the\nconnectivity of each semiconductor switch throws of the diode\nclamped -level converter realization, when the switching functions of the abstract -level equivalent structure illustrated in\nFig. 1 are known. As a result, further modeling and analysis\nthat are developed in the paper can be focused on the switching\n, which may be appropriately transformed to the\nfunctions\nif necessary.\nswitching functions\n1039\nWhen the repetition frequency of the switching function (or\nsimply the switching frequency) is much larger than the of the\npower frequency of the desired ac output voltages, net power\ntransfer between the dc voltages and the ac currents arises from\nthe slowly varying average value of the switching functions. The\naverage value of the switching functions may be readily represented by their time varying duty ratio functions of the particular\nthrow. From the power transfer point of view, the transfer relationships (6) and (7) may be approximated by\n(8)\nand\nB. Equivalent Circuit\nThe power transfer properties between the dc sources and the\nac output represented by (3) and (4) may be represented by using\na matrix-vector notation, in a more compact form as follows:\n(6)\n(9)\n) of the modulation matrix\nmay\nwhere the elements (\nbe determined by computing the duty ratio of the elements of\nthe matrix\nand\n(10)\n(7)\nwhere\nand\nwith being the switching period.\nRelationships (6) and (7) or (8) and (9) indicate a reciprocal\noutput transfer property similar to a transformer and\ninput\nfeature is termed “reciprocity transposition,” which is a property of all switching converters . Furthermore, based on the\nreciprocity transposition, dc/fundamental component equivalent\ncircuit of the multilevel converter system may be drawn as represented in Fig. 3. As may be seen from Fig. 3, the power interchange and waveform synthesis among the dc bus stack and the\nac current port depend on the duty ratio of various throws. The\n..\n.\n..\n.\n..\n.\n..\n.\n..\n.\n..\n.\n..\n.\n..\n..\n.\n.\n(5)\n1040\nIEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 49, NO. 5, OCTOBER 2002\ntechnique, which utilize phase shifting of the triangular carrier\nwaveforms to shape the high-frequency harmonic spectrum\nhave been proposed. However, all of the variations are equivalent from the point of view of power transfer phenomenon and\nlow-frequency characteristics. Therefore, the analysis here is\nlimited to the case where all the triangular carrier waveforms\nare in phase.\nIn order to develop an analysis technique that can be extended\nto any number of levels, a few normalizing assumptions are\nmade. The total dc-bus stack voltage, i.e., the difference between\nand\nis assumed to be 2 p.u., with the midpoint assumed\nto be the reference voltage. As a result, the input voltage vector\nfor an -level converter can be expressed by\n(11)\nThe desired three-phase modulating signals with an amplitude\nof are given by\n(12)\nIf the output current amplitude is assumed to be 1 p.u., at a power\nfactor angle of , the output current vector may be represented\nby\n(13)\nA. Modeling of Multiple-Carrier Modulation\nFig. 3. Transformer-based dc/fundamental component equivalent circuit of the\nthree-phase multilevel converter.\nobjective of any modulation function is to determine the duty\n, or in general the elements of\nratio of various throws,\nso that any desired power transfer\nthe modulation matrix\nobjectives may be fulfilled. The equivalent circuit may be conveniently utilized to study and develop appropriate modulation\nstrategies for the multilevel converter.\nIII. ANALYSIS OF MULTIPLE-CARRIER-BASED MODULATION\nSeveral different modulation techniques for multilevel converters have been proposed with a primary goal of synthesizing\na desired set of ac voltage waveforms and a secondary goal\nfor shaping the harmonic spectrum of the output voltage\nwaveforms. Among these, carrier based modulation schemes\nhave been studied quite extensively from the point of view of\nharmonic characterization . Several variations of the basic\nMultiple carrier modulation technique for an -level contriangular waveforms with identical\nverter consists of\npeak–peak values as illustrated in Fig. 4. The peak–peak value\n. Each carrier waveform has a disof each waveform is\ntinct dc bias level such that the excursion of all the waveforms\ntogether fit the vertical span between 1 and 1 perfectly and\nnone of their excursions overlap each other. Thus, each waveform spans the voltage difference between two adjacent levels of\nthe dc-bus stack. Furthermore, the waveforms divide the vertical\n), each one correspace into zones (\nsponding to a distinct throw of the single-pole multiple-throw\nswitch. The throws of the switch are operated such that when\nthe modulating function of the particular phase is in a particular zone, the throw corresponding that zone is turned on. For\nis in\ninstance, at the time denoted by the arrow in Fig. 4,\nand throw\nwill be turned on.\nzone\nB. Matrix Description of Multiple-Carrier Modulation\nThe modulation technique described above may be\nmodeled mathematically to the extent that the modulation\nfunctions for each throw of the switch can be determined\nfrom the value of the desired output voltage reference\nwaveform. For this purpose, the interval [ 1 1] is diwindows. The windows are labeled\nvided into\n),\n(\nis bounded in the direction away\nsuch that the window\nfrom zero by and toward zero by . A membership function\nfor the modulating signal corresponding to each window can be\ndefined as being unity when the signal occupies that window\nand zero otherwise.\nVENKATARAMANAN AND BENDRE: RECIPROCITY-TRANSPOSITION-BASED SINUSOIDAL PULSEWIDTH MODULATION\nFig. 4.\n1041\nIllustration of multiple triangular carrier waveforms and modulating waveforms for multilevel converter.\nWhen the modulator is not saturated, the modulating signal\nfalls within the interval [ 1 1] and, hence, will have\nits membership function corresponding to one window unity and\nevery other one zero. An example of the windows, modulating\nfor a five-level\nsignal and its membership function for\nconverter is illustrated in Fig. 5.\nUsing this definition of the membership function, the duty\nratio of the th throw of the th switch is given by\n(14)\nAn example of the modulation function of Throw 1 for a\nfour-level converter is illustrated in Fig. 6. In the case of this\nmodulation strategy, the three-phase output voltage waveforms\nhave an amplitude of , which also becomes the modulation\nindex.\nC. Average Output Voltage and Input Current Waveforms\nUsing the model for the modulation functions as developed\nin the above discussion, the output voltages and input currents\nof diode-clamped converter with any number of levels can be\nreadily determined, given the modulation index and the power\nfactor of the output using (8) and (9).\nThe average input current waveforms for a five level converter\nfor a modulation index of 0.75, at a power factor of 0.8 is illustrated in Fig. 7. The horizontal axis covers one period of the\noutput waveforms.\nIt is clear from the figure that the input currents contain a large\namount of low-frequency harmonics and this is one of the major\nsources of voltage variations of the dc-bus levels, especially at\nlow output frequencies. It is also clear that under the chosen\noperating conditions, the average current drawn from the outer\nnodes of the dc-bus stack is lower than that from the inner nodes\nof the dc-bus stack. Moreover, there is a large amount of third\nharmonic current injected into the midpoint of the dc-bus stack.\nThe nonzero nodes also carry a large amount of third-harmonic\nSince the average current from each node can be calculated\nusing the proposed approach, the average net power injection\nat each level can be readily determined. In singly fed systems,\nit is quite clear from the above waveforms that the node voltages would gradually collapse, resulting in a two level converter. However, in multifed systems where each of the nodes\nof the multilevel bus is fed from an active source, this approach\ncan be used to suitably size the power sources for each of the\n1042\nIEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 49, NO. 5, OCTOBER 2002\n(a)\nFig. 7. Average input current waveforms using multiple carrier modulation at\na modulation index of 0.75 and power factor of 0.8 for a five-level converter.\n(b)\n(a)\nFig. 5. (a) Carrier windows and modulating waveform for a five-level\nconverter with a modulation index of 0.75. (b) Window membership function\nfor\nfor the waveform.\nW\n(b)\nFig. 8. (a) Average input currents and (b) total input and output power\nwaveforms using multiple carrier modulation at a modulation index of 0.53\nand power factor of 1 for a four-level converter.\nFig. 6. Example modulation function and modulating waveform for Throw 1\nfor the five-level converter.\nlevels under different operating conditions. In a back-to-back\nsystem, the source converter and load converter would feed each\nother, albeit at different frequencies and provide net power balance. Furthermore, the harmonic current injection into the nodes\nunder different operating conditions may also be determined,\nthereby enabling appropriate choice of capacitor elements for\nthe bus stack.\nAverage input currents for a four-level converter are illustrated in Fig. 8, along with total input and total output power\nwaveforms. It is clear that although the individual input current\nwaveforms are rich in harmonics, the total power drawn from\nthe dc-bus stack remains constant, since the output power of a\nbalanced three-phase system is constant. This feature provides\nthe main motivation for the modulation strategy proposed further in the following section.\nVENKATARAMANAN AND BENDRE: RECIPROCITY-TRANSPOSITION-BASED SINUSOIDAL PULSEWIDTH MODULATION\n1043\nIV. RECIPROCITY-TRANSPOSITION-BASED MODULATION\ndraw equal amounts of current from each of the nodes of the\ndc-bus stack. This would minimize ripple currents in the capacitors. Moreover, if each of the nodes of the dc-bus stack are fed\nfrom a different power converter, it would be highly desirable to\nhave the load at the nodes draw constant currents. Furthermore,\nthe currents injected from the top half of the nodes of the dc-bus\nstack would be positive and the bottom half of the dc-bus stack\nwould be negative for net power flow from the dc bus to the ac\noutput. The direction of currents would reverse under ac-to-dc\nnet power flow conditions.\nThe load currents drawn from the ac port of the converter\nwould nominally form a balanced three-phase ac system.\nRewriting (4) using (10) and (13), the dc current injection into\nthe th node of the converter may be calculated in terms of the\nload current and modulation functions as\n(15)\n,\nand\nFrom (15), it can be concluded that if\nhave a common term and/or if they form a balanced three-phase\nwill reduce to\nset at frequency , then the average value of\na constant value. The following choice of modulation function\nrealizes exactly that, while also maintaining the desired output\nvoltage\n(16)\ndefined as the “sharing function” of a given node,\nHere,\ndetermines the share of the current that will be drawn from the\nth node of the dc-bus stack. Typically, the current drawn from\nthe midpoint of the dc-bus stack or the null node should be zero,\nsince that node is not capable of delivering net power under symmay be chosen\nmetric ac output conditions. As a result,\nto be zero, in case the converter has odd number of levels.\nIn a symmetric converter, the number of nodes with positive\nvoltages is equal to the number of nodes with negative voltages.\nFor net power flow from the dc ports to ac ports, the sum of\ncurrents drawn from the positive nodes will be equal to the sum\nof currents sunk into the negative nodes, since we desire zero\ncurrent into the null node in case it is present. Therefore, the\nsum of all the sharing functions for a given converter should\nequal 2.\nThe total current drawn from the positive nodes may be split\nfrom among all the positive nodes. Furthermore, for symmetrical operating conditions, the ratio in which the currents are\nsplit among the positive nodes would have to be identical to\nthe ratio in which they are split among the negative nodes. For\ninstance, it may be desirable to source/sink 50% of the current from outer most pair of terminals and 25% each from the\ninner two pairs of terminals in a six-level converter. In that case,\n,\n, and\n,\nwould be chosen to be 0.5, 0.25, and 0.25 respectively. If it is\nFig. 9. Average input current waveforms using reciprocity-transposition-based modulation at an effective modulation index of 0.75 and power\nfactor of 0.8 for a five-level converter and current-sharing function 1:1.\ndesired to draw equal amount of currents, all the sharing functions would be chosen to be 1/3.\nIt may also be observed from these figures that the variable\ncontrols the amount of the current injected into or drawn\nout of a given node of the dc bus. Therefore, it may actually be\nused as a control input to regulate the particular node voltage of\nthe dc-bus stack should a need arise. This feature can be conveniently incorporated into a scheme for maintaining dc-bus regulation as is common with other approaches to modulation, while\nsimultaneously eliminating low-frequency ac components in the\ndc-bus currents.\nThe peak output voltage ( ) that is achievable from the converter is a function of the amplitude of the modulating signals\nand the sharing functions. It can be shown that\n(17)\nThus, the modulation index of the system under the proposed\nstrategy for a given is reduced by the factor equal to the summation term in (17). For instance, the peak output voltage avail) from a four-level converter (with voltage levels\nable (at\n1]) that uses equal current sharing between\n[1 .333 .333\ninner and outer nodes (\n) becomes 0.667. This is expected, because, the proposed\nmodulation strategy necessarily utilizes all the available voltage\nlevels to synthesize the desired output and, hence, compromises\non modulation efficiency. However, by choosing a different set\nof sharing functions, for instance, with (\n&\n), one is able to restore the maximum\nvoltage attainable from the system to be unity. Thus, through\nprudent choice of sharing functions at any desired modulation\nindex, it is possible to optimize system performance objectives.\nFig. 9 illustrates the input current waveforms for a five-level\nconverter, at operating conditions identical those in Fig. 7.\nAs may be observed, although the output voltage of the two\ncases are identical, the average input currents are now constant\nand equal. The current injection into the null node is zero. In\nFig. 10, the input current waveforms are repeated for a 2:1\n1044\nIEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 49, NO. 5, OCTOBER 2002\n(a)\nFig. 10. Average input current waveforms using waveforms using\nreciprocity-transposition-based modulation at an effective modulation index\nof 0.75 and power factor of 0.8 for a five-level converter and current-sharing\nfunction 2:1.\ncurrent-sharing ratio between the outer nodes and the inner\nnodes, while maintaining current injection into the null node\nzero and the output voltage at the same amplitude.\nIt is clear that the sharing functions can be effectively used\nto steer current into or away from every node without compromising the quality of the desired average output voltage waveforms. However, depending on the sharing functions used, the\n, will have to be\namplitude ( ) of the modulating signals,\nmodified in comparison with the conventional strategy to account for the application of additional voltage levels in order to\nsynthesize the same output voltage level.\nIn singly fed systems, under steady-state operating conditions, the sharing functions for the inner nodes would be zero\nand the outermost nodes would be unity. However, the sharing\nfunctions may be used actively to maintain the intermediate\nvoltage levels at their appropriate quiescent conditions. Otherwise, the voltage stress across the power semiconductor throws\nwould exceed their design limits, which would compromise the\nmost desirable feature of the diode clamped multilevel converter.\nIn multifed systems, the voltage levels at the intermediate\nnodes of the dc bus would be fed from independent dc power\nsources or from a similar multilevel converter. In such the\ncase, with the proposed modulation approach, the design of\nthe replenishing sources becomes simple because of controlled\nnot carry any low frequency current injection and, hence, could\nbe of a smaller value. This is particularly significant for motor\ndrive applications, where the output frequencies may be low,\nand large capacitors will be needed to limit the voltage ripple\nin the case of conventional modulation techniques.\nFig. 11 shows the input currents drawn from the various\nnodes for a four-level converter. The instantaneous total input\nand output power waveforms are also plotted. They may be\ncompared with the results from Fig. 8 using a conventional\nmodulator. These results illustrate the efficacy of the proposed\nmodulation algorithm in suitably controlling the dc-bus power\nflow in both even- and odd-level converters.\n(b)\nFig. 11. (a) Average input currents and (b) total input and output power\nwaveforms using reciprocity-transposition modulation at a modulation index\nof 0.53 and power factor of 1 for a four-level converter.\nV. SIMULATION RESULTS\nA detailed circuit model of the diode-clamped four-level converter was simulated using commercial power conversion circuit\nsimulation software (PSIM) . The model was used to implement the modulation algorithm presented here as well as the\nconventional modulation algorithm using multiple carrier waveforms. The switching frequency was chosen to be 1 kHz in order\nto better illustrate the switching features of the waveforms in\ntraces.\nA. Conventional Multiple-Carrier Modulator\nFig. 12 shows the traces of phase voltage ( – ) using a multiple carrier modulator for a modulation index of 53% at unity\npower factor. In Fig. 12(d) and (e), the Fourier spectra of the\ndc-bus node currents computed using discrete Fourier transform\n(DFT) are shown. In addition to the switching frequency harmonics that occur at the sidebands of multiples of the switching\nfrequency, a dominant amount of low-frequency harmonics is\nreadily apparent from the Fourier spectrum. Furthermore, the\nlarger dc current drawn from the inner nodes is also evident from\nthe spectrum.\nB. Reciprocity-Transposition-Based Modulator\nFig. 13 shows similar traces of phase voltage ( – ) using a\nreciprocity-transposition modulator for a modulation index of\n53% at unity power factor. In Fig. 13(d) and (e), the Fourier\nspectra of the dc-bus currents are shown. The absence of lowfrequency harmonics and equal value of dc components of the\ndc-bus currents are readily apparent from the Fourier spectra.\nVENKATARAMANAN AND BENDRE: RECIPROCITY-TRANSPOSITION-BASED SINUSOIDAL PULSEWIDTH MODULATION\n(a)\n1045\n(b)\n(c)\n(d)\n(e)\nFig. 12. (a)–(c) Simulation waveforms using multiple carrier modulation. (d) Three-phase output voltages, Fourier spectrum of inner node input current. (e)\nFourier spectrum of outer node input current. Modulation index of 0. 53 and power factor of 1.\nThe dominant harmonics occur at the sidebands of multiples\nof the switching frequency. The simulation results thus confirm\nthe modeling approach and the analytical results that have been\npresented.\nThe output voltage waveforms shown in Figs. 12(a)–(c) and\n13(a)–(c) also indicate certain disadvantages of the proposed\napproach: 1) there are a higher number of switching events; 2)\nthere are large phase voltage variations; and 3) there are large\ncommon-mode voltage variations in the output voltage. These\nissues will have to be carefully weighed against the advantages\noffered by the proposed approach in a given application.\nVI. CONCLUSIONS\nThis paper has presented the application of reciprocity transposition as a systematic technique for studying the effect of\n1046\nIEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 49, NO. 5, OCTOBER 2002\n(a)\n(b)\n(c)\n(d)\n(e)\nFig. 13. (a)–(c) Simulation waveforms using reciprocity-transpsition-based modulation. (d) Fourier spectrum of inner node input current. (e) Fourier spectrum\nof outer node input current. Modulation index 0.53 and power factor 1.\n=\n=\nmodulation strategy on the current injection at the dc-bus terminals of a multilevel converter. Generally, most modulation\nstrategies for multilevel converters have focused on the output\nvoltage synthesis exclusively. The concomitant dc current injection results in large low-frequency variation in the stack voltages and often need corrective controllers to be incorporated.\nA simple but efficient technique to evaluate steady state current injection into the nodes of the dc-bus stack was presented\nhere. The technique can be used for sizing the capacitors and incorporate appropriate power sources that feed into the different\nnodes of the dc-bus stack to maintain power balance. Although\nreciprocity transposition was presented for a carrier based modulation strategy, it can be extended to evaluate the performance\nof the system under any desired modulation strategy, once the\nmodulation functions are determined.\nA modulation strategy was proposed in the paper that eliminates low-frequency ripple current injection into the nodes of\nthe dc bus stack under balanced operating conditions. The average currents drawn from each of the nodes of the dc-bus stack\nare constant and may be made equal. The currents are predeter-\nVENKATARAMANAN AND BENDRE: RECIPROCITY-TRANSPOSITION-BASED SINUSOIDAL PULSEWIDTH MODULATION\nmined by the choice of the modulation function and, hence, can\nbe used for sizing energy replenishing power sources connected\nto the dc-bus stack. Moreover, the share of the current drawn\nfrom each of the nodes can be varied and, hence, can be used as\na control handle to raise or lower any particular node voltage of\nthe dc-bus stack. The strategy is equally applicable to converters\nwith an even and an odd number of levels.\nAnalytical results from the application of the proposed technique to example converters with four and five levels have been\npresented. The results were verified using computer simulations\non a four-level converter.\nReciprocity transposition as an analytical and modeling tool\nfor study of multilevel converters has not been explored extensively with switching power converters. This paper has attempted to apply this principle in the field of multilevel converters and better manage input power flow. It is expected the\napplication of this valuable tool will lead to further developments and yield even better modulation techniques.\nREFERENCES\n N. P. Schibli, T. Nguyen, and A. C. Rufer, “A three-phase multilevel converter for high-power induction motors,” IEEE Trans. Power Electron.,\nvol. 13, pp. 978–986, Sept. 1998.\n Y. Chen, B. Mwinyiwiwa, Z. Wolanski, and O. Boon-Teck, “Regulating\nand equalizing DC capacitance voltages in multilevel STATCOM,”\nIEEE Trans. Power Delivery, vol. 12, pp. 901–907, Apr. 1997.\n J. S. Lai and F. Z. Peng, “Multilevel converters—A new breed of\npower converters,” in Conf. Rec. IEEE-IAS Annu. Meeting, 1995, pp.\n2348–2356.\n M. Marchesoni, M. Mazzucchelli, and S. Tenconi, “A nonconventional\npower converter for plasma stabilization,” in Proc. IEEE PESC’88,\n1988, pp. 122–129.\n R. H. Osman, “A medium-voltage drive utilizing series-cell multilevel\ntopology for outstanding power quality,” in Conf. Rec. IEEE-IAS Annu.\nMeeting, vol. 4, 1999, pp. 2662–2669.\n W. A. Hill and C. D. Harbourt, “Performance of medium voltage\nmultilevel inverters,” in Conf. Rec. IEEE-IAS Annu. Meeting, 1999, pp.\n1186–1192.\n P. Hammond, “A new approach to enhance power quality for medium\nvoltage drives,” in Proc. IEEE-IAS PCIC Tech. Conf., 1995, pp.\n231–235.\n M. D. Manjrekar, “Topologies, analysis, controls and generalization in\nH-Bridge multilevel power conversion,” Ph.D. dissertation, Dept. Elect.\nComput. Eng., Univ. Wisconsin, Madison, 1999.\n G. Sinha and T. A. Lipo, “A four level rectifier-inverter system for drive\napplications,” in Conf. Rec. IEEE-IAS Annu. Meeting, vol. 2, 1996, pp.\n980–987.\n Q. Jiang and T. A. Lipo, “Switching angles and DC link voltages optimization for multilevel cascade inverters,” in Proc. IEEE PEDES’98,\n1998, pp. 56–61.\n S. K. Biswas and B. Basak, “Stepped wave synthesis from preprogrammed PWM inverters with a common DC-DC converter supply,” in\nProc. IEEE PEDES’96, 1996, pp. 161–167.\n S. Sirisukprasert, J.-S. Lai, and T.-H. Liu, “Optimum harmonic reduction\nwith a wide range of modulation indexes for multilevel converters,” in\nConf. Rec. IEEE-IAS Annu. Meeting, vol. 4, 2000, pp. 2094–2099.\n G. Carrara, S. Gardella, M. Marchesconi, R. Salutari, and G. Schiutto,\n“A new multilevel PWM method: A theoretical analysis,” in Proc. IEEE\nPESC’90, 1990, pp. 363–371.\n V. G. Agelidis and M. Calais, “Application specific harmonic performance evaluation of multicarrier PWM techniques,” in Proc. IEEE\nPESC’98, vol. 1, 1998, pp. 172–178.\n1047\n N. Celanovic and D. Boroyevich, “A fast space-vector modulation algorithm for multilevel three-phase converters,” IEEE Trans. Ind. Applicat.,\nvol. 37, pp. 637–641, Mar./Apr. 2001.\n T. Ishida, K. Matsuse, K. Sasagawa, and L. Huang, “Fundamental characteristics of a five-level double converter for induction motor drive,” in\nConf. Rec. IEEE-IAS Annu. Meeting, vol. 4, 2000, pp. 2189–2196.\n M. Manjrekar and G. Venkataramanan, “Advanced topologies and modulation strategies for multilevel inverters,” in Proc. IEEE PESC’96, vol.\n2, Baveno, Italy, 1996, pp. 1013–1018.\n L. M. Tolbert, F. Zheng Peng, and T. G. Habetler, “Multilevel converters\nfor large electric drives,” IEEE Trans. Ind. Applicat., vol. 35, pp. 36–44,\nJan./Feb. 1999.\n, “A multilevel converter-based universal power conditioner,” IEEE\n\nTrans. Ind. Applicat., vol. 36, pp. 596–603, Mar./Apr. 2000.\n J. Rodriguez, L. Moran, A. Gonzalez, and C. Silva, “High voltage multilevel converter with regeneration capability,” in Proc. IEEE PESC’99,\nvol. 2, 1999, pp. 1077–1082.\n Y. Chen, B. Mwinyiwiwa, Z. Wolanski, and O. Boon-Teck, “Unified power flow controller (UPFC) based on chopper stabilized\ndiode-clamped multilevel converters,” IEEE Trans. Power Electron.,\nvol. 15, pp. 258–267, Mar. 2000.\n Y. Chen and O. Boon-Teck, “STATCOM based on multimodules of\nmultilevel converters under multiple regulation feedback control,”\nIEEE Trans. Power Electron., vol. 14, pp. 959–965, Sept. 1999.\n F. Zheng Peng, J.-S. Lai, J. McKeever, and J. VanCoevering, “A multilevel voltage-source converter system with balanced DC voltages,” in\nProc. IEEE PESC’95, vol. 2, 1995, pp. 1144–1150.\n F. Zheng Peng, “A generalized multilevel inverter topology with self\nvoltage balancing,” IEEE Trans. Ind. Applicat., vol. 37, pp. 611–618,\nMar./Apr. 2001.\n M. Ventrurini and A. Alesina, “The generalized transformer: A new\nbidirectional sinusoidal waveform frequency converter with continuously adjustable input power factor,” in Proc. IEEE PESC’80, 1980,\npp. 242–252.\n PSIM Users Manual, Version 5.0, Powersim, Inc., Noth Andover, MA,\n2001.\nGiri Venkataramanan (S’86–M’93) studied\nelectrical engineering at the Government College of\nTechnology, Coimbatore, India, California Institute\nof Technology, Pasadena, and the University of\nAfter teaching electrical engineering at Montana\nState University, Bozeman, he returned to the University of Wisconsin, Madison, as a faculty member\nin 1999, where he continues to direct research in various areas of electronic power conversion as an Associate Director of the Wisconsin Electric Machines\nand Power Electronics Consortium (WEMPEC). He is the holder of four U.S.\npatents and has authored a number of published technical papers.\nAshish Bendre received the B.Tech. degree in electrical engineering in 1990 from Indian Institute of\nTechnology, Bombay, India, and the M.S.E.E. degree\nin 1992 from the University of Wisconsin, Madison,\nwhere he is currently working toward the Ph.D. degree in electrical engineering.\nHis primary areas of interest are multilevel converters and dc–dc converters. He has more than seven\nyears of design and product development experience\nin industry, primarily at Pillar Technologies and Soft\nSwitching Technologies.\n```"
]
| [
null,
"https://s2.studylib.net/store/data/018728884_1-83ba7cee4d7530cfca87c34db1f8bbd0.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.89428777,"math_prob":0.9202032,"size":43214,"snap":"2019-51-2020-05","text_gpt3_token_len":9486,"char_repetition_ratio":0.1777135,"word_repetition_ratio":0.07376437,"special_character_ratio":0.21305595,"punctuation_ratio":0.13267998,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.97115177,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-10T00:37:52Z\",\"WARC-Record-ID\":\"<urn:uuid:0d65c633-fcc6-4c0d-b942-11deb96a3ab6>\",\"Content-Length\":\"138041\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:76c6071f-d93f-426c-999e-7027ee410bf8>\",\"WARC-Concurrent-To\":\"<urn:uuid:75ab12a3-a10b-4abb-83fb-5b2a5f618425>\",\"WARC-IP-Address\":\"104.24.125.188\",\"WARC-Target-URI\":\"https://studylib.net/doc/18728884/reciprocity-transposition-based-sinusoidal-pulsewidth-mod..\",\"WARC-Payload-Digest\":\"sha1:N2FK3MWEOLTKEVKISNNBAN3K4G6GQGMF\",\"WARC-Block-Digest\":\"sha1:M3N2PO6EYN6J2EO2SIKJPKQYJ2OFCCPL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540525598.55_warc_CC-MAIN-20191209225803-20191210013803-00182.warc.gz\"}"} |
https://mathoverflow.net/questions/120875/ring-with-three-binary-operations/120900 | [
"# Ring with three binary operations\n\nA rather precocious student studying abstract algebra with me asked the following question: Are there interesting rings where there are not just two but three binary operations along with some appropriate distributivity properties?\n\n• For example, there are dendriform algebras: math.tamu.edu/~maguiar/depaul.pdf loic.foissy.free.fr/pageperso/article5.pdf . These have four binary operations, but one is the sum of two of the others and can be left out. Many algebras with complicated product structures (\"complicated\" meaning something like \"the product of two simple things can be a sum of many simple things, rather than one single simple thing\") are actually dendriform algebras, and the $\\succ$ and $\\prec$ operations simplify proofs of their properties (due to having simpler recursions). – darij grinberg Feb 5 '13 at 16:47\n• Also, the ring $\\mathbf{Symm}$ of symmetric polynomials (in infinitely many variables) over $\\mathbb Z$ has at least four operations: addition, multiplication, \"second multiplication\" and plethysm. I don't know how well this generalizes (I fear not too well). – darij grinberg Feb 5 '13 at 16:49\n• Darij, thanks for your comments (which should be answers)! – Deane Yang Feb 5 '13 at 16:56\n• @darij You left out \"second plethysm\", usually called \"inner plethysm\" – Bruce Westbury Feb 5 '13 at 17:07\n• @Darij: These are answers, not comments. – Martin Brandenburg Feb 5 '13 at 19:16\n\nThe real numbers $\\mathbb{R}$ with the following three binary operations:\n\n• The maximum: $(x,y)\\mapsto\\max\\{x,y\\}$.\n\n• The sum: $(x,y)\\mapsto x+y$.\n\n• The product: $(x,y)\\mapsto x\\cdot y$.\n\nThe maximum is to the sum what the sum is to the product, except for the fact that the maximum does not have inverses, nor a unit, i.e. $(\\mathbb{R},\\max,+)$ is a semiring, while $(\\mathbb{R},+,\\cdot)$ is a ring.\n\n• Nice. Thanks for such a simple yet useful answer. – Deane Yang Feb 5 '13 at 22:39\n• You're welcome. That semiring structure on the reals is the motto of tropical geometry. There are very nice mathematics to learn starting with this observation. Some basic concepts are actually easy for young students. (Disclaimer: I'm not an expert in this topic). – Fernando Muro Feb 5 '13 at 23:16\n• Or the minimum function. Or any subring of $\\mathbb{R}$ such as $\\mathbb{Q}$, $\\mathbb{Z}$, $\\mathbb{Q(\\sqrt{2})}$. – user30304 Feb 6 '13 at 13:28\n\nAn important example is the notion of a Gerstenhaber algebra. It is simultaneously a commutative ring and a Lie algebra, such that the product and bracket satisfy the Poisson identity, except all these things need to be understood in a differential graded sense.\n\n• Isn't this just a \"twisted\" lie object in the symmetric monoidal category of $\\mathbb{Z}$-graded commutative algebras? – Martin Brandenburg Feb 5 '13 at 19:19\n• I had never seen anyone exclaim beautiful at being presented with Gerstenhaber algebras :-) – Mariano Suárez-Álvarez Feb 5 '13 at 19:40\n• @Martin: one glitch is that the underlying Lie algebra is supposed to be graded with respect to a different grading than the associative underlying algebra, obtained from the other one by a shift of $1$. – Mariano Suárez-Álvarez Feb 5 '13 at 19:42\n\nVery much in the spirit of Dan's answer, but more elementary, are Poisson algebras, associative algebras with Lie brackets that act as derivations.\n\n• Thanks! I should have thought of this or at least the example of functions on a symplectic manifold. – Deane Yang Feb 5 '13 at 22:43\n\nAn interesting non-example is the Eckmann-Hilton theorem, stating that if a set is endowed with two associative unital binary operations that \"commute\" (i.e., if I get it correctly, each multiplication operator $a\\mapsto a*b$ is a homomorphism with respect to the other multiplication) then the two operations are the same.\n\nThis would exclude the existence of rings $(R, +,*,\\circ)$ with two genuinely different \"commuting\" multiplications.\n\nFor experimenting you could use alg, a program which computes all finite models of a given theory. The best thing may be to pass the ball back to your student and ask him to use alg to find some interesting structures.\n\nFor example, suppse we want a structure $(R, 0, +, -, \\times, \\&)$ such that $(R, 0, +, -)$ is a commutative group, $\\times$ and $\\&$ are associative, $\\times$ and $\\&$ distribute over $+$ and $\\&$ distributes over $\\times$ (I am making stuff up, the point is to experiment until something interesting is found). In alg the input file would be:\n\nConstant 0.\nUnary ~.\nBinary + * &.\n\n# 0, + is a commutative group\n\nAxiom plus_commutative: x + y = y + x.\nAxiom plus_associative: (x + y) + z = x + (y + z).\nAxiom zero_neutral_left: 0 + x = x.\nAxiom zero_neutral_right: x + 0 = x.\nAxiom negative_inverse: x + ~ x = 0.\nAxiom negative_inverse: ~ x + x = 0.\nAxiom zero_inverse: ~ 0 = 0.\nAxiom inverse_involution: ~ (~ x) = x.\n\n# * and & are associative\n\nAxiom mult_associative: (x * y) * z = x * (y * z).\nAxiom and_associative: (x & y) & z = x & (y & z).\n\n# Distributivity laws\n\nAxiom mult_distr_right: (x + y) * z = x * z + y * z.\nAxiom mult_distr_left: x * (y + z) = x * y + x * z.\n\nAxiom and_distr_right: (x + y) & z = (x & z) + (y & z).\nAxiom and_distr_left: x & (y + z) = (x & y) + (x & z).\n\nAxiom mult_and_distr_right: (x * y) & z = (x & z) * (y & z).\nAxiom mult_and_distr_left: x & (y * z) = (x & y) * (x & z).\n\n\nLet us count how many of these are, up to isomorphism, of given sizes:\n\n$./alg.native --size 1-7 --count three.th size | count -----|------ 1 | 1 2 | 4 3 | 3 4 | 36 5 | 3 6 | 12 7 | 3 Check the numbers [4, 3, 36, 3, 12, 3](http://oeis.org/search?q=4,3,36,3,12,3) on-line at oeis.org We can also look at these structures, but that's the sort of thing a student should do. Here is a random one of size 4 that alg prints out when we omit --count: ~ | 0 a b c --+------------ | 0 a b c + | 0 a b c --+------------ 0 | 0 a b c a | a 0 c b b | b c 0 a c | c b a 0 * | 0 a b c --+------------ 0 | 0 0 0 0 a | 0 a 0 a b | 0 0 b b c | 0 a b c & | 0 a b c --+------------ 0 | 0 0 0 0 a | 0 0 0 0 b | 0 a b c c | 0 a b c Up to size 7 I cannot actually see any interesting ones, there are always large blocks of 0's in$\\&$. Other things should be tried out. An exponential field is a field with an additional unary operation$x\\mapsto E(x)$extending the usual idea of exponentiation. So it satisfies the usual law of exponents$E(a+b)=E(a)\\cdot E(b)$and also has$E(0)=1$. An exponential ring has an underlying ring, rather than field, and the exponentiation function is a homomorphism from the additive group of the ring to the multiplicative group of units. The example of the real exponential field$\\langle\\mathbb{R},+,\\cdot,e^x\\rangle$has been a principal focus of the research program in model theory that has lead to the theory of o-minimality. Tarski had famously proved that the theory of real-closed fields$\\langle\\mathbb{R},+,\\cdot,0,1,\\lt\\rangle$is a decidable theory, and one of the original motivating questions, still open to my knowledge, is whether the first-order theory of the real exponential field is similarly decidable. Meanwhile, the o-minimalists are making huge progress on our understanding of the structure of definable sets in these and many other similar structures. • Isn't there a connection between exponential fields and the Schanuel's conjecture? – Asaf Karagila Feb 5 '13 at 20:45 • Oh, yes, there are some deep connections. Some of this is explained on the wikipedia pages to which I link. – Joel David Hamkins Feb 5 '13 at 20:50 The claim that there are two binary operations on rings is misleading. Rings are actually equipped with countably many$n$-ary operations, one for each noncommutative polynomial in$n$variables over$\\mathbb{Z}$. These generate the morphisms in a category with finite products, the Lawvere theory of rings$T$, which is a category with the property that finite product-preserving functors$T \\to \\text{Set}$are the same thing as rings. It just happens to be the case that as a category with finite products,$T$is generated by addition and multiplication. The Lawvere theory of commutative rings is similar except that the polynomials are commutative; incidentally, it may also be regarded as the category of affine spaces over$\\mathbb{Z}$. This gives a useful perspective from which to understand other ring-like structures. For example: • commutative Banach algebras are equipped with an$n$-ary operation for each holomorphic function$\\mathbb{C}^n \\to \\mathbb{C}$. • smooth algebras like the algebras$C^{\\infty}(M)$of smooth functions on a smooth manifold are equipped with an$n$-ary operation for each smooth function$\\mathbb{R}^n \\to \\mathbb{R}$. Here is a general procedure for determining what operations are actually available to you when working with some mathematical objects. If$C$is a concrete category and$F : C \\to \\text{Set}$the forgetful functor, then one interpretation of \"$n$-ary operation\" is \"natural transformation$F^n \\to F$.\" If$C$has finite coproducts and$F$is representable by an object$a$, then by the Yoneda lemma these are the same thing as elements of$F(a \\sqcup ... \\sqcup a)$. This reproduces the obvious answers for groups, rings, etc., and when$C$is the opposite of the category of smooth manifolds and$F : M \\mapsto C^{\\infty}(M)$then we get that \"$n$-ary operation\" means element of$C^{\\infty}(\\mathbb{R}^n)$as above. • But most of those infinite operations in a ring are derived ones. Counting them is sort of a display of love for formalities... – Mariano Suárez-Álvarez Feb 5 '13 at 19:38 • No, it's a commitment to talking about mathematical objects instead of presentations of mathematical objects. Lawvere theories exist independent of a choice of generators in the same way that groups do. Some Lawvere theories (e.g. the Lawvere theory of smooth algebras) are best described all at once rather than using a presentation in the same way that some groups are. – Qiaochu Yuan Feb 5 '13 at 19:58 • Can you imagine the classification of finite simple groups done using (even the notation required to handle) all derived operations in a group? Unless «talking about mathematical objects instead of presentations of mathematical objects» serves a purpose, it is just formalities. And, sure, in some situations, it does serve a purpose. —in finding significative examples of ring-with-three-binary-operations, not so much! – Mariano Suárez-Álvarez Feb 5 '13 at 20:03 • Well, you started with «The claim that there are two binary operations on rings is misleading», which is rather difficult to misunderstand, and my point is that that is sort of backwards. All the other operations that show up when you view rings as a Lawvere theory are the result of forcing rings into a Lawvere theory —this may be useful at times (it is useful at times!) but it is just a (mostly harmless) side effect of adopting a specific the point of view. – Mariano Suárez-Álvarez Feb 5 '13 at 20:51 • I think perhaps the sense in which this answer does not address the original question is that it shows ways of making lots of$n$-ary operations, but not ones which distribute over each other, which was perhaps the crux of the original question. – Noah Stein Feb 6 '13 at 16:42 Not only are there such examples, there is a very natural way of continuing the progression that starts with addition and multiplication. Begin with sets. Then monoids are just the monoidal objects in the category of sets. That ends the story at level 1. Now consider the objects that ended the story at level 1 which are also commutative, that is, commutative monoids. Then semirings are the monoidal objects in the category of commutative monoids. Here the monoidal operation on the category of commutative monoids is the tensor product, rather than the Cartesian product, but this is as it should be because of the tensor-hom adjunction:$\\mathrm{Hom}(A\\otimes B,C)=\\mathrm{Hom}(B,\\mathrm{Hom}(A,C))$. In other words, the composition of the representable functors$\\mathrm{Hom}(B,-)$and$\\mathrm{Hom}(A,-)$is the functor represented by$A\\otimes B$. Thus for the present purposes, the tensor product for commutative monoids plays the role of the Cartesian product for sets. Now what if we try to go one step further? Can we fill in the missing entries in the following table of analogies? (Sets : Cartesian product : monoids) :: (Commutative monoids : tensor product : semirings) :: (Commutative semirings : ?? : ??) The answer is yes, but there is one more twist to the story, which is that unlike in the categories of sets and commutative monoids, in the categories of commutative semirings, representable functors$\\mathrm{Hom}(A,-)$take values all the way down in the category of sets, not in the category of semirings. This actually occurs already at the tensor product stage above, but we didn't see it because we were working with commutative monoids (i.e. modules over the semiring of natural numbers) instead of modules over more general semirings. If we let$K$be any semiring, then$\\mathrm{Hom}_K(A,-)$takes values in abelian group, not in$K$-modules. To make everything above work, we need$A$to be a$K$-$K$-bimodule. Thus what we really want is to complete the following table of analogies. (Sets : sets : Cartesian product : monoids) :: ($K$-modules :$K$-$K$-bimodules : tensor product$\\otimes_K$: semirings over$K$) :: (Commutative$L$-algebras : ?? : ?? : ??), where$L$is a fixed commutative semiring. For the first ??, we want commutative$L$-algebras$A$such that$\\mathrm{Hom}(A,-)$takes values in$L$-algebras, rather than sets. This extra structure will be the analogue of the right$K$-module structure above. So let us call it an$L$-$L$-bialgebra structure. For instance,$A$will need to have two co-operations$A\\to A\\otimes_L A$, which will induce a functorial ring structure on the objects$\\mathrm{Hom}(A,B)$. If$L$is a ring, then this is precisely the structure of a commutative$L$-algebra scheme on$\\mathrm{Spec}(A)$. Then if$A$and$B$are$L$-$L$-bialgebras, we can compose the functors$\\mathrm{Hom}(B,-)$and$\\mathrm{Hom}(A,-)$. The result is easily seen to be representable, and the representing object is denoted$A\\odot B$. (So when$L$is a ring, we are then taking two affine commutative$L$-algebra schemes over$L$, viewing them as endofunctors on the category of commutative$L$-algebras, and then composing them. This is not such a common thing to do, but the result is another affine commutative$L$-algebra scheme over$L$.) I like to call it the composition product of$A$and$B$. Then we can define a composition$L$-algebra to be a monoid object in the category of$L$-$L$-bialgebras. Thus the final line of the analogy table above is (Commutative$L$-algebras :$L$-$L$-bialgebras : composition product$\\odot_L$: composition$L$-algebras). The monoidal operation on a composition$L$-algebra is usually denoted$\\circ$and is called composition or plethysm. So the usual hierarchy of operations \"(1) addition, (2) multiplication\" extends to \"(3) plethysm\". There are many examples of composition$L$-algebras (but interestingly not too many when$L$is a ring). The most basic example is the polynomial algebra$L[x]$, where$\\circ$is given by usual composition of polynomials. The$L$-$L$-bialgebra structure is the one such that$L[x]$represents the identity functor on commutative$L$-algebras. A more interesting example is the polynomial algebra `A=$\\mathbb{C}[\\partial^0, \\partial^1,\\dots]$in infinitely many indeterminants, which we think of as all algebraic differential operators in one variable. Here$\\circ$is the usual composition of differential operators. (The$L$-$L$-bialgebra structure is determined by requiring each$\\partial^i$to be linear and to satisfy the appropriate Leibniz rule.) There are also exotic, arithmetic examples when$L$is not a$\\mathbb{Q}$-algebra. These are responsible for concepts like Witt vectors and$\\Lambda$-rings. (When$L$is a field of characteristic$0$, it is conjectured that any composition$L$-algebra can be generated by linear operators, and so all composition$L$-algebras should reduce to more familiar multilinear constructions, as with the differential operators above.) Perhaps the easiest one of these to give has already been mentioned by Darij Grinberg. It is the ring$\\Lambda=\\mathrm{Symm}$of symmetric functions in infinitely many variables. Here addition and multiplication are as usual, and$\\circ$is the operation known as plethysm in the theory of symmetric functions. It is the composition algebra whose representations are$\\Lambda$-rings and whose co-induction functor is the big Witt functor. (I haven't defined these concepts here.) This is discussed in only a few places in the literature. In order of appearance: Tall-Wraith, Bergman-Hausknecht (this deals with general cagtegories of a universal-algebraic nature), Borger-Wieland, Stacey-Whitehouse. And it seems that every paper on the subject uses different term for what I called a composition algebra above. On my web page, I have slides from a talk a gave not so long ago on these things. This isn't an answer to this exact question, but it sounds like the student may also be interested in hearing about Hopf algebras. • They might be interested in Hopf algebras (e.g., group objects in the category of coalgebras) and also Hopf rings (e.g., ring objects in the category of coalgebras). – Paul Pearson Feb 6 '13 at 14:09 Take the ring of polynomials. More generally, any ring of functions (from a ring to itself). With the functions, you have 3 operations defined: Pointwise addition: + such that f+g is the function t--> f(t) + g(t) Pointwise multiplication: . such that f.g is the function t --> f(t).g(t) Composition of functions o such that fog is the function t--> f(g(t)) This is defined for functions but in particular for polynomials on some ring R and for matrices with ring coefficients. The paper \"The Natural Chain of Binary Arithmetic Operations and Generalized Derivatives\" by M. Carroll (link) is a great paper for undergrads that demonstrates an infinite number of binary operations (defined recursively and in terms of the exponential function) on the reals where the$i$th operation distributes over the$(i-1)$th operation. A differential-geometric example of such a thing would be differential forms (with operations of addition, wedge product and differentiation). More generally, one considers DGLAs, Differential Graded Lie Algebras (with the usual caveat that the Lie bracket is not quite commutative/associative); the operations are addition, derivation and the bracket (as well as multiplication by scalars as a free bonus). The main example is, of course, differential forms on a manifold with values in a Lie algebra. One uses DGLA's to describe deformations of pretty much anything under the sun, look here.$L^2$with addition, multiplication, and convolution, with both multiplication and convolution linear with respect to addition. •$L^2(\\mathbb R)$is not closed under multiplication, and is also not closed under convolution. You should probably use the Schwarz space$\\mathit S(\\mathbb R)$instead. – André Henriques Dec 8 '15 at 23:35 • @AndréHenriques: or replace$\\mathbb R$with a compact domain. – Michael Dec 9 '15 at 0:07 • For$L^2(X)$to be closed under multiplication,$X$needs to be discrete. For$L^2(X)$to be closed under convolution,$X$needs to be compact. So the only$X\\$ that work are finite sets (by which I mean finite groups, otherwise, there is no such thing as convolution). – André Henriques Dec 9 '15 at 11:28"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8832383,"math_prob":0.9891153,"size":20190,"snap":"2020-24-2020-29","text_gpt3_token_len":5440,"char_repetition_ratio":0.13766967,"word_repetition_ratio":0.046827793,"special_character_ratio":0.26602277,"punctuation_ratio":0.12133022,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99926835,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-04T22:32:42Z\",\"WARC-Record-ID\":\"<urn:uuid:8c4690cb-7a5c-4462-8de0-1b1485ec0b8d>\",\"Content-Length\":\"246447\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:81dd08ac-1d68-477d-a69e-b49b55a70d0a>\",\"WARC-Concurrent-To\":\"<urn:uuid:7c285968-4e4c-4ab6-8040-f12a64a52931>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/120875/ring-with-three-binary-operations/120900\",\"WARC-Payload-Digest\":\"sha1:36NONMVSPBBQRLT7WNLXRCVAVVTXL2SU\",\"WARC-Block-Digest\":\"sha1:H7W2QTWZKONZGTCNPEGHD3TYJW6NMCWP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655886706.29_warc_CC-MAIN-20200704201650-20200704231650-00314.warc.gz\"}"} |
https://www.arxiv-vanity.com/papers/hep-ph/0602192/ | [
"UT-06-01\n\nMinimal Supergravity, Inflation, and All That\nM. Ibe, Izawa K.-I., Y. Shinbara, and T.T. Yanagida\n\nDepartment of Physics, University of Tokyo,\n\nTokyo 113-0033, Japan\n\nResearch Center for the Early Universe, University of Tokyo,\n\nTokyo 113-0033, Japan\n\n## 1 Introduction\n\nThe landscape of many vacua111 The vacua here have extended meaning which indicates the backgrounds in the theory (moduli) space, or the landscape. is a plausible structure in the fundamental theory of physical laws in nature. In particular, this structure is expected as one of the theoretical ingredients to understand the observed small cosmological constant . However, the anthropically allowed region of vacua in the landscape seems too large to be predictive enough in the presence of a variety of couplings. Thus, it is a challenging problem to derive further physical consequences from the landscape of vacua.\n\nThe (non-)presence of inflationary dynamics is a promising candidate as the first criterion to select realistic vacua . We can naturally expect that macroscopic universe is realized through inflation from fundamental-scale physics. Moreover, under the dynamics of inflation, mediocrity principle may prefer long-lasting inflations which result in larger-volume universes where more habitable galaxies are produced. In this respect, multiple inflations give a remarkable possibility to be considered .\n\nIn a recent article , we have pointed out that the inflationary dynamics possess a potential to select minimal supergravity as a large-cutoff theory, where the gravitational scale is smaller than the cutoff scale stemming from the fundamental theory. Such a large-cutoff supergravity naturally causes multiple slow-roll inflations, which possibly meet mediocrity principle.\n\nThe large-cutoff theory is also attractive from the viewpoint of particle-physics phenomenology: First of all, the suppression of the flavor-changing neutral currents is automatic in the large-cutoff theory, since all of the higher-dimensional operators are suppressed by the large cutoff except for the genuine gravitational interactions. In detail, the large-cutoff supergravity predicts a hierarchical spectrum of supersymmetric particles as , where is the universal soft mass for sfermions and the gaugino masses (). Thus, the current chargino mass bound suggests heavy sfermions at several TeV. Such a soft mass parameter belongs to the parabolic or hyperbolic regime allowed for a given parameter. Indeed the recent detailed analysis has confirmed that the region with large sfermion masses along the small--parameter curve continued from the focus point is consistent with the electroweak symmetry breaking . In the region of heavy sfermions and light gauginos (an order of magnitude lighter than the sfermions), the constraint from CP violation is rather weak and even order one CP-violating phases are allowed for TeV . Furthermore, the lightest supersymmetric particle can explain the dark matter density of the present universe in the above mass region for supersymmetric particles .\n\nIn this paper, we discuss a minimal new inflation model 222We suspect that multiple stages of inflation imply that the primordial inflation at the last stage tends to be a new inflation, since it seems naturally realized with a lower energy scale than that of other types of inflation. The discussion section includes comments on the case of other inflations. as an example in the framework of the large-cutoff supergravity with emphasis on baryon asymmetry generated by leptogenesis to complete a model of the large-cutoff hypothesis.333There are other new inflation models in the framework of supergravity, although these inflation models can not explain the observed spectral index in the large-cutoff hypothesis. We claim that all the phenomenological requirements from cosmology and particle physics are satisfied in a certain parameter region of the large-cutoff theory.\n\n## 2 Supergravity new inflation\n\nWe adopt a new inflation model considered in Ref.[13, 14]. As an effective field theory for an inflaton chiral superfield , the superpotential is given by\n\n W=~v2~ϕ−~gn+1~ϕn+1, (1)\n\nfor and the Kähler potential is given by\n\n K=~K∣∣~ϕ∣∣2+~k4∣∣~ϕ∣∣4+⋯, (2)\n\nwhere we have taken the unit with the reduced Planck scale GeV equal to one. The positive parameters , , and are of orders , , and , respectively, for our large-cutoff hypothesis . The tiny scale can be generated dynamically and the ellipsis denotes higher-dimensional operators which may be neglected in the following analysis.\n\nFor the canonically normalized field , the superpotential is given by \n\n W=v2ϕ−gn+1ϕn+1, (3)\n\nand the Kähler potential is given by\n\n K=|ϕ|2+k4|ϕ|4+⋯, (4)\n\nwhere we have defined\n\n ~v2=v2√~K,~g=g~Kn+12,~k=k~K2. (5)\n\nThe effective potential for the lowest component of is given by\n\n V=eK{(∂2K∂ϕ∂ϕ†)−1|DW|2−3|W|2}, (6)\n\nwhere\n\n DW=∂W∂ϕ+∂K∂ϕW. (7)\n\nThus, the potential of the inflaton field is approximately given by\n\n V(φ)≃v4−k2v4φ2−g2n2−1v2φn+g22nφ2n (8)\n\nfor the inflationary period near the origin .\n\nThe inflationary regime is determined by the slow-roll condition \n\n ϵ(φ)=12(V′(φ)V(φ))2≤1,|η(φ)|≤1, (9)\n\nwhere\n\n η(φ)=V′′(φ)V(φ). (10)\n\nFor the potential Eq.(8), we obtain\n\n ϵ(φ)≃12⎛⎜ ⎜⎝−kv4φ−gn2n2−1v2φn−1v4⎞⎟ ⎟⎠2, (11) η(φ)≃−kv4−g2n2−1n(n−1)v2φn−2v4. (12)\n\nThe slow-roll condition Eq.(9) is satisfied for where\n\n φf≃√2((1−k)v2gn(n−1))1n−2, (13)\n\nwhich yields the value of the inflaton field at the end of inflation.\n\nThe value of the inflaton corresponding to the -fold number is given by\n\n Ne≃∫φNeφfdφV(φ)V′(φ)≃∫φNeφfdφv4−kv4φ−gn2n2−1v2φn−1. (14)\n\n φn−2Ne≃kv22n2−1gn{1+k(n−2)1−keNek(n−2)−1}−1. (15)\n\nHence the spectral index of the density fluctuations is given by \n\n ns ≃ 1−6ϵ(φNe)+2η(φNe) (16) ≃ 1−2k⎡⎢ ⎢⎣1+n−1{1+k1−k(n−1)}eNek(n−2)−1⎤⎥ ⎥⎦. (17)\n\nNote that the spectral index does not depend on and explicitly. We show the dependence of the spectral index for and , [13, 14] in Fig.1.",
null,
"Figure 1: The k dependence of the spectral index ns for n=4. The red (solid) line corresponds to the e-fold number Ne=45, the green (dashed) line to Ne=50, and the blue (dash-dotted) line to Ne=55. For k=0, ns≃1−6/(2Ne+3).\n\nNow we proceed to determine the inflation scale from the density fluctuations. The amplitude of primordial density fluctuations is given by\n\n δρρ≃15√3πV32(φN0)|V′(φN0)|≃15√3πv6kv4φN0+gnv22n2−1φn−1N0, (18)\n\nwhere is the value of inflaton field at the epoch of the present-horizon exit. Thus we obtain\n\n v2n−6n−2≃√2V32(φN0)|V′(φN0)|⎡⎣kgn{1+k(n−2)1−keN0k(n−2)−1}−1⎤⎦1n−2 (19) ×⎡⎣k+k{1+k(n−2)1−keN0k(n−2)−1}−1⎤⎦. (20)\n\nOwing to the COBE normalization\n\n V32(φN0)|V′(φN0)|≃5.3×10−4, (21)\n\nthe scale is expressed as\n\n v≃1012GeV×C(k,N0)×(0.1g)1/2, (22)\n\nfor , , and , where is a function of order unity.\n\nOn the other hand, the -fold number of the present horizon is also given by\n\n N0≃67+13lnH+13lnTR≃67+13lnv2√3+13lnTR, (23)\n\nwhere denotes the Hubble scale at the horizon exit and the reheating temperature. By means of Eq.(17), (20), and (23), we can determine and from , , and . For , , , and GeV, the inflation scale is given by GeV, and the -fold number of the present horizon is given by . In Fig.2, we show the dependence of the spectral index for the reheating temperature , and GeV. We conclude that the implication of the large-cutoff hypothesis 444For instance, Eq.(5) yields and for , and . is consistent with an experimental value of the spectral index for a wide range of the reheating temperature.555 The inflaton as a massless scalar field in the de Sitter background has quantum fluctuations whose amplitude is given by . Thus the amplitude at is given by\n\nFor , and GeV, the fluctuation amplitude takes a value of order , which is much less than the mean-field value to justify the above slow-roll analysis.",
null,
"Figure 2: The k dependence of the spectral index ns for n=4. The shaded regions correspond to TR=105,107,109GeV from below, and the lower lines for g=0.1 and the upper lines for g=0.01.\n\n## 3 The gravitino mass\n\nIn the previous section, we have confirmed that the new inflation model in the large-cutoff hypothesis is consistent with the cosmological observations. In this section, we discuss the gravitino problem under such an inflationary scenario.\n\nAs considered in Ref., we assume that the positive energy of the SUSY breaking is dominantly canceled out by the negative energy at the inflaton potential minimum. Namely we impose\n\n Λ4SUSY−3|W(ϕ0)|2=0, (24)\n\nwhere is the minimum point of in Eq.(6).\n\nThen we obtain the gravitino mass as\n\n m3/2≃Λ2SUSY√3=W(ϕ0). (25)\n\nThe value of is approximately given by\n\n ϕ0≃(v2g)1n. (26)\n\nConsequently the gravitino mass is given by\n\n m3/2≃nv2n+1(v2g)1n≃9TeV×(0.1g)32. (27)\n\nThe second equality holds for , where we have used Eq.(22) and omitted the weak dependence on and .\n\nMore precisely, by means of Eq.(20) and (23), the gravitino mass can be expressed as a function of , , and , although the dependence on is very weak, as can be seen from Eq.(20) and (23). The result is shown in Fig.3. For and , the gravitino mass is larger than TeV, which may avoid the gravitino overproduction for a reheating temperature GeV .",
null,
"Figure 3: The contours of the gravitino mass for n=4 and TR=4×106GeV. The dependence on the reheating temperature is very weak.\n\nIn contrast, the sfermion soft mass is given as if no -term contributes to the SUSY breaking. Thus, TeV implies for .\n\n## 4 Reheating for baryogenesis\n\nNow we are ready to consider the baryon asymmetry in the present new inflation model with the large cutoff.\n\nWe assume the baryon asymmetry is generated by leptogenesis through non-thermal production of right-handed neutrinos, as investigated in Ref.[10, 11], which provides a numerical estimate\n\n nBs≃8.2×10−11(TR106GeV)(2mNmϕ)(mν30.05eV)1sin2βδeff. (28)\n\nHere , , and are the masses of the right-handed neutrino , the inflaton and the heaviest (active) neutrino, respectively. The phase is the effective CP phase defined in Ref. and is the ratio of the vacuum expectation value of up- and down-type Higgs bosons in the MSSM. The reheating temperature is given by\n\n TR≃(10g∗π2Γ2ϕ)14, (29)\n\nwhere is the decay width of the inflaton and is the effective number of massless degrees of freedom to be taken as numerically. Note that the inflaton mass\n\n mϕ≃nv2(v2g)−1n (30)\n\nin our new inflation model also weakly depends on the and the reheating temperature , as is the case for the gravitino mass in Eq.(27).\n\nLet us introduce the following superpotential interaction as the dominant source of the production: 666The inflaton field is also expected to decay through couplings with light fields in the Kähler potential such as . However, the decay width through these couplings is so small that we neglect such contributions.\n\n δW=h2(n−1)ϕn−1N2, (31)\n\nwhere is a positive parameter of the order of the inflaton self-coupling . 777Here, we assign the same charge for and under R-symmetry, while we assign the matter parity for and for . Hence we expect the presence of such operators as in addition to Eq.(31). We do not include such operators since the operator Eq.(31) with the smallest number of dominates the reheating and leptogenesis. The coupling Eq.(31) gives a decay width\n\n Γϕ≃|h|216πϕ2(n−2)0mϕ. (32)\n\nFrom this decay width the reheating temperature after inflation for is given by 888The cross term between and in the superpotential gives a comparable decay width. We neglect this contribution since it does not essentially affect our conclusions.\n\n TR≃2.6×106GeV(h0.1)(0.1g)5/4, (33)\n\nwhere we have omitted the weak dependence on in Eq.(22). Therefore, the reheating temperature GeV is typical in this model. As mentioned above, this reheating temperature is low enough to avoid the gravitino overproduction.\n\nNote that the operator Eq.(31) also gives the Majorana mass to the neutrino:\n\n mN=hn−1ϕn−10≃hn−1(v2g)1−1n. (34)\n\nThus the mass inequality , namely,\n\n h\n\nis satisfied with a typical parameter set . This is appropriate for the non-thermal production of neutrinos which leads to the non-thermal leptogenesis.\n\nBased on the above setup, we now estimate the baryon asymmetry due to the decay of inflaton 999 In our setup, we also have an additional contribution to the baryon asymmetry and the gravitino abundance. However, as we see in Appendix, this contribution is small in typical parameter region so that we neglect this contribution in the following analysis. in our model as a function of the couplings , and the reheating temperature . The baryon asymmetry is determined by four independent parameters , and . In terms of the observed density fluctuations, we can represent with the other parameters. We further use the reheating temperature as an input parameter instead of by means of Eq.(29), (30) and (32):\n\n h≃√16πmϕ(g∗π210M2G)14(gv2)n−2nTR. (36)\n\nThen the baryon asymmetry is given in terms of , , and by\n\n nBs≃8.2×10−11(TR106GeV)(2hn(n−1)g)(mν30.05eV)1sin2βδeff, (37)\n\nwhere is given by Eq.(36) with determined by Eq.(20) and (23).",
null,
"Figure 4: The contours of (nBs)/(nBs)0 for n=4, TR=4×106GeV, δeff=1, sinβ=1 are plotted in red (solid) lines. The blue (dashed-dotted) lines correspond to the contours of gravitino mass.\n\nIn Fig.4, we plot the contours of and for GeV, eV, , , where is the baryon asymmetry of the universe suggested by WMAP :\n\n (nBs)0≃8.7×10−11. (38)\n\nWe note that the baryon asymmetry and the gravitino mass for different reheating temperatures can also be seen from Fig.4: The baryon asymmetry is proportional to the square of the reheating temperature , since the coupling is approximately proportional to the reheating temperature. As for the gravitino mass, its value is almost independent of , since is almost independent of .\n\nThis figure shows that the sufficient baryon asymmetry is produced in a typical parameter region of the large-cutoff hypothesis: , , and GeV, which turns out to be low enough to avoid the gravitino overproduction. Thus it is revealed that the large-cutoff hypothesis is also consistent with the observed baryon asymmetry.\n\n## 5 Discussion\n\nWe have studied the large-cutoff hypothesis from the viewpoint of cosmology. We first confirmed that the spectral index in the new inflation model has an upper bound (see Ref.) and the large-cutoff hypothesis implies its boundary value, which remarkably agrees with the present experimental suggestion . Secondly, we found a concrete setup where the sufficient baryon asymmetry can be produced via non-thermal leptogenesis with the reheating temperature low enough to avoid the gravitino overproduction in a typical parameter region of large-cutoff hypothesis.\n\nWe again emphasize that the large cut-off hypothesis has several advantages from the viewpoint of particle-physics phenomenology. It solves the FCNC problem and produces the mass spectrum , which yields the correct electroweak symmetry breaking . Furthermore, the spectrum realized in the large-cutoff hypothesis accommodates the appropriate amount of the dark matter density .\n\nWe also mention CP violations in the visible-sector supersymmetric standard model as a sensitive low-energy probe of the supersymmetry breaking. Phases of the theory would be limited severely if the scalar masses were to be less than the TeV scale. In contrast, for TeV, such a constraint is far milder, with the very heavy scalar masses expected to be realized in the large-cutoff hypothesis from the viewpoint of electroweak symmetry breaking and dark matter, as mentioned above.\n\nThe heavy scalar masses are remarkably consistent with the cosmological constraint, as we saw in this paper. Thus we conclude that the large-cutoff theory with the supergravity new inflation and non-thermal leptogenesis is consistent with all the phenomenological requirements from cosmology and particle physics.\n\nFinally we comment on other types of inflations. The presence of the large cutoff seems advantageous for other inflationary models such as hybrid inflation and chaotic inflation. In particular, large-field inflations imply the presence of a larger scale (see Ref.) than the reduced Planck scale. In fact, we suspect that multiple inflations may be so generic as to include various types of inflations as their components, whose slow-roll conditions are realized by the large-cutoff mechanism .\n\n## Acknowledgments\n\nM. I. thanks the Japan Society for the Promotion of Science for financial support. This work is partially supported by Grand-in-Aid Scientific Research (s) 14102004.\n\n## Appendix: Another source of baryon and gravitino\n\nIn section 4, we put aside the baryon asymmetry and the gravitino produced through the coherent oscillation of right-handed sneutrino. Here we argue that this contribution can be small enough to be neglected.\n\nFirstly we explain the motion of right-handed sneutrino field which is the source of the baryon asymmetry and gravitino. During inflation, the right-handed sneutrino is fixed at the origin due to the Hubble mass. After the inflaton starts to roll down to the vacuum, the mass of the right-handed sneutrino changes along the motion trajectory of the inflaton. As the oscillation energy of the inflaton decreases, the origin of right-handed sneutrino becomes unstable, and right-handed sneutrino also starts oscillation. Then the decay of right-handed sneutrino becomes significant.101010The decay width of right-handed sneutrino is much larger than that of inflaton, due to a large Yukawa coupling of right-handed neutrino and standard-model particles compared with Eq.(31). The baryon asymmetry and gravitino are provided by the decay of this right-handed sneutrino .\n\nLet us estimate the yields of the baryon asymmetry and gravitino provided through the coherent oscillation of right-handed sneutrino. As mentioned above, the decay of right-handed sneutrino becomes significant when the motion of right-handed sneutrino is induced by that of inflaton. Then the yields of the baryon asymmetry and the grvitino number produced at the decay time of right-handed sneutrino are given by\n\n nNBs ≃ ερNmN452π2g∗T3N (39) nN3/2s ≃ Yϕ3/2TNTR. (40)\n\nHere, denotes the CP-asymmetry in right-handed sneutrino decay defined in Ref., is the temperature of radiation produced by right-handed sneutrino decay, is the yield of gravitino produced by inflaton decay, and is the energy of the right-handed sneutrino at the right-handed sneutrino decay.\n\nAfter the inflaton decay, these yields are diluted by the dilution factor estimated as\n\n Δ≃TNTRρϕρN, (41)\n\nwhere is the energy of the inflaton at the right-handed sneutrino decay. Thus and after the inflaton decay are given by\n\n nNBs ≃ (42) nN3/2s ≃ Yϕ3/2ρNρϕ. (43)\n\nThese values are smaller than the yields produced at inflaton decay for (see Eq.(37)), which we assume in the main text.\n\nIn fact, we checked that is realized in a typical parameter region by solving the equations of motion numerically for . We note a possibility that parametric resonance occurs in specific points, and the energy of right-handed sneutrino becomes comparable to that of inflaton in such a case."
]
| [
null,
"https://media.arxiv-vanity.com/render-output/4977621/x1.png",
null,
"https://media.arxiv-vanity.com/render-output/4977621/x2.png",
null,
"https://media.arxiv-vanity.com/render-output/4977621/x3.png",
null,
"https://media.arxiv-vanity.com/render-output/4977621/x4.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8700966,"math_prob":0.97231615,"size":19793,"snap":"2021-31-2021-39","text_gpt3_token_len":5122,"char_repetition_ratio":0.14053261,"word_repetition_ratio":0.052614897,"special_character_ratio":0.2588794,"punctuation_ratio":0.15501286,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9837901,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T06:03:13Z\",\"WARC-Record-ID\":\"<urn:uuid:c011399c-3abe-4a00-aee0-6668bb5f965d>\",\"Content-Length\":\"606076\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f04e7ab7-acfd-4bd3-a2be-d33e3bfd9afd>\",\"WARC-Concurrent-To\":\"<urn:uuid:492f2879-4861-48b1-a3ce-ee2c53c0a22a>\",\"WARC-IP-Address\":\"104.21.14.110\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/hep-ph/0602192/\",\"WARC-Payload-Digest\":\"sha1:LZUUL7BZN72OTY2DJRAALQYMZZTZA4OC\",\"WARC-Block-Digest\":\"sha1:T2AZYUEXNVU7ILSRC7FKRMLC6TOWQ4FJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057417.10_warc_CC-MAIN-20210923044248-20210923074248-00179.warc.gz\"}"} |
https://powerpointmaniac.com/autocad/frequent-question-what-is-plot-scale-in-autocad.html | [
"# Frequent question: What is plot scale in Autocad?\n\nContents\n\nFrom model space, you can establish the scale in the Plot dialog box. … This scale represents a ratio of plotted units to the world-size units you used to draw the model. In a layout, you work with two scales.\n\n## What do AutoCAD scales mean?\n\nAutoCAD 2D drawings are commonly drawn in model space at a 1:1 scale (full-size). In other words, a 12-foot wall is drawn at that size. The drawings are then plotted or printed at a plot “scale” that accurately resizes the model objects to fit on paper at a given scale such as 1/8″ = 1′.\n\n## How do you plot to scale 1/100 in AutoCAD?\n\nFirst draw a rectangle the size of your paper minus the margins required. For scale of 1:100 use the SCALE command to scale the rectangle 100 times. Put this rectangle around what you want to plot then plot using Window and select the corners of the rectangle. Use scale to fit for the scaling and print it out.\n\n## What is the plot scale?\n\nThis scale represents a ratio of plotted units to the world-size units you used to draw the model. In a layout, you work with two scales. The first affects the overall layout of the drawing, which usually is scaled 1:1, based on the paper size.\n\n## How do I find the scale of a plot in AutoCAD?\n\nScale a Drawing to Fit the Page\n\n1. Click Output tab Plot panel Plot. Find.\n2. In the Plot dialog box, under Plot Scale, select the Fit to Paper option. The resulting scale is automatically calculated. The ratio of plotted units to drawing units in the custom scale boxes is displayed.\n3. Click OK to plot the drawing.\n\n## How do you set limits in AutoCAD?\n\nTo Set the Display Limits of the Grid\n\n1. At the Command prompt, enter limits.\n2. Enter the coordinates for a point at the lower-left corner of the grid limits.\n3. Enter the coordinates for a point at the upper-right corner of the grid limits.\n4. At the Command prompt, enter griddisplay, and enter a value of 0.\n\n## How do I scale a layout in AutoCAD?\n\nUsing the Properties palette . . .\n\n1. Select the layout viewport that you want to modify.\n2. Right-click, and then choose Properties.\n3. If necessary, click Display Locked and choose No.\n4. In the Properties palette, select Standard Scale, and then select a new scale from the list. The scale you choose is applied to the viewport.\n\n## What is a plot plan drawing?\n\nA plot plan is an architectural drawing that shows all the major features and structures on a. piece of property. The information on a plot plan will generally include the following: • Location of all buildings. • Porches.\n\n## What is the difference between plot and print commands?\n\nWhat is the Difference Between Printing and Plotting? The terms printing and plotting can be used interchangeably for CAD output. Historically, printers would generate text only, and plotters would generate vector graphics. … The process of generating physical models in plastic and metal is called 3D printing.\n\nIT IS INTERESTING: How do you split a face in Fusion 360?\n\n## What is the scale factor for 1 20?\n\n1″ = 20′ Multiply the feet by 12. 20 x 12 = Scale Factor 240.\n\n## How do you calculate scale?\n\nTo scale an object to a smaller size, you simply divide each dimension by the required scale factor. For example, if you would like to apply a scale factor of 1:6 and the length of the item is 60 cm, you simply divide 60 / 6 = 10 cm to get the new dimension.",
null,
""
]
| [
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20101%2099'%3E%3C/svg%3E",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.85994023,"math_prob":0.87896246,"size":3222,"snap":"2022-27-2022-33","text_gpt3_token_len":738,"char_repetition_ratio":0.12740833,"word_repetition_ratio":0.1092437,"special_character_ratio":0.23587833,"punctuation_ratio":0.11694153,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98324513,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-08T01:54:50Z\",\"WARC-Record-ID\":\"<urn:uuid:8aa79e41-3f59-47ec-9117-b8a127a65a40>\",\"Content-Length\":\"131448\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:15d2a8ef-1fb5-4ebf-9913-d995554e7b5d>\",\"WARC-Concurrent-To\":\"<urn:uuid:504589ff-fa66-415a-a3dd-dffefa4ff8f8>\",\"WARC-IP-Address\":\"207.244.241.49\",\"WARC-Target-URI\":\"https://powerpointmaniac.com/autocad/frequent-question-what-is-plot-scale-in-autocad.html\",\"WARC-Payload-Digest\":\"sha1:CF5UZOJMZG35PRY63LWILNRJR5FBSCVQ\",\"WARC-Block-Digest\":\"sha1:VQJNVIQPFRL44XDAIIIVREQNRYFH36PA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570741.21_warc_CC-MAIN-20220808001418-20220808031418-00421.warc.gz\"}"} |
https://scicomp.stackexchange.com/questions/3534/fastest-algorithm-to-compute-the-condition-number-of-a-large-matrix-in-matlab-oc | [
"# Fastest algorithm to compute the condition number of a large matrix in Matlab/Octave\n\nFrom the definition of condition number it seems that a matrix inversion is needed to compute it, I'm wondering if for a generic square matrix (or better if symmetric positive definite) is possible to exploit some matrix decomposition to compute the condition number in a faster way.\n\nComputing the condition number (even approximating it within a factor of 2) seems to have the same complexity as computing a factorization, though there are no theorems in this direction.\n\nFrom a sparse Cholesky factor $R$ of a symmetric positive definite matrix, or from a sparse $QR$ factorization (with implicit $Q$) of a general square matrix, one can obtain the condition number in the Frobenius norm by computing the sparse inverse subset of $(R^TR)^{-1}$, which is much faster than computing the full inverse. (Related to this is my paper: Hybrid norms and bounds for overdetermined linear systems, Linear Algebra Appl. 216 (1995), 257-266. http://www.mat.univie.ac.at/~neum/scan/74.pdf)\n\nEdit: If $A=QR$ then with respect to any unitarily invariant norn, $$cond(A)=cond(R)=\\sqrt{cond(R^TR)}.$$ For the computation of sparse QR factorizations see, e.g.,\nhttp://dl.acm.org/citation.cfm?id=174408.\nFor the computation of the sparse inverse, see, e.g., my paper: Restricted maximum likelihood estimation of covariances in sparse linear models, Genetics Selection Evolution 30 (1998), 1-24.\nhttps://www.mat.univie.ac.at/~neum/ms/reml.pdf The cost is about 3 times the cost for the factorization.\n\n• So you are suggesting the following: Given a matrix $\\mathbf{A}$ compute its QR of the form $\\mathbf{A}=\\mathbf{Q}\\mathbf{R}$ where $\\mathbf{R}$ is an upper triangular matrix and $\\mathbf{Q}$ is an orthogonal matrix and then the condition number is given by $\\textrm{cond}(\\mathbf{A})=||A|| ||A^{-1}|| (\\mathbf{R}^T\\mathbf{R})^{-1}$ The point here is how to find a fast method to compute a QR factorization. Am I right? – linello Oct 22 '12 at 19:21\n• @linello: not quite; see my edit. – Arnold Neumaier Oct 22 '12 at 19:58\n• Thanks! I'm going to check it, btw what is the cost of this step? – linello Oct 22 '12 at 20:02\n• @linello: For a full matrix, $O(n^3)$; for a sparse matrix, it depends a lot on the sparsity structure. – Arnold Neumaier Oct 22 '12 at 20:23\n\nIt's certainly easy to use the eigenvalue/eigenvector decomposition of a symmetric matrix or the SVD of a general matrix to compute the condition number, but these aren't particularly fast ways to proceed.\n\nThere are iterative algorithms that can compute an estimate of the condition number that is useful for most purposes without going to all of the work of computing $A^{-1}$. See for example the condest function in MATLAB.\n\n• But the estimate is sometimes significantly too small. Computing the condition number (even approximating it within a factor of 2) seems to have the same complexity as computing a factorization, though there are no theorems in this direction. – Arnold Neumaier Oct 22 '12 at 14:27\n\nFor sparse Hermitian matrices $H$, you can use Lanczos algorithm to compute its eigenvalues. If $H$ is not Hermitian, you can compute its singular values by computing the eigenvalues of $H^TH$.\n\nSince the largest and smallest eigenvalues/singularvalues can be found very fast (long before the tridiagonalization is complete), the Lanczos method is particular useful to compute the condition number.\n\n• I've always wondered where to find a readable matlab code for lanczos iteration which clarifies how to get the smallest or largest eigenvalue. Can you suggest me one? – linello Oct 23 '12 at 8:16\n• I don't have the MATLAB codes for Lanczos algorithm. – chaohuang Oct 23 '12 at 8:50"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.82406044,"math_prob":0.99120426,"size":3822,"snap":"2020-45-2020-50","text_gpt3_token_len":986,"char_repetition_ratio":0.13803038,"word_repetition_ratio":0.12048193,"special_character_ratio":0.25719517,"punctuation_ratio":0.11707989,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997108,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-31T17:06:00Z\",\"WARC-Record-ID\":\"<urn:uuid:bcc2fdaa-afa2-47d6-8dfb-1caffe13755c>\",\"Content-Length\":\"170559\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:62abd890-4181-4423-a7fa-6f50816dd1e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:f82a7614-166a-4adb-b6df-0e563f5b5e2b>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://scicomp.stackexchange.com/questions/3534/fastest-algorithm-to-compute-the-condition-number-of-a-large-matrix-in-matlab-oc\",\"WARC-Payload-Digest\":\"sha1:QF35JP2CQTPBO4TP4HWTLUZWVLOBAHN6\",\"WARC-Block-Digest\":\"sha1:MNHXI5NA4BF6623BQQYRFUXTAUEHC6B5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107919459.92_warc_CC-MAIN-20201031151830-20201031181830-00278.warc.gz\"}"} |
https://primenumbers.info/405.htm | [
"# Prime Numbers\n\n## Is number 405 a prime number?\n\nNumber 405 is not a prime number. It is a composite number.\n\nA prime number is a natural number greater than 1 that is not a product of two smaller natural numbers. We do not consider 405 as a prime number, because it can be written as a product of two smaller natural numbers (check the factors of number 405 below).\n\n### Other properties of number 405\n\nNumber of factors: 10.\n\nList of factors/divisors: 1, 3, 5, 9, 15, 27, 45, 81, 135, 405.\n\nParity: 405 is an odd number.\n\nPerfect square: no (a square number or perfect square is an integer that is the square of an integer).\n\nPerfect number: no, because the sum of its proper divisors is 321 (perfect number is a positive integer that is equal to the sum of its proper divisors).\n\n Number:Prime number: 398no 399no 400no 401yes 402no 403no 404no 405no 406no 407no 408no 409yes 410no 411no 412no"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.88624245,"math_prob":0.9967979,"size":835,"snap":"2021-31-2021-39","text_gpt3_token_len":247,"char_repetition_ratio":0.17448857,"word_repetition_ratio":0.0375,"special_character_ratio":0.3520958,"punctuation_ratio":0.15254237,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96841836,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-17T17:22:37Z\",\"WARC-Record-ID\":\"<urn:uuid:b025a8dc-3835-4cdd-8a8b-46044a80e06f>\",\"Content-Length\":\"10121\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e074255e-5ce3-4e05-a257-5a74def748c7>\",\"WARC-Concurrent-To\":\"<urn:uuid:45dd3d02-c541-45fb-a114-c1871b47d978>\",\"WARC-IP-Address\":\"149.28.234.134\",\"WARC-Target-URI\":\"https://primenumbers.info/405.htm\",\"WARC-Payload-Digest\":\"sha1:ULZC75EAVJGQ2CIBW6TMDBS3KXUPQ2YM\",\"WARC-Block-Digest\":\"sha1:565QDQJQVZAYXXESS33LD5HB77GUQS73\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780055684.76_warc_CC-MAIN-20210917151054-20210917181054-00494.warc.gz\"}"} |
https://exlskills.com/learn-en/courses/java-basics-basics_java/variables-and-operators-MRRAmEXxmwhc/operators-cIPHHNiCKVpy/expressions-wdloEcxoUXHd | [
"# Expressions\n\nAn expression is a mixture of literals, operators, variable names, and parentheses used to calculate a value. Please keep in mind that in Java, the expression on the right side of the assignment statement is evaluated first. The expressions will look similar to regular mathematical expressions from math class. Please take a look at the expressions below.\n\nExpressionsExample.java\n``````package exlcode;\n\npublic class ExpressionsExample {\n\npublic static int exampleVariableOne = ((7-4) * (-3/-1));\n\npublic static void main(String[] args) {\nSystem.out.println(exampleVariableOne);\n}\n}``````\n\nExpressions can be written without using any spaces at all, but the use of one or more spaces to visually separate the parts without changing the meaning is useful for the programmer and reader.\n\nKeep in mind the discussion we had previously on division of integers. If the operands are integers, these operators will perform integer arithmetic. If one or both operands are floating points, the operators will perform floating point arithmetic. Any number with a decimal would result in a `double` or `float`. For integers, 5/2 results in 2. For floating point, 5.0/2.0 results in 2.5 and 5/2.0 results in 2.5.\n\n#### Application Question\n\nConsider the following code segment:\n\n``````double varOne;\nint varTwo = 56;\nint varThree = 25;\nvarOne = varTwo / varThree;\nSystem.out.println(varOne);\n``````\n\nWhat is printed as a result of executing this code segment?"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8329204,"math_prob":0.96476763,"size":1777,"snap":"2020-10-2020-16","text_gpt3_token_len":372,"char_repetition_ratio":0.107163,"word_repetition_ratio":0.0,"special_character_ratio":0.21046709,"punctuation_ratio":0.12615384,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97182626,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-30T22:21:18Z\",\"WARC-Record-ID\":\"<urn:uuid:ee85424c-747d-48a2-87b0-4d2b17d40f87>\",\"Content-Length\":\"321718\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8683d92b-0bb6-4742-9503-8847880b396f>\",\"WARC-Concurrent-To\":\"<urn:uuid:ebb7065d-ce98-443c-9992-e910b9ab6b2a>\",\"WARC-IP-Address\":\"52.26.191.149\",\"WARC-Target-URI\":\"https://exlskills.com/learn-en/courses/java-basics-basics_java/variables-and-operators-MRRAmEXxmwhc/operators-cIPHHNiCKVpy/expressions-wdloEcxoUXHd\",\"WARC-Payload-Digest\":\"sha1:SMS5J7BITQQ34OD3UQASDYXEBVQTSJH2\",\"WARC-Block-Digest\":\"sha1:XJOGDVSUOA5QKQO2S64RAFTVPU4ZI226\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370497309.31_warc_CC-MAIN-20200330212722-20200331002722-00052.warc.gz\"}"} |
https://rfphotonicslab.org/category/uncategorized/page/2/ | [
"# The Cavity Magnetron\n\nThe operation of a cavity magnetron is comparable to a vacuum tube: a nonlinear device that was mostly replaced by the transistor. The vacuum tube operated using thermionic emission, when a material with a high melting point is heated and expels electrons. When the work function of a material is overcome through thermal energy transfer to electrons, these particles can escape the material.\n\nMagnetrons are comprised of two main elements: the cathode and anode. The cathode is at the center and contains the filament which is heated to create the thermionic emission effect. The outside part of the anode acts as a one-turn inductor to provide a magnetic field to bend the movement of the electrons in a circular manner. If not for the magnetic field, the electrons would simple be expelled outward. The magnetic field sweeps the electrons around, exciting the resonant cavities of the anode block.\n\nThe resonant cavities behave much like a passive LC filter circuit which resonate a certain frequency. In fact, the tipped end of each resonant cavity looks much like a capacitor storing charge between two plates, and the back wall acts an inductor. It is well known that a parallel resonant circuit has a high voltage output at one particular frequency (the resonant frequency) depending on the reactance of the capacitor and inductor. This can be contrasted with a series resonant circuit, which has a current peak at the resonant frequency where the two devices act as a low impedance short circuit. The resonant cavities in question are parallel resonant.\n\nJust like the soundhole of a guitar, the resonant cavity of the magnetron’s resonance frequency is determined by the size of the cavity. Therefore, the magnetron should be designed to have a resonant frequency that makes sense for the application. For a microwaves oven, the frequency should be around 2.4GHz for optimum cooking. For an X-band RADAR, this should be closer to 10GHz or around this level. An interesting aspect of the magnetron is when a cavity is excited, another sequential cavity is also excited out of phase by 180 degrees.\n\nThe magnetron generally produces wavelength around several centimeters (roughly 10 cm in a microwave oven). It is known as a “crossed field” device, because the electrons are under the influence of both electric and magnetic fields, which are in orthogonal directions. An antenna is attached to the dipole for the radiation to be expelled. In a microwaves oven, the microwaves are guided using a metallic waveguide into the cooking chamber.",
null,
"# Quality Factor\n\nQuality factor is an extremely important fundamental concept in electrical and mechanical engineering. An oscillator (active) or resonator (passive) can be described by its Q-factor, which is inversely proportional to bandwidth. For these devices, the Q factor describes the damping of the system. In some instances, it is better to have either a lower or higher quality factor. For instance, with a guitar you would want to have a lower quality factor. The reason is because a high Q guitar would not amplify frequencies very evenly. To lower the quality factor, complex or strange shapes are introduced for the instrument body. However, the soundhole of a guitar (a Helmholtz resonator) has a very high quality factors to increase its frequency selectivity.\n\nA very important area of discussion is the Quality Factor of a filter. Higher Q filters have higher peaks in the frequency domain and are more selective. The Quality factor is really only valid for a second order filter, which is based off of a second order equation and contains both an inductor and a capacitor. At a certain frequency, the reactances of both the capacitor and inductor cancel, leading to a strong output of current (lower total impedance). For a tuned circuit, the Q must be very high and is considered a “Figure of Merit”.\n\nIn terms of equations, the quality factor can be thought of in many different ways. It can be thought of as the ratio of “reactive” or wasted power to average power. It can also be thought of as the ratio of center frequency to bandwidth (NOTE: This is the FWHM bandwidth in which only frequencies that are equal to or greater than half power are part of the band). Another common equation is 2π multiplied by the ratio of energy stored in a system to energy lost in one cycle. The energy dissipated is due to damping, which again shows that Q factor is inversely related to damping, in addition to bandwidth.\n\nQ can also be expressed as a function of frequency:",
null,
"The full relationship between Q factor and damping can be expressed as the following:\n\nWhen Q = 1/2, the system is critically damped (such as with a door damper). The system does not oscillate. This is also when the damping ratio is equal to one. The main difference between critical damping and overdamping is that in critical damping, the system returns to equilibrium in the minimum amount of time.\n\nWhen Q > 1/2 the system is underdamped and oscillatory. With a small Quality factor underdamped system, the system many only oscillate for a few cycles before dying out. Higher Q factors will oscillate longer.\n\nWhen Q < 1/2 the system is overdamped. The system does not oscillate but takes longer to reach equilibrium than critical damping.\n\n# Bragg Gratings\n\nBragg gratings are commonly used in optical fibers. Generally, an optical fiber has a relatively constant refractive index throughout. With a FBG (Fiber Bragg Grading) the refractive index is varied periodically within the core of the fiber. This can allow certain wavelengths to be reflected while all others are transmitted.",
null,
"The typical spectral response is shown above. It is clear that only a specific wavelength is reflected, while all others are transmitted. Bragg Gratings are typically only used in short lengths of the optical fiber to create a sort of optical filter. The only wavelength to be reflected is the one that is in phase with the Bragg grating distribution.\n\nA typical usage of a Bragg Grating is for optical communications as a “notch filter”, which is essentially a band stop filter with a very high Quality factor, giving it a very narrow range of attenuated frequencies. These fibers are generally single mode, which features a very narrow core that can only support one mode as opposed to a wider multimode fiber, which can suffer from greater modal distortion.\n\nThe “Bragg Wavelength” can be calculated by the equation:\n\nλ = 2n∧\n\nwhere n is the refractive index and ∧ is the period of the bragg grating. This wavelength can also be shifted by stretching the fiber or exposing it to varying temperature.\n\nThese fibers are typically made by exposing the core to a periodic pattern of intense laser light which permanently increases the refractive index periodically. This phenomenon is known as “self focusing” which is when refractive index can be permanently changed by extreme electromagnetic radiation.\n\n# Pseudomorphic HEMT\n\nThe Pseudomorphic HEMT makes up the majority of High Electron Mobility Transistors, so it is important to discuss this typology. The pHEMT differentiates itself in many ways including its increased mobility and distinct Quantum well shape. The basic idea is to create a lattice mismatch in the heterostructure.\n\nA standard HEMT is a field effect transistor formed through a heterostructure rather than PN junctions. This means that the HEMT is made up of compound semiconductors instead of traditional silicon FETs (MOSFET). The heterojunction is formed when two different materials with different band gaps between valence and conduction bands are combined to form a heterojunction. GaAs (with a band gap of 1.42eV) and AlGaAs (with a band gap of 1.42 to 2.16eV) is a common combination. One advantage that this typology has is that the lattice constant is almost independent of the material composition (fractions of each element represented in the material). An important distinction between the MESFET and the HEMT is that for the HEMT, a triangular potential well is formed which reduces Coloumb Scattering effects. Also, the MESFET modulates the thickness of the inversion layer while keeping the density of charge carriers constant. With the HEMT, the opposite is true. Ideally, the two compound semiconductors grown together have the same or almost similar lattice constants to mitigate the effects of discontinuities. The lattice constant refers to the spacing between the atoms of the material.\n\nHowever, the pseudomorphic HEMT purposely violates this rule by using an extremely thin layer of one material which stretches over the other. For example, InGaAs can be combined with AlGaAs to form a pseudomorphic HEMT. A huge advantage of the pseudomorphic typology is that there is much greater flexibility when choosing materials. This provides double the maximum density of the 2D electron gas (2DEG). As previously mentioned, the field mobility also increases. The image below illustrates the band diagram of this pHEMT. As shown, the discontinuity between the bandgaps of InGaAs and AlGaAs is greater than between AlGaAs and GaAs. This is what leads to the higher carrier density as well as increased output conductance. This provides the device with higher gain and high current for more power when compared to traditional HEMT.",
null,
"The 2DEG is confined in the InGaAs channel, shown below. Pulse doping is generally utilized in place of uniform doping to reduce the effects of parasitic current. To increase the discontinuity Ec, higher Indium concentrations can be used which requires that the layer be thinner. The Indium content tends to be around 15-25% to increase the density of the 2DEG.",
null,
"# Object Oriented Programming and C#: Program to Determine Interrupt Levels\n\nThe following is a program designed to detect environmental interrupts based on data inputted by the user. The idea is to generate a certain threshold based on the standard deviation and twenty second average of the data set.\n\nA bit of background first: The standard deviation, much like the variance of a data set, describes the “spread” of the data. The standard deviation is the square root of the variance, to be specific. This leaves the standard deviation with the same units as the mean, whereas the variance has squared units. In simple terms, the standard deviation describes how close the values are to the mean. A low standard deviation indicates a narrow spread with values closer to the mean.",
null,
"Often, physical data which involves the averaging of many samples of a random experiment can be approximated as a Gaussian or Normal distribution curve, which is symmetrical about the mean. As a real world example, this approximation can be made for the height of adult men in the United States. The mean of this is about 5’10 with a standard deviation of three inches. This means that for a normal distribution, roughly 68% of adult men are within three inches of the mean, as shown in the following figure.",
null,
"In the first part of the program, the variables are initialized. The value “A” represents the multiple of standard deviations. Previous calculations deemed that the minimum threshold level would be roughly 4 times the standard deviation added to the twenty second average. Two arrays are defined: an array to calculate the two second average which was set to a length of 200 and also an array of length 10 for the twenty second average.",
null,
"The next part of the program is the infinite “while(true)” loop. The current time is printed to the console for the user to be aware of. Then, the user is prompted to input a minimum and maximum value for a reasonable range of audible values, and these are parsed into integers. Next, the Random class is instantiated and a for loop is incremented 200 times to store a random value within the “inputdata_two[]” array for each iteration. The random value is constrained to the max and min values provided by the user. The “Average()” method built into the Random class gives an easy means to calculate the two second average.",
null,
"Next, a foreach statement is used to iterate through every value (10 values) of the twenty second average array and print them to the console. An interrupt is triggered if two conditions are met: the time has incremented to a full 20 seconds and the two second average is greater than the calculated minimum threshold. “Alltime” is set to -2 to reset the value for the next set of data. Once the time has incremented to 20 seconds, a twenty second average is calculated and from this, the standard deviation is calculated and printed to the console.",
null,
"The rest of code is pictured below. The time is incremented by two seconds until the time is at 18 seconds.",
null,
"The code is shown in action:",
null,
"If a high max and min is inputted, an interrupt will be triggered and the clock will be reset:",
null,
"# HFSS – Simulation of a Square Pillar\n\nThe following is an EM simulation of the backscatter of a golden square object. This is by no means a professional achievement, but rather provides a basic introduction to the HFSS program.",
null,
"The model is generated using the “Draw -> Box” command. The model is placed a distance away from the origin, where the excitation is placed, shown below. The excitation is of spherical vector form in order to generate a monostatic plot.",
null,
"The basic structure is a square model (10mm in all three coordinates) with an airbox surrounding it. The airbox is coated with PML radiation boundaries to simulate a perfectly matched layer. This is to emulate a reflection free region. This is necessary to simulate radiating structures in an unbounded, infinite domain. The PML absorbs all electromagnetic waves that interract with the boundary. The following image is the plot of the Monostatic RCS vs the Incident wave elevation angle.",
null,
"The subsequent figure was generated by using a “bistatic” configuration and is plotted against the elevation angle.",
null,
"# Miller Effect\n\nThe Miller Effect is a generally negative consequence of broadband circuitry due to the fact that bandwidth is reduced when capacitance increases. The Miller effect is common to inverting amplifiers with negative gain. Miller capacitance can also limit the gain of a transistor due to transistors’ parasitic capacitance. A common way to mitigate the Miller Effect, which causes an increase in equivalent input capacitance, is to use cascode configuration. The cascode configuration features a two stage amplifier circuit consisting of a common emitter circuit feeding into a common base. Configuring transistors in a particular way to mitigate the Miller Effect can lead to much wider bandwidth. For FET devices, capacitance exists between the electrodes (conductors) which in turn leads to Miller Effect. The Miller capacitance is typically calculated at the input, but for high output impedance applications it is important to note the output capacitance as well.",
null,
"Interesting note: the Miller effect can be used to create a larger capacitor from a smaller one. So in this way, it can be used for something productive. This can be important for designing integrated circuits, where having large bulky capacitors is not ideal as “real estate” must be conserved.\n\n# VHF and UHF\n\nThe RF and microwave spectrum can be subdivided into many bands of varying purpose, shown below.",
null,
"On the lower frequency end, VLF (Very Low Frequency) tends to be used in submarine communication while LF (Low Frequency) is generally used for navigation. The MF (Medium Frequency) band is noted for AM broadcast (see posts on Amplitude modulation). The HF (shortwave) band is famous for use by HAM radio enthusiasts. The reason for the widespread usage is that HF does not require line of sight to propagate, but instead can reflect from the ionosphere and the surface of the earth, allowing the waves to travel great distances. VHF tends to be used for FM radio and TV stations. UHF covers the cellphone band as well as most TV stations. Satellite communication is covered in the SHF (Super High Frequency) band.\n\nRegarding UHF and VHF propagation, line of sight must be achieved in order for the signals to propagate uninhibited. With increasing frequency comes increasing attenuation. This is especially apparent when dealing with 5G nodes, which are easily attenuated by buildings, trees and weather conditions. 5G used bands within the UHF, SHF and EHF bands.\n\nSpeaking of line of sight, the curvature of the earth must be taken into account.",
null,
"The receiving and transmitting antennas must be visible to each other. This is the most common form of RF propagation. Twenty five miles (sometimes 30 or 40) tends to be the max range of line of sight propagation (radio horizon). The higher the frequency of the wave, the less bending or diffraction occurs which means the wave will not propagate as far. Propagation distance is a strong function of antenna height. Increasing the height of an antenna by 10 feet is like doubling the output power of the antenna. Impedance matching should be employed at the antennas and feedlines as losses increase dramatically with frequency.\n\nDespite small wavelengths, UHF signals can still propagate through buildings and foliage but NOT the surface of the earth. One huge advantage of using UHF propagation is reuse of frequencies. Because the waves only travel a short distance when compared to HF waves, the same frequency channels can be reused by repeaters to re-propagate the signal. VHF signals (which have lower frequency) can sometimes travel farther than what the radio horizon allows due to some (limited) reflection by the ionosphere.\n\nBoth VHF and UHF signals can travel long distances through the use of “tropospheric ducting”. This can only occur when the index of refraction of a part of the troposphere due to increased temperature is introduced. This causes these signals to be bent which allows them to propagate further than usual.\n\n# HEMT – High Electron Mobility Transistor\n\nOne of the main limitations of the MESFET is that although this device extends well into the mmWave range (30 to 300 GHz or the upper part of the microwave spectrum), it suffers from low field mobility due to the fact that free charge carriers and ionized dopants share the same space.\n\nTo demonstrate the need for HEMT transistors, let us first consider the mobility of GaAs compound semiconductor. As shown in the picture, with decreasing temperature, Coloumb scattering becomes prevalent as opposed to phonon lattice vibrations. For an n-channel MESFET, the main electrostatic Coloumb force is between positively ionized donor elements (Phosphorous) and electrons. As shown, the mobility is heavily dependent on doping concentration. Coloumb Scattering effectively limits mobility. In addition, decreasing the length of the gate in a MESFET will increase Coloumb scattering due to the need for a higher doping concentration in the channel. The means that for an effective device, the separation of free and fixed charge is needed.",
null,
"A heterojunction consisting of n+ AlGaAs and p- GaAs material is used to combat this effect. A spacer layer of undoped AlGaAs is placed in between the materials. In a heterojunction, materials with different bandgaps are placed together (as opposed to a homojunction where they are the same).",
null,
"This formation leads to the confinement of electrons from the n- layer in quantum wells which reduces Coloumb scattering. An important distinction between the HEMT and the MESFET is that the MESFET (like all FETs) modulates the channel thickness whereas with an HEMT, the density of charge carriers in the channel is changed but not the thickness. So in other words, applying a voltage to the gate of an HEMT will change the density of free electrons will increase (positive voltage) or decrease (negative voltage). The channel is composed of a 2D electron gas (2DEG). The electrons in the gas move freely without any obsctruction, leading to high electron mobility.\n\nHEMTs are generally packed into MMIC chips and can be used for RADAR applications, amplifiers (small signal and PAs), oscillators and mixers. They offer low noise performance for high frequency applications.\n\nThe pHEMT (pseudomorphic) is an enhancement to the HEMT which feature structures with different lattice constants (HEMTs feature roughly the same lattice constant for both materials). This leads to materials with wider bandgap differences and generally better performance.\n\n# Object Oriented Programming and C#: Simple Program to add three numbers\n\nThe following is a simple program that takes a user input of three numbers and adds them but does not crash when an exception is thrown (eg. if a user inputs a non integer value). The “int?” variable is used to include the “null” value used to signify that a bad input was received. The user is notified instantly when an incorrect input is received by the program with a “Bad input” command prompt message.",
null,
"The code above shows that the GetNumber() method is called (shown below) three times, and as long as these are integers, they are summed and printed to the console after being converted to a string.",
null,
"The code shows that as long as the sum of the three integers is not equal to null (anything plus null is equal to null, so if at least one input is a non-integer this will be triggered) the Console will print the sum of the three numbers. The GetNumber() method uses the “TryParse” method to convert each string input to an integer. This will handle exceptions that are triggered by passing a non-integer to the command line. It also gives a convenient return of “null” which is used above.\n\nThe following shows the effect of both a summation and an incorrect input summation failure.",
null,
"",
null,
"# Power Factor and the Power Triangle\n\nPower factor is very important concept for commercial and industrial applications which require higher current draw to operate than domestic buildings. For a passive load (only containing resistance, inductance or capacitance and no active components), the power factor range from 0 to 1. Power factor is only negative with active loads. Before delving into power factor, it is important to discuss different types of power. The type of power most are familiar with is in Watts. This is called active or useful power, as it represents actual energy or time dissipated or “used” by the load in question. Another type of power is reactive power, which is caused by inductance or capacitance, which leads to a phase shift between voltage and current. To demonstrate how a lagging power factor causes “wasted” power, it would be helpful to look at some waveforms. For a purely resistive load, the voltage and current are in phase, so no power is wasted (P=VI is never zero at any point).",
null,
"The above image captures the concept of leading and lagging power factor (leading and lagging is always in reference to the current waveform). For a purely inductive load, the current will lag because the inductor will create a “back EMF” or inertial voltage to oppose changes in current. This EMF leads to a current within the inductor, but only comes from the initial voltage. It can also be seen that this EMF is proportional to the rate of change of the current, so when the current is zero the voltage is maximum. For a capacitive load, the power factor is leading. A capacitor must charge up with current before establishing a voltage across the plates. This explains the PF “leading” or “lagging”. Most of the time, when power factor is decreased it is because the PF is lagging due to induction motors. To account for this, capacitors are used as part of power factor correction.\n\nThe third type of power is apparent power, which is the complex combination of real and reactive power.",
null,
"The power factor is the cosine of the angle made in this triangle. Therefore, as the PF angle is increased the power factor decreases. The power factor is maximum when the reactive power is zero. Ideally, the PF would be between 0.95 and 1, but for many industrial buildings this can fall to even 0.7. This leads to higher electric bills for this buildings because having a lower power factor leads to increases current in the power lines leading to the building which causes higher losses in the lines. It also leads to voltage drops and wastage of energy. To conserve energy, power factor correction must be employed. Often capacitors are used in conjunction with contactors that are controlled by regulators that measure power factor. When necessary, the contactors will be switched on and allow the capacitors to improve the power factor.\n\nFor linear loads, power factor is called as displacement power factor, as it only accounts for the phase difference between the voltage and current. For nonlinear loads, harmonics are added to the output. This is because nonlinear loads cause distortion, which changes the shape of the output sinusoids. Nonlinear loads and power factor will be explored in a subsequent post.\n\n# RFID – Radio Frequency Identification\n\nRFID is an important concept in the modern era. The basic principle of operation is simple: radio waves are sent out from an RF reader to an RFID tag in order to track or identify the object, whether it is a supermarket item, a car, or an Alzheimer patient.\n\nRFID tags are subdivided into three main categories: Active, passive and semipassive. Active RFID tags employ a battery to power them whereas passive tags utilize the incoming radio wave as a power source. The semipassive tag also employs a battery source, but relies on the RFID reader signal as a return signal. For this reason, the active and semi passive tags have a greater range than the passive type. The passive types are more compact and also cheaper and for this reason are more common than the other two types. The RFID picks up the incoming radio waves with an antenna which then directs the electrical signal to a transponder. Transponders receive RF/Microwaves and transmit a signal of a different frequency. After the transponder is the rectifier circuit, which uses a DC current to charge a capacitor which (for the passive tag) is used to power the device.\n\nThe RFID reader consists of a microcontroller, an RF signal generator and a receiver. Both the transmitter and receiver have an antennas which convert radio waves to electrical currents and vice versa.\n\nThe following table shows frequencies and ranges for the various bands used in RFID",
null,
"As expected, lower frequencies travel further distances. The lower frequencies tend to be used for the passive type of RFID tags.\n\nFor LF and HF tags, the working principle is inductive coupling whereas with the UHF and Microwave, the principle is electromagnetic coupling. The following image shows inductive coupling.",
null,
"A transformer is formed between the two coils of the reader and tag. The transformer links the two circuits together through electromagnetic induction. This is also known as near field coupling.\n\nFar field coupling/radiative coupling uses backscatter by reradiating from the tag to the reader. This depends on the load matching, so changing the load impedance will change the intensity of the return wave. The load condition can be changed according to the data in order for the data to be sent back to the reader. This is known as backscatter modulation.\n\n# Using GIT – Introduction\n\nGit is essentially a version control system for tracking changes in computer files. This can be used in conjunction with Visual Studio to program in C#, for example. Git can be accessed through commands through the command window in windows. Git is generally to coordinate changes to code between multiple developers and is also used to work in a local repository which is then “pushed” to a remote depository such as Github.com\n\nGit tracks changes to files by taking snapshots. This is done by the user by typing “git commit ….” in the command prompt. The files should be added to the staging area first by using the command “git add <filename>”. “Git push” and “git pull” are used to interact with the remote repository. “Git clone” will copy and download a repository to your local machine. Git saves every version that gets committed, so a previous version can always be accessed if necessary. The following image illustrates the concept of committing.",
null,
"You can essentially “branch” your commits which can later be merged together by using “git commit” command with multiple parents. The master branch is the main/linear list of saves. This can be done in the remote repository or the local. A “pull request” essential means taking changes that were made in a certain branch and pulling them into another branch. This means multiple people can be editing multiple branches which can then be merged together.\n\nGit is extremely useful for collaboration (as with websites such as google docs) where multiple authors can work on something at the same time. It also is excellent for keeping track of the history of projects.\n\n# Mobility and Saturation Velocity in Semiconductors\n\nIn solid state physics, mobility describes how quickly a charge carrier can move within a semiconductor device when in the presence of a force (electric field). When an electric field is applied, the particles begin to move at a certain drift velocity, given by the mobility of the carrier (electron or hole) and electric field. The equation can be written as:",
null,
"This is also related to Ohm’s law in point form, which is the conductivity multiplied by the Electric field. This shows that the conductivity of a material is related to the number of charge carriers as well as their mobility within the material. Mobility is heavily dependent on doping, which introduces defects to the material. This means that intrinsic semiconductor material (Si or Ge) has higher mobility, but this is a paradox due to the fact that intrinsic semiconductor has no charge carriers. In addition, mobility is inversely proportional to mass, so a heavier particles will move at a slower rate.\n\nPhonons also contribute to a loss of mobility due to an effect known as “Lattice Scattering”. When the temperature of semiconductor material is raised above absolute zero, the atoms vibrate and create phonons. The higher the temperature, the more phonon particles which means greater collisions and lower mobility.\n\nSaturation velocity refers to the maximum velocity a charge carrier can travel within a semiconductor in the presence of a strong electric field. As previously stated, the velocity is proportional to mobility, but with increasing electric field there reaches a point where the velocity saturates. From this point, increasing the field only leads to more collisions with the lattice structure and phonons, which does not help the drift speed. Different semiconductor materials have different saturation velocities and are strong functions of impurities.\n\n# Transistor IV curves and Modes of Operation/Biasing\n\nIn the field of electronics, the most important active device is without a doubt the transistor. A transistor acts as a ON/OFF switch or as an amplifier. It is important to understand the modes of operation for these devices, both voltage controlled (FET) and current controlled (BJT).\n\nFor the MOSFET, the cutoff region is where no current flows through the inversion channel and functions as an open switch. The “Ohmic” or linear region, the drain-source current increases linearly with the drain-source voltage. In this region, the FET is acting as a closed switch or “ON” state. The “Saturation” region is where the drain-source current stays roughly constant despite the drain source voltage increasing. This region has the FET functioning as an amplifier.",
null,
"The image above illustrates that for an enhancement mode FET, the gate-source voltage must be higher than a certain threshold voltage for the device to conduct. Before that happens, there is no channel for charge to flow. From there, the device enters the linear region until the drain-source voltage is high enough to be in saturation.\n\nDC biasing is an extremely important topic in electronics. For example, if a designer wishes for the transistor to operate as an amplifier, the FET must stay within the saturation region. To achieve this, a biasing circuit is implemented. Another condition which effects the operating point of the transistor is temperature, but this can be mitigated with a DC bias circuit as well (this is known as stabilization). “Stability factor” is a measure of how well the biasing circuit achieves this effect. Biasing a MOSFET changes its DC operating point or Q point and is usually implemented with a simple voltage divider circuit. This can be done with a single DC voltage supply. The following voltage transfer curve shows that the MOSFET amplifies best in the saturation region with less distortion than the triode/ohmic region.",
null,
""
]
| [
null,
"https://mbenkerhome.files.wordpress.com/2020/05/unnamed.jpg",
null,
"https://mbenkerhome.files.wordpress.com/2020/04/1.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/04/spec.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/03/1-5.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/03/1-6.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/03/std.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/03/normal.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/03/prog1.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/03/prog2.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/03/prog3.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/03/prog4.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/03/resultprog.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/03/inter.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/04/hfss_sq_model-1.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/04/excitation.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/04/monostatic_hfss.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/04/bistatic.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/03/cascode.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/02/radiospec.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/02/los.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/04/mobility.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/04/hetero.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/04/c1.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/04/c2.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/04/correct.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/04/incorrect.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/02/eli.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/02/triangle.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/02/rfidtable.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/02/inductive-coupling.jpg",
null,
"https://mbenkerhome.files.wordpress.com/2020/02/gitcommit.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/02/density.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/02/ivcurve.png",
null,
"https://mbenkerhome.files.wordpress.com/2020/02/output.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9415418,"math_prob":0.92737377,"size":2543,"snap":"2020-24-2020-29","text_gpt3_token_len":520,"char_repetition_ratio":0.13706183,"word_repetition_ratio":0.0,"special_character_ratio":0.18678726,"punctuation_ratio":0.08658009,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9568058,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68],"im_url_duplicate_count":[null,7,null,9,null,10,null,9,null,9,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,5,null,3,null,3,null,6,null,6,null,7,null,7,null,7,null,7,null,6,null,6,null,9,null,9,null,2,null,4,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-04T01:46:52Z\",\"WARC-Record-ID\":\"<urn:uuid:d29d241c-39d7-447f-b33d-76ca0d1d9594>\",\"Content-Length\":\"184839\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d69b04c2-c5ff-46f2-ac6f-b73af3a91408>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f796717-9f1f-45ff-83de-f9bc4c6de433>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://rfphotonicslab.org/category/uncategorized/page/2/\",\"WARC-Payload-Digest\":\"sha1:IGPAZSP2FZ3YO3YH5I43FRPRNQITTRQF\",\"WARC-Block-Digest\":\"sha1:35UI5UN27S3AIUHSM5TKNIVSFJ7SRDJK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655883961.50_warc_CC-MAIN-20200704011041-20200704041041-00524.warc.gz\"}"} |
http://chromatography-online.org/Retention/rs_1_13.php | [
"# Principles and Practice of Chromatography - Factors Controlling Retention > Page 13\n\nThe peak height (h) is the distance between the peak maximum and the base line geometrically produced beneath the peak.\n\nThe peak width (w) is the distance between each side of a peak measure at 0.6065 of the peak height (ca 0.607h). The peak width measured at this height is equivalent to two standard deviations (2s) of the Gaussian curve and thus has significance when dealing with chromatography theory.\n\nThe peak width at half height (w0.5) is the distance between each side of a peak measured at half the peak height. The peak width measured at half height has no significance with respect to chromatography theory.\n\nThe peak width at the base (wB) is the distance between the intersections of the tangents drawn to the sides of the peak and the peak base geometrically produced. The peak width at the base is equivalent to four standard deviations (4s) of the Gaussian curve and thus also has significance when dealing with chromatography theory.\n\n# Factors Controlling Retention\n\nThe equation for the retention volume (Vr), as derived from the Plate theory (see ThePlate Theory and Extensions ) is as follows,\n\nVr = Vm + KVS"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.94370145,"math_prob":0.9959594,"size":1208,"snap":"2019-51-2020-05","text_gpt3_token_len":264,"char_repetition_ratio":0.15863787,"word_repetition_ratio":0.16,"special_character_ratio":0.20860927,"punctuation_ratio":0.056074765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98765105,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T13:16:24Z\",\"WARC-Record-ID\":\"<urn:uuid:f9d6bc82-da55-43c5-81fb-7c0466646f49>\",\"Content-Length\":\"15643\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c0f66751-58c5-46f0-be83-40a5a4bbccfd>\",\"WARC-Concurrent-To\":\"<urn:uuid:1a0c57ef-2196-4f46-bf50-5891e1bae83e>\",\"WARC-IP-Address\":\"70.32.66.44\",\"WARC-Target-URI\":\"http://chromatography-online.org/Retention/rs_1_13.php\",\"WARC-Payload-Digest\":\"sha1:J5T3CP3VQIZCQUNHBIZ6HK5PM5Q5AGT6\",\"WARC-Block-Digest\":\"sha1:RUIPNFOAIPEZVCT6UHGWSGIBIBBIEWHL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540543850.90_warc_CC-MAIN-20191212130009-20191212154009-00305.warc.gz\"}"} |
https://www.codingninjas.com/codestudio/problems/print-diagonal_919819 | [
"",
null,
"1\n\n# Print Diagonal\n\nDifficulty: MEDIUM",
null,
"Contributed By\nArchit Sharma\nAvg. time to solve\n30 min\nSuccess Rate\n80%\n\nProblem Statement\n\n#### Example:",
null,
"#### Following will be the output of the above matrix:\n\n``````1\n5 2\n9 6 3\n13 10 7 4\n14 11 8\n15 12\n16\n``````\n##### Input Format:\n``````The first line contains an Integer 'T' which denotes the number of test cases or queries to be run. Then the test cases follow.\n\nThe first line of each test case contains two space-separated integers ‘N’ and ‘M’ denoting the number of rows and columns of the matrix respectively.\n\nN’ lines follow. Each of the next ‘N’ lines contains ‘M’ space-separated integers separated by space.\n``````\n##### Output Format:\n``````For each test case, return a 2D vector containing all elements of the matrix in a diagonal fashion.\n\nThe output of each test case should be printed in a separate line.\n``````\n##### Note:\n``````You are not required to print anything, it has already been taken care of. Just implement the function.\n``````\n##### Constraints :\n``````1 <= T <= 10\n1 <= N <= 100\n1 <= M <= 100\n1 <= mat[i][j] <= 100\n\nTime Limit : 1 sec.\n``````\n##### Sample Input 1:\n``````2\n4 6\n1 2 3 4 5 6\n7 8 9 10 11 12\n13 14 15 16 17 18\n19 20 21 22 23 24\n4 4\n1 2 3 4\n6 7 8 9\n11 12 13 14\n16 17 18 19\n``````\n##### Sample Output 1:\n``````1\n7 2\n13 8 3\n19 14 9 4\n20 15 10 5\n21 16 11 6\n22 17 12\n23 18\n24\n1\n6 2\n11 7 3\n16 12 8 4\n17 13 9\n18 14\n19\n``````\n##### Explanation For Sample Output 1 :\n``````Test Case 1:\n``````",
null,
"``````In the above pic, arrow lines represent the diagonals of the matrix which are to be returned.\nTest Case 2:\n``````",
null,
"``````In the above pic, arrow lines represent the diagonals of the matrix which are to be returned.\n``````\n##### Sample Input 2:\n``````2\n5 4\n1 2 3 4\n5 6 7 8\n9 10 11 12\n13 14 15 16\n17 18 19 20\n2 2\n8 3\n6 1\n``````\n##### Sample Output 2:\n``````1\n5 2\n9 6 3\n13 10 7 4\n17 14 11 8\n18 15 12\n19 16\n20\n8\n6 3\n1\n``````",
null,
"",
null,
"",
null,
"Console"
]
| [
null,
"https://s3-ap-southeast-1.amazonaws.com/codestudio.codingninjas.com/codestudio/assets/images/ps_upvote.svg",
null,
"https://www.codingninjas.com/codestudio/problems/print-diagonal_919819",
null,
"https://files.codingninjas.in/screenshot-from-2021-06-15-16-47-27-11104.png",
null,
"https://files.codingninjas.in/main-qimg-2b51d475c6a7b9cbc87189a2eaeaca8f-11105.png",
null,
"https://files.codingninjas.in/diagonal-order-of-matrix-1-11106.png",
null,
"https://s3-ap-southeast-1.amazonaws.com/codestudio.codingninjas.com/codestudio/assets/icons/reset-code-dark.svg",
null,
"https://s3-ap-southeast-1.amazonaws.com/codestudio.codingninjas.com/codestudio/assets/icons/full-screen-dark.svg",
null,
"https://s3-ap-southeast-1.amazonaws.com/codestudio.codingninjas.com/codestudio/assets/icons/copy-code.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.84076494,"math_prob":0.93403643,"size":1004,"snap":"2021-43-2021-49","text_gpt3_token_len":264,"char_repetition_ratio":0.128,"word_repetition_ratio":0.104166664,"special_character_ratio":0.28984064,"punctuation_ratio":0.09952607,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97452384,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,1,null,3,null,1,null,1,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T05:21:35Z\",\"WARC-Record-ID\":\"<urn:uuid:948c8f87-98cf-4afe-8754-8c9e547d4b32>\",\"Content-Length\":\"206278\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f6ce9853-05e5-4110-a825-8bd8a05ab783>\",\"WARC-Concurrent-To\":\"<urn:uuid:28bc099e-1d09-434a-a37a-788bab3b0b67>\",\"WARC-IP-Address\":\"52.85.131.87\",\"WARC-Target-URI\":\"https://www.codingninjas.com/codestudio/problems/print-diagonal_919819\",\"WARC-Payload-Digest\":\"sha1:KTNDBLKAIT3OXT5WFBDJBPGQMLV2OPFD\",\"WARC-Block-Digest\":\"sha1:OGMND4MGVYCVGOVW7MUTSBU6UAM3LD4J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358688.35_warc_CC-MAIN-20211129044311-20211129074311-00536.warc.gz\"}"} |
https://testbook.com/question-answer/the-per-unit-impedance-of-a-transformer-is--6083cb5285734d23196ec680 | [
"# The per unit impedance of a transformer is:\n\nThis question was previously asked in\nSSC JE EE Previous Paper 11 (Held on: 24 March 2021 Morning)\nView all SSC JE EE Papers >\n1. larger if computed from primary side than from secondary side\n2. the same whether computed from primary or secondary side\n3. always zero\n4. always infinity\n\nOption 2 : the same whether computed from primary or secondary side\nFree\nCT 1: Basic Concepts\n19163\n10 Questions 10 Marks 6 Mins\n\n## Detailed Solution\n\nPer unit system:\n\nIt is usual to express voltage, current, voltamperes and impedance of an electrical circuit in per unit (or percentage) of base or reference values of these quantities.\n\nThe Per Unit value of any quantity is defined as\n\nPU value = actual value/base value\n\nPer unit system in transformers:\n\nThe per unit impedance of a transformer is the same whether computed from primary or secondary side so long as the voltage bases on the two sides are in the ratio of transformation (equivalent per phase ratio of a three-phase transformer which is the same as the ratio of line-to-line voltage rating).\n\nTransformation ratio of transformer is given by K = V2/V1 = E2/E1 = N2/N1.\n\nWhere N1 is the number of primary turns\n\nVis the primary voltage\n\nN2 is the number of secondary turns\n\nVis the secondary voltage"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8766785,"math_prob":0.9675161,"size":809,"snap":"2021-43-2021-49","text_gpt3_token_len":178,"char_repetition_ratio":0.1552795,"word_repetition_ratio":0.0,"special_character_ratio":0.21384425,"punctuation_ratio":0.04605263,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98669845,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-03T06:19:25Z\",\"WARC-Record-ID\":\"<urn:uuid:dcebb0ed-3ac0-426f-951f-445e7487aa9e>\",\"Content-Length\":\"125933\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d9c917a0-ab3a-42eb-bd10-ed234bed1d11>\",\"WARC-Concurrent-To\":\"<urn:uuid:5fa84136-811b-4c05-ba5e-0dbeeb58c697>\",\"WARC-IP-Address\":\"104.22.45.238\",\"WARC-Target-URI\":\"https://testbook.com/question-answer/the-per-unit-impedance-of-a-transformer-is--6083cb5285734d23196ec680\",\"WARC-Payload-Digest\":\"sha1:PVKEFLYPUHQY2WNNGZFND4C4HGWVOAZI\",\"WARC-Block-Digest\":\"sha1:FXDRXZY7VC6EJFKMU3IFG63OCFSGGDEF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362605.52_warc_CC-MAIN-20211203060849-20211203090849-00232.warc.gz\"}"} |
https://charisontreetopspa.com/irvinebank/cmiss-sas-example-in-an-array-sas.php | [
"# An sas sas array in example cmiss\n\nHome » Irvinebank » Cmiss sas example in an array sas\n\n## SAS Delete empty rows in SAS - ListenData",
null,
"Macro Arrays Make %DO-Looping Easy. SAS calculates descriptive statistics for the non For example, the argument The following example shows how you can use a variable array in a SUM function, 14/11/2014 · SAS provides several functions to test for missing values but in this post we will focus on MISSING(), CMISS() and NMISS() functions. The NMISS() function.\n\n### sas base programming Flashcards Quizlet\n\nPharmaSUG 2015 Paper BB16 Unpacking an Excel Cell. Missing Values in SAS Deepanshu Bhalla 5 Comments SAS. In SAS, Numeric and Character missing values are represented differently. CMISS : The CMISS(), SAS Procedures / Check if array is empty; Check if array is empty. or for character array. e = cmiss(of ) eq dim(g); in the following example:.\n\nMissVars=cmiss (of Var1- Var4); DO WHILE EXAMPLE: The alternate syntax is often used when the array elements are defined with a SAS variable list. array Arrays from AtoZ Phil Spector SAS arrays can be used for simple repetitive tasks, Examples: array x x1-x3; array check{5}\n\nAre You Missing Out? Working with Missing Values to Make the Most of however many SAS® programmers are unaware of and C, in the example to the left have Start studying sas base programming. Learn vocabulary, Permanent SAS libraries are stored until you delete them. to reference array element\n\nMissVars=cmiss (of Var1- Var4); DO WHILE EXAMPLE: The alternate syntax is often used when the array elements are defined with a SAS variable list. array Missing Data Report with array names(*) f_: ; # A MISSING DATA REPORT # TRY SAME SAS CODE WITH ANOTHER DATA SET\n\nSAS : Delete empty rows in SAS SAS : Delete empty rows from a NMISS function checks the numeric of missing numeric values and CMISS checks the number of Example: An array with more than one dimension is known as a multidimensional array. A SAS array is simply a convenient way of temporarily identifying a group of\n\n6/08/2008 · Re: N and Nmiss Showing 1-16 of 16 That could be changed but I didn't see a need for this example. (Windows XP, SAS 9.2) TRANSPOSE+data step+CMISS Q: How can one subset observations to exclude those which have missing values for all variables? For example: data test; length var1 \\$2 yyw 8 var2 8 zzt \\$1 yy3 8 tt4\n\nIF you wanted all variable instead of arrays use the NMISS and CMISS functions Fundamental Missing with array. (even though it's actually written in sas You can use variable lists to assign an array in a SAS DATA step. For example, in functions without creating an array. For example, if cmiss (of _ALL_) = 0;\n\nExample 1: Using the DATALINES Statement. In this example, SAS reads a data line and assigns values to two character variables, NAME and DEPT, for each observation in Missing Values in SAS Magnus Mengelbier, Limelogic Ltd, The functions and methods are discussed with DATA step examples, CMISS() and NMISS() functions\n\nSAS calculates descriptive statistics for the non For example, the argument The following example shows how you can use a variable array in a SUM function Count missing values in observations. let’s count missing values in the SAS DATA step by using the CMISS The following example uses an array of\n\nFor example, if a two-dimensional array is passed in For details about the ARRAY statement in the base SAS language, see SAS Language Reference: Dictionary. You can use variable lists to assign an array in a SAS DATA step. For example, in functions without creating an array. For example, if cmiss (of _ALL_) = 0;\n\nArrays in SAS. This seminar is In the next example we want to create a variable called new1 which A more subtle usage of arrays. One issue in SAS data The CMISS function does not convert any argument. The NMISS function converts all arguments to numeric values.\n\nWhat is a SAS array? A SAS array each array element in this example has a position number in the array list from 1 to 5 Convenience of arrays. Arrays Made Easy: An Introduction to Arrays and Array A SAS array is a The value of n will be the element’s position within the array. For example,\n\nSyntax of WHERE Expression A WHERE expression is a type of SAS expression that defines a condition for selecting observations. For example, you can use For example, if a two-dimensional array is passed in For details about the ARRAY statement in the base SAS language, see SAS Language Reference: Dictionary.\n\nWould you like to better understand how to work with missing values in SAS? Example 2 – Using CMISS to Count Missing Values in Character Variables and %DO_OVER macros are analogous to the ARRAY and DO OVER statements in the SAS data step language, macro array. Example:\n\nExample 1: Using the DATALINES Statement. In this example, SAS reads a data line and assigns values to two character variables, NAME and DEPT, for each observation in For example, if a two-dimensional array is passed in For details about the ARRAY statement in the base SAS language, see SAS Language Reference: Dictionary.\n\nHow do you sort the values by ascending order when concantenating for SAS? eg. In this example I am do ASCENDING order when concantenating. sas sort array SAS® Functions - Simple But well as several examples of popular functions, will be you are not yet satiated, a vast array 01 functions are\n\n### Statistical Computing Seminars Arrays in SAS IDRE Stats",
null,
"SAS Examples Arrays and Variable Lists queirozf.com. examples arrays The first program illustrates basic array procedures. In the final section, elements of dblNegSubArray are specified by using numeric expressions, Would you like to better understand how to work with missing values in SAS? Example 2 – Using CMISS to Count Missing Values in Character Variables.\n\n### Missing Data Report SAS",
null,
"MISSING() NMISS() and the CMISS() functions Ravi Mandal. For example, if a two-dimensional array is passed in For details about the ARRAY statement in the base SAS language, see SAS Language Reference: Dictionary. If you inadvertently use a function name as the name of the array, SAS treats parenthetical references that involve the Example: An array with one dimension can.",
null,
"Home » SAS » SAS Arrays and DO Loop Made Easy. SAS Arrays and DO Loop Made Easy Deepanshu Bhalla 10 Comments SAS. In the example above, ABC is an array-name, Array Functions Three SAS functions that pertain specifically to arrays Function Definition Dim(arrayname ) Temporary Array - Example\n\nSAS statements that accept variable lists include the KEEP You can use variable lists to assign an array in a SAS DATA step. For example, if cmiss (of _ALL_ Start studying SAS Programming 1 and 2. Learn When you only need the array to perform a calculation or Create a SAS data set from the format using the\n\n3/05/2016 · In this video, I show how to use arrays to manipulate multiple variables at the same time. You can download the code on www.phdinfinance.org SAS Arrays - Learn SAS in simple and easy steps starting from basic to advanced concepts with examples including Overview, Environment, User Interface, Program\n\nExamples on how to use arrays and variable lists in SAS. Three SAS Programs that use Arrays Example #1 . This is an example of a simple array. Arrays are used with do loops to process a list of variables in the same manner.\n\nThe CMISS function does not convert any argument. The NMISS function converts all arguments to numeric values. MissVars=cmiss (of Var1- Var4); DO WHILE EXAMPLE: The alternate syntax is often used when the array elements are defined with a SAS variable list. array\n\nIt is the ARRAY statement that makes the CMISS function convenient. If the variables are contiguous in the data set (as in this example), you can use the double-dash You can use variable lists to assign an array in a SAS DATA step. For example, in functions without creating an array. For example, if cmiss (of _ALL_) = 0;\n\nExample 1: One-dimensional Array. In this example, DIM returns a value of 5. Therefore, SAS repeats the statements in the DO loop five times. array big{5} weight sex Example 1: One-dimensional Array. In this example, DIM returns a value of 5. Therefore, SAS repeats the statements in the DO loop five times. array big{5} weight sex\n\nHaving Fun with RACE Derivation in DM Domain logical expression and SAS functions in our daily SAS programming. The following example shows SAS array is a Would you like to better understand how to work with missing values in SAS? Example 2 – Using CMISS to Count Missing Values in Character Variables\n\nIF you wanted all variable instead of arrays use the NMISS and CMISS functions Fundamental Missing with array. (even though it's actually written in sas Provides comprehensive reference information for the Base SAS language, FINDC searches for the characters in this list This example searches for all of\n\nSAS Procedures / Check if array is empty; Check if array is empty. or for character array. e = cmiss(of ) eq dim(g); in the following example: Missing Values in SAS Magnus Mengelbier, Limelogic Ltd, The functions and methods are discussed with DATA step examples, CMISS() and NMISS() functions\n\nProvides comprehensive reference information for the Base SAS language, FINDC searches for the characters in this list This example searches for all of SAS® Functions - Simple But well as several examples of popular functions, will be you are not yet satiated, a vast array 01 functions are\n\nFunctions and CALL Routines CMISS Function Definitions of Functions and CALL Routines A SAS function performs a computation or system manipulation on ar Arrays from AtoZ Phil Spector SAS arrays can be used for simple repetitive tasks, Examples: array x x1-x3; array check{5}\n\narray x (0, ., .a, 1e-12, CMISS Function Counts the number of missing character or numeric New Features in SAS 9.2 SAS. Standards Assessments comparisons as multiplication equations. Example 1: array.html for additional practice using arrays for multiplication without the\n\nHow to mimic the N function for character variables using Data Step, miss=cmiss (of disp1-disp8 your super cool SAS tips & techniques. These code examples are Using Arrays in SAS Because a variable list is not specified in this example, SAS uses the name of the array (WEIGHT) and adds a numeric suffix",
null,
"SAS ® ARRAYS: A BASIC TUTORIAL examples • Using arrays in an IF-THEN statement A SAS array associates a group of variables so Missing Values in SAS Magnus Mengelbier, Limelogic Ltd, The functions and methods are discussed with DATA step examples, CMISS() and NMISS() functions"
]
| [
null,
"https://charisontreetopspa.com/img/0c312eb683f2858eeadc89782197acde.jpg",
null,
"https://charisontreetopspa.com/img/802406.jpg",
null,
"https://charisontreetopspa.com/img/cmiss-sas-example-in-an-array-sas.jpg",
null,
"https://charisontreetopspa.com/img/188754.jpg",
null,
"https://charisontreetopspa.com/img/cmiss-sas-example-in-an-array-sas-2.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7494479,"math_prob":0.737724,"size":10475,"snap":"2020-45-2020-50","text_gpt3_token_len":2315,"char_repetition_ratio":0.17104383,"word_repetition_ratio":0.4143991,"special_character_ratio":0.21088305,"punctuation_ratio":0.09919436,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.9715394,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-27T12:03:21Z\",\"WARC-Record-ID\":\"<urn:uuid:fd5ff90c-7797-4e40-b220-c384f6a08ea5>\",\"Content-Length\":\"36781\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4d82ef7c-ad34-4511-bea2-fe562c8b6c30>\",\"WARC-Concurrent-To\":\"<urn:uuid:ec0648ae-1fe0-4eac-8f48-766da93829b8>\",\"WARC-IP-Address\":\"149.28.27.86\",\"WARC-Target-URI\":\"https://charisontreetopspa.com/irvinebank/cmiss-sas-example-in-an-array-sas.php\",\"WARC-Payload-Digest\":\"sha1:QEGH74CYT7PEGVTPMA2R7AP3UEYK3KTF\",\"WARC-Block-Digest\":\"sha1:FTQS6BJ7SR3PJVOKET5LCBV6YJCAK7WE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107894175.55_warc_CC-MAIN-20201027111346-20201027141346-00506.warc.gz\"}"} |
https://apcocoa.uni-passau.de/wiki/index.php?title=ApCoCoA-1:DA.LPot&diff=cur&oldid=8847 | [
"Difference between revisions of \"ApCoCoA-1:DA.LPot\"\n\nDA.LPot\n\nComputes the leading power of a differential polynomial.\n\nSyntax\n\nDA.LPot(F:POLY):POLY\n\nDescription\n\nDA.LPot returns the leading power of polynomial F wrt. the current differential term order, or the hereby induced ranking respectively.\n\n• @param F A differential polynomial.\n\n• @return The leading power of F.\n\nExample\n\nUse QQ[x[1..2,0..20]];\nUse QQ[x[1..2,0..20]], Ord(DA.DiffTO(\"Lex\"));\nDA.LPot(x[1,1]^2x[1,2]^2 + 1/4x[1,2]);\n-------------------------------\nx[1,2]^2\n-------------------------------"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.527922,"math_prob":0.97435224,"size":2194,"snap":"2022-05-2022-21","text_gpt3_token_len":717,"char_repetition_ratio":0.1694064,"word_repetition_ratio":0.22026432,"special_character_ratio":0.37511393,"punctuation_ratio":0.2080537,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96451604,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-25T03:11:10Z\",\"WARC-Record-ID\":\"<urn:uuid:84240c15-8f88-4632-97dc-2746d96f37e3>\",\"Content-Length\":\"28857\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:21a0d706-8f69-417e-a35c-c57635bc3bf5>\",\"WARC-Concurrent-To\":\"<urn:uuid:c2e764ca-8bcc-43fc-a05b-79f3ead59931>\",\"WARC-IP-Address\":\"132.231.51.76\",\"WARC-Target-URI\":\"https://apcocoa.uni-passau.de/wiki/index.php?title=ApCoCoA-1:DA.LPot&diff=cur&oldid=8847\",\"WARC-Payload-Digest\":\"sha1:RWHCJ5BJCCVSFVPXRF6W4P6XS5AYGVDI\",\"WARC-Block-Digest\":\"sha1:BOS6QTQQ7KDL2UJ74B5U3FSLFZYBQGZL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304749.63_warc_CC-MAIN-20220125005757-20220125035757-00291.warc.gz\"}"} |
https://www.sourcetable.com/formula/gauss | [
"# GAUSS\n\nFormulas / GAUSS\nCalculate the probability of a standard normal population member falling between the mean and standard deviations.\n`GAUSS(z)`\n• z - a number\n\n## Examples\n\n• `=GAUSS(2)`\n\nThe GAUSS function in Sourcetable returns the probability that a member of a standard normal population will fall between the mean and the provided standard deviation from the mean. For example, this would return the probability that a member of a standard normal population would fall between the mean and 2 standard deviations from the mean.\n\n• `=GAUSS(-1)`\n\nThe GAUSS Function is useful when performing statistical analysis and calculating probabilities. For example, if a sample population has a mean of 0 and a standard deviation of 1, and would return the probability that a member of the population would be less than -1.\n\n• `=GAUSS(3)`\n\nThe GAUSS Function is also useful when calculating z-scores. A z-score is a measure of how many standard deviations a value is from the mean. For example, if a sample population has a mean of 5 and a standard deviation of 2, and would return the probability that a member of the population would have a z-score of 3.\n\n• `=GAUSS(2)`\n\nThe GAUSS Function is also useful when calculating confidence intervals. For example, if a sample population has a mean of 10 and a standard deviation of 4, and would return the probability that a member of the population would have a confidence interval of 10 ± 2.\n\n## Summary\n\nThe GAUSS function is used to calculate the probability of a standard normal population member falling between a mean and its standard deviation. It requires a numerical argument to work.\n\n• The GAUSS function calculates the probability that a standard normal population will fall between the mean and a specified z-value from the mean, and returns a number as its argument.\n• The number returned by the GAUSS function is the probability."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9368935,"math_prob":0.9944165,"size":1433,"snap":"2023-14-2023-23","text_gpt3_token_len":316,"char_repetition_ratio":0.17214836,"word_repetition_ratio":0.31932774,"special_character_ratio":0.20655966,"punctuation_ratio":0.06716418,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999739,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T19:52:55Z\",\"WARC-Record-ID\":\"<urn:uuid:2a0cad02-c479-4999-975e-cb6cf6f93b60>\",\"Content-Length\":\"56158\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5f456891-2c96-4c29-b5e8-d2e282a36ce8>\",\"WARC-Concurrent-To\":\"<urn:uuid:139b3bb8-7698-4c24-b0c2-2d4df666f50e>\",\"WARC-IP-Address\":\"40.64.128.226\",\"WARC-Target-URI\":\"https://www.sourcetable.com/formula/gauss\",\"WARC-Payload-Digest\":\"sha1:FRVW5OMWUA6D4LPLIJ4X6CDFKH3ETJ3Y\",\"WARC-Block-Digest\":\"sha1:2HS27NDG7NCI7TPMQAOC2W3ITWC5OWDI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224654012.67_warc_CC-MAIN-20230607175304-20230607205304-00162.warc.gz\"}"} |
https://windows-hexerror.linestarve.com/q/so57932465-C-Access-Violation-reading-location-0xDDDDDDCD-when-I-try-to-delete-an-array-UPDATED | [
"# C++ Access Violation reading location 0xDDDDDDCD when I try to delete an array UPDATED\n\n0\n\nI'm working on a homework assignment. I'm trying to overload the \"=\" operator for an Array class I'm creating so that it will assign a newly created array with the same values as another array. This seems to work. The array is created and the data is copied over. I also check the location of the arrays first element and it is different than the original so I don't think it's trying to delete an array that's already deleted.\n\nI've tried messing around with my destructor, but I honestly have no idea where this is coming from. If anyone has any debugging strategies that might help, I'd love to hear them as well.\n\nDriver.cpp\n\n``````\nint main ()\n{\n//Initialize\nint size = 0;\nchar fill = '\\0';\n\nstd::cout << \"How long should the array be?\" << std::endl;\nstd::cin >> size;\nstd::cout << \"Choose fill character.\" << std::endl;\nstd::cin >> fill;\n\n//Create array & Print array details\nArray* arr = new Array(size, fill);\nstd::cout << \"The array size is: \" << arr->size() << std::endl;\nstd::cout << \"max size: \" << arr->max_size() << std::endl;\nstd::cout << \"The contents of the array is: \";\narr->printArr();\nstd::cout << std::endl;\n\n//Create new array & set it's values equal to old array\nArray* arr2 = new Array();\narr2 = arr;\n\nstd::cout << \"The array size is: \" << arr2->size() << std::endl;\nstd::cout << \"max size: \" << arr2->max_size() << std::endl;\nstd::cout << \"The contents of the array is: \";\narr2->printArr();\n\n//Deallocate memory\ndelete arr;\narr = nullptr;\ndelete arr2;\narr2 = nullptr;\n\n//Checking for memory leaks\n_CrtDumpMemoryLeaks();\n\nreturn 0;\n}\n``````\n\nArray.cpp file\n\n``````//Define MAX SIZE so that it can be easily changed.\n#define MAX_SIZE_ 200\n\n#include \"Array.h\"\n#include <iostream>\n#include <stdexcept>\n\nArray::Array (void)\n:data_ (new char[MAX_SIZE_]),\ncur_size_ (0),\nmax_size_ (MAX_SIZE_)\n{ }\n\n//Assigns the initial size of the array and fills each element with the character stored in fill.\nArray::Array (size_t length, char fill)\n: data_ (new char[length]),\ncur_size_ (length),\nmax_size_ (length)\n{\n//Fill each element with the character passed in to the function.\nfor(int i = 0; i < length; i++)\n{\nthis-> data_[i] = fill;\n}\n\nstd::cout << &this->data_ << std::endl;\n}\n\n//Destructor\nArray::~Array (void)\n{\ndelete[] this->data_;\nthis->data_ = nullptr;\n}\n\n//Sets new array equal to rhs.\nconst Array & Array::operator = (const Array & rhs)\n{\n//Set current and max size values to new array.\nthis->max_size_ = rhs.max_size_;\nthis->cur_size_ = rhs.cur_size_;\n\n//Copy data from rhs.data_ to new array's data_\nfor(int i = 0; i < rhs.cur_size_; i++)\n{\nthis->data_[i] = rhs.data_[i];\n}\n\nreturn *this;\n}\n\n//Print the contents of the array.\nvoid Array::printArr(void)\n{\nfor (int i = 0; i < (this->cur_size_) ; i++)\n{\nstd::cout << this->data_[i];\n}\n}\n``````\n``````\nExpected Results: The program displays information about the different arrays, then deletes them with no memory leaks.\n\nActual Results: The program displays all the correct data for both arrays and is able to delete the first array without a hitch, but runs into an exception when calling:\n\n``````\n``````delete[] this->data_;\n``````\n``````\non the second array.\n\n> Exception thrown at 0x5D13DB1B (ucrtbased.dll) in driver.exe: 0xC0000005: Access violation reading location 0xDDDDDDCD\n\nThanks for any help!\n``````\nc++\nvisual-studio\ndebugging\n\n0\n\nWhen you do `arr2 = arr;` you copy the pointer (memorty address) hold by `arr` into `arr2`:\n\n``````Array* arr2 = new Array();\narr2 = arr;\n``````\n\nSo after that call, both `arr2` and `arr` hold the same pointer (point to the same object). As of that `delete arr2;` will delete the same object you already deleted when you did `delete arr;` two lines before:\n\n``````delete arr;\narr = nullptr;\ndelete arr2;\n``````\n\nSo doing `delete arr2;` here causes already undefine behavior. At that point, anything could happen.\n\nUser contributions licensed under CC BY-SA 3.0"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7925779,"math_prob":0.83512,"size":3781,"snap":"2020-24-2020-29","text_gpt3_token_len":1007,"char_repetition_ratio":0.14402965,"word_repetition_ratio":0.08252427,"special_character_ratio":0.3046813,"punctuation_ratio":0.20942408,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.974547,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-05T11:19:48Z\",\"WARC-Record-ID\":\"<urn:uuid:2f0f6537-7fb5-484e-a46f-c25fa922abc3>\",\"Content-Length\":\"9270\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f557153f-cff0-49d6-87a4-4df547b080d9>\",\"WARC-Concurrent-To\":\"<urn:uuid:71c2561b-4e02-4808-847c-893deb382b15>\",\"WARC-IP-Address\":\"199.38.183.38\",\"WARC-Target-URI\":\"https://windows-hexerror.linestarve.com/q/so57932465-C-Access-Violation-reading-location-0xDDDDDDCD-when-I-try-to-delete-an-array-UPDATED\",\"WARC-Payload-Digest\":\"sha1:NJ3MFTWLJXDMV5F72KXXLZJIVYTOXLO3\",\"WARC-Block-Digest\":\"sha1:7I6CSK4MOTYOFCUUHOCAZ7FDOVICFWH7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655887319.41_warc_CC-MAIN-20200705090648-20200705120648-00530.warc.gz\"}"} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.