Dataset columns:
- URL: string, length 15 to 1.68k
- text_list: list, length 1 to 199
- image_list: list, length 1 to 199
- metadata: string, length 1.19k to 3.08k
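Each record below follows this schema: a source URL, a text_list and an image_list of equal length (text and image slots are interleaved, with null marking an empty slot), and a metadata field stored as a JSON-encoded string carrying classifier scores and the WARC fetch headers. A minimal Python sketch of how such a record could be unpacked; the field names are taken from the rows shown here, and the truncated record content is only a placeholder:

```python
import json

# Placeholder record in the shape of the rows below (text truncated for brevity).
record = {
    "URL": "https://www.colorhexa.com/00affa",
    "text_list": ["# #00affa Color Information ..."],
    "image_list": [None],
    "metadata": '{"ft_lang_label":"__label__en","ft_lang_prob":0.5087674,'
                '"math_prob":0.83423495,"size":3673}',
}

meta = json.loads(record["metadata"])        # metadata is itself a JSON string, so decode it
for text, image in zip(record["text_list"], record["image_list"]):
    if text:                                 # null/empty slots mark image-only positions
        print(record["URL"], meta["ft_lang_prob"], meta["math_prob"], len(text))
```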
https://www.colorhexa.com/00affa
[ "# #00affa Color Information\n\nIn a RGB color space, hex #00affa is composed of 0% red, 68.6% green and 98% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 30% magenta, 0% yellow and 2% black. It has a hue angle of 198 degrees, a saturation of 100% and a lightness of 49%. #00affa color hex could be obtained by blending #00ffff with #005ff5. Closest websafe color is: #0099ff.\n\n• R 0\n• G 69\n• B 98\nRGB color chart\n• C 100\n• M 30\n• Y 0\n• K 2\nCMYK color chart\n\n#00affa color description : Pure (or mostly pure) blue.\n\n# #00affa Color Conversion\n\nThe hexadecimal color #00affa has RGB values of R:0, G:175, B:250 and CMYK values of C:1, M:0.3, Y:0, K:0.02. Its decimal value is 45050.\n\nHex triplet RGB Decimal 00affa `#00affa` 0, 175, 250 `rgb(0,175,250)` 0, 68.6, 98 `rgb(0%,68.6%,98%)` 100, 30, 0, 2 198°, 100, 49 `hsl(198,100%,49%)` 198°, 100, 98 0099ff `#0099ff`\nCIE-LAB 67.694, -10.823, -47.458 32.581, 37.559, 95.97 0.196, 0.226, 37.559 67.694, 48.677, 257.153 67.694, -44.348, -75.589 61.285, -12.354, -49.945 00000000, 10101111, 11111010\n\n# Color Schemes with #00affa\n\n• #00affa\n``#00affa` `rgb(0,175,250)``\n• #fa4b00\n``#fa4b00` `rgb(250,75,0)``\nComplementary Color\n• #00fac8\n``#00fac8` `rgb(0,250,200)``\n• #00affa\n``#00affa` `rgb(0,175,250)``\n• #0032fa\n``#0032fa` `rgb(0,50,250)``\nAnalogous Color\n• #fac800\n``#fac800` `rgb(250,200,0)``\n• #00affa\n``#00affa` `rgb(0,175,250)``\n• #fa0032\n``#fa0032` `rgb(250,0,50)``\nSplit Complementary Color\n• #affa00\n``#affa00` `rgb(175,250,0)``\n• #00affa\n``#00affa` `rgb(0,175,250)``\n• #fa00af\n``#fa00af` `rgb(250,0,175)``\nTriadic Color\n• #00fa4b\n``#00fa4b` `rgb(0,250,75)``\n• #00affa\n``#00affa` `rgb(0,175,250)``\n• #fa00af\n``#fa00af` `rgb(250,0,175)``\n• #fa4b00\n``#fa4b00` `rgb(250,75,0)``\nTetradic Color\n• #0079ae\n``#0079ae` `rgb(0,121,174)``\n• #008bc7\n``#008bc7` `rgb(0,139,199)``\n• #009de1\n``#009de1` `rgb(0,157,225)``\n• #00affa\n``#00affa` `rgb(0,175,250)``\n• #15b9ff\n``#15b9ff` `rgb(21,185,255)``\n• #2ec0ff\n``#2ec0ff` `rgb(46,192,255)``\n• #48c8ff\n``#48c8ff` `rgb(72,200,255)``\nMonochromatic Color\n\n# Alternatives to #00affa\n\nBelow, you can see some colors close to #00affa. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #00eefa\n``#00eefa` `rgb(0,238,250)``\n• #00d9fa\n``#00d9fa` `rgb(0,217,250)``\n• #00c4fa\n``#00c4fa` `rgb(0,196,250)``\n• #00affa\n``#00affa` `rgb(0,175,250)``\n• #009afa\n``#009afa` `rgb(0,154,250)``\n• #0085fa\n``#0085fa` `rgb(0,133,250)``\n• #0071fa\n``#0071fa` `rgb(0,113,250)``\nSimilar Colors\n\n# #00affa Preview\n\nText with hexadecimal color #00affa\n\nThis text has a font color of #00affa.\n\n``<span style=\"color:#00affa;\">Text here</span>``\n#00affa background color\n\nThis paragraph has a background color of #00affa.\n\n``<p style=\"background-color:#00affa;\">Content here</p>``\n#00affa border color\n\nThis element has a border color of #00affa.\n\n``<div style=\"border:1px solid #00affa;\">Content here</div>``\nCSS codes\n``.text {color:#00affa;}``\n``.background {background-color:#00affa;}``\n``.border {border:1px solid #00affa;}``\n\n# Shades and Tints of #00affa\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000a0f is the darkest color, while #fafeff is the lightest one.\n\n• #000a0f\n``#000a0f` `rgb(0,10,15)``\n• #001822\n``#001822` `rgb(0,24,34)``\n• #002636\n``#002636` `rgb(0,38,54)``\n• #003349\n``#003349` `rgb(0,51,73)``\n• #00415d\n``#00415d` `rgb(0,65,93)``\n• #004f71\n``#004f71` `rgb(0,79,113)``\n• #005d84\n``#005d84` `rgb(0,93,132)``\n• #006a98\n``#006a98` `rgb(0,106,152)``\n• #0078ac\n``#0078ac` `rgb(0,120,172)``\n• #0086bf\n``#0086bf` `rgb(0,134,191)``\n• #0094d3\n``#0094d3` `rgb(0,148,211)``\n• #00a1e6\n``#00a1e6` `rgb(0,161,230)``\n• #00affa\n``#00affa` `rgb(0,175,250)``\nShade Color Variation\n• #0fb7ff\n``#0fb7ff` `rgb(15,183,255)``\n• #22bdff\n``#22bdff` `rgb(34,189,255)``\n• #36c3ff\n``#36c3ff` `rgb(54,195,255)``\n• #49c9ff\n``#49c9ff` `rgb(73,201,255)``\n• #5dceff\n``#5dceff` `rgb(93,206,255)``\n• #71d4ff\n``#71d4ff` `rgb(113,212,255)``\n• #84daff\n``#84daff` `rgb(132,218,255)``\n• #98e0ff\n``#98e0ff` `rgb(152,224,255)``\n• #ace6ff\n``#ace6ff` `rgb(172,230,255)``\n• #bfecff\n``#bfecff` `rgb(191,236,255)``\n• #d3f2ff\n``#d3f2ff` `rgb(211,242,255)``\n• #e6f8ff\n``#e6f8ff` `rgb(230,248,255)``\n• #fafeff\n``#fafeff` `rgb(250,254,255)``\nTint Color Variation\n\n# Tones of #00affa\n\nA tone is produced by adding gray to any pure hue. In this case, #738187 is the less saturated color, while #00affa is the most saturated one.\n\n• #738187\n``#738187` `rgb(115,129,135)``\n• #6a8590\n``#6a8590` `rgb(106,133,144)``\n• #60899a\n``#60899a` `rgb(96,137,154)``\n• #578ca3\n``#578ca3` `rgb(87,140,163)``\n• #4d90ad\n``#4d90ad` `rgb(77,144,173)``\n• #4394b7\n``#4394b7` `rgb(67,148,183)``\n• #3a98c0\n``#3a98c0` `rgb(58,152,192)``\n• #309cca\n``#309cca` `rgb(48,156,202)``\n• #26a0d4\n``#26a0d4` `rgb(38,160,212)``\n• #1da3dd\n``#1da3dd` `rgb(29,163,221)``\n• #13a7e7\n``#13a7e7` `rgb(19,167,231)``\n• #0aabf0\n``#0aabf0` `rgb(10,171,240)``\n• #00affa\n``#00affa` `rgb(0,175,250)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #00affa is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5087674,"math_prob":0.83423495,"size":3673,"snap":"2021-21-2021-25","text_gpt3_token_len":1595,"char_repetition_ratio":0.14799672,"word_repetition_ratio":0.011049724,"special_character_ratio":0.5289954,"punctuation_ratio":0.22389792,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9784226,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-20T04:50:36Z\",\"WARC-Record-ID\":\"<urn:uuid:08d7767c-b619-4709-822a-9f3b507669f8>\",\"Content-Length\":\"36247\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bab78c69-6a37-44dd-8c25-df6b0c01be22>\",\"WARC-Concurrent-To\":\"<urn:uuid:6db46efb-b142-4a40-afb3-b6cc6493d418>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/00affa\",\"WARC-Payload-Digest\":\"sha1:M6JAEZJSGMMGHO52EYWOYU633JROROSD\",\"WARC-Block-Digest\":\"sha1:MKQSC52DTPYFU3MFKI54UTRBUH7F2CMH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487655418.58_warc_CC-MAIN-20210620024206-20210620054206-00058.warc.gz\"}"}
https://me.gateoverflow.in/676/gate2016-3-25
[ "# GATE2016-3-25\n\nIn PERT chart, the activity time distribution is\n\n1. Normal\n2. Binomial\n3. Poisson\n4. Beta\n\nrecategorized\n\n## Related questions\n\nA firm uses a turning center, a milling center and a grinding machine to produce two parts. The table below provides the machining time required for each part and the maximum machining time available on each machine. The profit per unit on parts $I$ and $II$ are $Rs. 40$ ... week (minutes) I II Turning machine $12$ $6$ $6000$ Milling center $4$ $10$ $4000$ Grinding Machine $2$ $3$ $1800$\nThe demand for a two-wheeler was $900$ units and $1030$ units in April $2015$ and May $2015$, respectively. The forecast for the month of April $2015$ was $850$ units. Considering a smoothing constant of $0.6$, the forecast for the month of June $2015$ is $850$ units $927$ units $965$ units $970$ units\nIn a single-channel queuing model, the customer arrival rate is $12$ per hour and the serving rate is $24$ per hour. The expected time that a customer is in queue is _______ minutes.\nIn the notation $(a/b/c) : (d/e/f)$ for summarizing the characteristics of queuing situation, the letters $‘b’$ and $‘d’$ stand respectively for service time distribution and queue discipline number of servers and size of calling source number of servers and queue discipline service time distribution and maximum number allowed in system" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8076635,"math_prob":0.9986793,"size":5137,"snap":"2021-31-2021-39","text_gpt3_token_len":1421,"char_repetition_ratio":0.097993374,"word_repetition_ratio":0.618799,"special_character_ratio":0.26299396,"punctuation_ratio":0.06858407,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992674,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T02:24:52Z\",\"WARC-Record-ID\":\"<urn:uuid:f67c3148-247d-4ed3-b648-ac1cce7f5ba5>\",\"Content-Length\":\"54377\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:82a8d3ea-b9aa-4099-b9a0-af37dd43899f>\",\"WARC-Concurrent-To\":\"<urn:uuid:c680b74d-ed52-4e8e-903f-6e9a61e6c0bc>\",\"WARC-IP-Address\":\"172.67.206.99\",\"WARC-Target-URI\":\"https://me.gateoverflow.in/676/gate2016-3-25\",\"WARC-Payload-Digest\":\"sha1:PNAHMUSPZFNBZUWIPY6AL3IRYEOXCOGO\",\"WARC-Block-Digest\":\"sha1:KWBTFPUH6WYU3NVWK3CWL45TZIG3ZNPH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056120.36_warc_CC-MAIN-20210918002951-20210918032951-00131.warc.gz\"}"}
https://www.physicsforums.com/threads/using-laplace-transforms-to-solve-ivps.284971/
[ "# Using Laplace Transforms to solve IVP's\n\n## Homework Statement\n\nsolve the ivp using laplace tranforms\n\ny''+2y'+2y=0 y(0)=1 y'(0)=-3\n\n## The Attempt at a Solution\n\nget to Y(s)[s^2+2s+2]=s-1\n\nY=(s-1)/[s^2+2s+2]\n\n^^^\ndon't know how to simplify the denominator to solve using Laplace transforms. If I had to guess I would say maybe partial fractions but keep getting the wrong answer when I try to use them.\n\nRelated Calculus and Beyond Homework Help News on Phys.org\nYou've pretty much finished it, all you need to recognize that\n\n$$Y(s)=\\frac{s-1}{(s-1)^2+1}$$\n\nthrough completing the square in the denominator. Now can you get that to work with\n\n$$f(t)=L^{-1}\\left\\{\\frac{s-a}{(s-a)^2+b^2} \\right\\} = e^{at}\\cos{(bt)}$$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92008924,"math_prob":0.982702,"size":384,"snap":"2020-45-2020-50","text_gpt3_token_len":134,"char_repetition_ratio":0.09736842,"word_repetition_ratio":0.0,"special_character_ratio":0.328125,"punctuation_ratio":0.024390243,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99960595,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-22T09:33:31Z\",\"WARC-Record-ID\":\"<urn:uuid:17bbd899-e974-4dd0-b659-ff9c9f24b76a>\",\"Content-Length\":\"63589\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3d7bae76-a560-411b-9fef-7096b8fe4514>\",\"WARC-Concurrent-To\":\"<urn:uuid:a4026b8f-c740-4254-ad88-e6a0801f4641>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/using-laplace-transforms-to-solve-ivps.284971/\",\"WARC-Payload-Digest\":\"sha1:QQSLHJTEEE2Q4GNC6KHLJUK7KN46L3T4\",\"WARC-Block-Digest\":\"sha1:BWVHGIGZZN6QPVJTKOES35BTUBGEZF2Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107879362.3_warc_CC-MAIN-20201022082653-20201022112653-00202.warc.gz\"}"}
https://physics.stackexchange.com/questions/436279/why-are-gradient-and-graphs-used-in-labs
[ "# Why are gradient and graphs used in labs? [closed]\n\nThis may seem like a redundant question but I was wondering why gradient calculations are preferred when calculating a value by experiment as opposed to just plugging in raw values me getting an Answer ? I know it's something to do with the fact that the gradient considers a greater number or points and this makes the value more accurate but can someone give me a proper textbook answer ?\n\nFor example, I am doing a capacitor lab where I find the time constant of a capacitor. I plotted a graph of V vs t and and an appropriate linear graph of ln V vs t. My teacher says its preferred to use the gradient of the linear graph of lnv vs t to calculate time constant rather than reading off from the exponential graph of V vs t. I wanted to know why is it more accurate to use the gradient calculations ?\n\n## closed as unclear what you're asking by Aaron Stevens, Mike, garyp, Bill N, ZeroTheHeroOct 23 '18 at 8:01\n\nPlease clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it’s hard to tell exactly what you're asking. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.\n\n• Can you give a little context? – Gilbert Oct 23 '18 at 1:54\n• The question, as it is currently worded, is hard to answer without more details. Can you please edit to include a specific example of the process you are asking about? – Aaron Stevens Oct 23 '18 at 2:11\n• I have edited the question – user122343 Oct 23 '18 at 2:35\n• If you make several calculations using several values from the graph, you will arrive at several values for your result. Which o e is then the correct one from experiment? – Triatticus Oct 23 '18 at 2:50\n• If the physical quantity has an exponential dependence, using a log scale \"shortens\" the graph and displays the exponential behavior because the eye follows lines much better than general curves . – anna v Oct 23 '18 at 3:43\n\nExperimental physics also incorporates uncertainty. You can never say the time constant = 3 ms, but it exists as a value +/- another value. Same goes every measurement you make. On a graph rather than a point you will have fuzzy regions of uncertainty.\n\nFor a linear relation you expect a linear fitted line.\n\nFor an exponential or logarithmic relationship, a linear line is possible if one axis is on logarithmic scale.\n\nFor a power relationship, a line is possible if both axes are logarithmic.\n\nIn all cases, slopes and intercepts allow you to determine two parameters of the equation. If you get a computer to do the fit and read off the value, it is doing all of this behind the scenes anyway.\n\nHaving a linear line is useful in that it allows you to visually assess the fit, e.g. for deviations from \"linearity\" that aren't readily apparent otherwise.\n\nNow back to the errors. Lines of best fit are only the beginning. You can also define lines of worse fit. These help you determine a range of values possible for a given input. But wait. Your input also has an error. The resultant error can be determined by multiplying by the slope.\n\nI am assuming by \"time constant of the capacitor\" you mean the time constant of an RC circuit $$\\tau=RC$$. Let's assume we are looking at a discharging capacitor. 
Then the voltage across the capacitor as a function of time is given by $$V(t)=V_0e^{-t/\\tau}$$ where $$V_0$$ is the initial potential across the capacitor.\n\nNow, let's compare this to what you have discussed in your answer: $$\\ln(V(t))=\\ln\\left(V_0e^{-t/\\tau}\\right)=-\\frac{1}{\\tau}t+\\ln(V_0)$$\n\nSo, as you have said, the gradient, or slope, of $$\\ln(V(t))$$ gives us the desired $$\\tau$$ we want. Slopes are very easy to calculate from linear plots. Even if your data is not perfectly linear, linear models are probably the easiest models to fit data to. You could even get a good estimate of a line of best fit by just drawing one by hand and then determining the slope from there. Pretty simple.\n\nBut what about the original expression for $$V(t)$$? Well this is an exponential function. You would most likely need some sort of program to fit your data to the exponential function. And this is not as simple as a linear function. You could draw an exponential curve you think fits the data, but it would be harder than a line (and the curve you draw might not even be a true exponential function with base $$e$$). Even then, it's harder to pull the time constant from the exponential decay than it is to find the slope of a line.\n\nTherefore, in this context the linear function and its slope are easier to work with. Lines are simple to visualize and work with. Although with today's technology, either method should be fine if you have the software to do it. In the labs I would TA for, we would fit directly to the exponential functions with very few issues." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9430196,"math_prob":0.968973,"size":1711,"snap":"2019-43-2019-47","text_gpt3_token_len":408,"char_repetition_ratio":0.1218512,"word_repetition_ratio":0.0,"special_character_ratio":0.2402104,"punctuation_ratio":0.08595989,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99834704,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-22T19:53:44Z\",\"WARC-Record-ID\":\"<urn:uuid:4cb4f958-a039-47a0-bf82-b57ccb219f51>\",\"Content-Length\":\"130894\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9d840ddb-8fb3-469c-a3c5-f2af6c522210>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ca4a9a7-c8f2-4d4f-9d5e-aa52b1110505>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/436279/why-are-gradient-and-graphs-used-in-labs\",\"WARC-Payload-Digest\":\"sha1:TQY5MTCNI4NYZFANYD5TMAHOKYTKDQQM\",\"WARC-Block-Digest\":\"sha1:L37WYBGBQQD5YXEDB3FQECPPEVWGPREW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496671548.98_warc_CC-MAIN-20191122194802-20191122223802-00185.warc.gz\"}"}
https://bettersolutions.com/excel/functions/areas-function.htm
[ "### AREAS(reference)\n\nReturns the number of areas in a cell range or reference.\n\n reference The reference to a range of cells.\n\n#### Remarks\n\n * The \"reference\" can refer to multiple cell ranges.* The \"reference\" can be a named range or several cell references.* If you want to specify several references as a single argument, then you must include an extra sets of parentheses (see Rows 4 & 5).* An area is a range of contiguous cells or a single cell.* For the Microsoft documentation refer to support.microsoft.com\n\n A B C 1 =AREAS(B1) = 1 2 5 2 =AREAS(B1:C1) = 1 4 10 3 =AREAS(B1:C4) = 1 6 15 4 =AREAS((B1:B4,C1:C4)) = 2 8 20 5 =AREAS((B1:C4,B1:B2,C3:C4)) = 3 6 =AREAS((B1:C2,B1)) = 2 7 =AREAS(B1:B2 B1) = 1 8 =AREAS(B1 B2 B3) = #NULL!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8167196,"math_prob":0.9736582,"size":599,"snap":"2021-31-2021-39","text_gpt3_token_len":132,"char_repetition_ratio":0.1579832,"word_repetition_ratio":0.0,"special_character_ratio":0.23706177,"punctuation_ratio":0.10091743,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.973106,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T16:20:20Z\",\"WARC-Record-ID\":\"<urn:uuid:97cb9ba9-f187-4b5a-b099-9a92ade2380c>\",\"Content-Length\":\"20467\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2594875a-f1d3-4b2e-bb02-b142f635382a>\",\"WARC-Concurrent-To\":\"<urn:uuid:7ee1d781-0c77-4396-b4e5-27a100039ae1>\",\"WARC-IP-Address\":\"160.153.155.173\",\"WARC-Target-URI\":\"https://bettersolutions.com/excel/functions/areas-function.htm\",\"WARC-Payload-Digest\":\"sha1:FDFCNVYWCBZAUYDWXV4LPUPHOOWQNXHU\",\"WARC-Block-Digest\":\"sha1:P7NDEICGOZH6FUWHCMJE6RMQOHXCGUAB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056548.77_warc_CC-MAIN-20210918154248-20210918184248-00672.warc.gz\"}"}
https://softmath.com/math-book-answers/adding-exponents/algebra-1-concepts-and-skills.html
[ "", null, "## What our customers say...\n\nThousands of users are using our software to conquer their algebra homework. Here are some of their experiences:\n\nSo far its great!\nJ.R. Turnston, NY\n\nI am a 9th grade student and always wondered how some students always got good marks in mathematics but could never imagine that Ill be one of them. Hats off to Algebrator! Now I have a firm grasp over algebra and my approach to problem solving is more methodical.\nJon Caswell, MI\n\nMath couldn't be easier with Algebrator. Thanks!\nKelly Brown, NY\n\n## Search phrases used on 2012-12-13:\n\nStudents struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?\n\n• multple order differential equation in metlab\n• roots of exponents\n• Learn KS3 math online axis\n• mixed numbers to percentages converter\n• factoring algebraic equations\n• algebra calculater\n• how to teach integral exponents\n• online equation practice for sixth graders\n• mathamatical puzzels\n• ti89 log\n• free math lesson videos on solving logarithms\n• kumon worksheet\n• slope ti-83 graph\n• factoring trinomials online\n• convert decimal numbers to fractions\n• EXCEL SQUARE\n• FRACTIONS LEAST TO HIGHEST\n• online calculate number of combinations\n• gnuplot linear regression\n• cubic roots in ti 83 plus\n• taks math workbooks\n• interactive worksheets on squres and square roots\n• how to solve a third order equation\n• Ti-84 calculator conic-section art\n• glencoe pre algebra practice workbook answers\n• EXPRESSIONS, VARIABLES, AND EXPONENTS\n• solving negitive integers\n• free on line math solver\n• work sheet for subtraction of integers\n• fractional equation word problems, Math a\n• how to solve the geometry example for 7th grade\n• crossnumber puzzle algebra printable online\n• glencoe algebra 1 skills practice\n• JavaScript complex rational expressions calculator\n• algegra solution\n• rational expression and equation\n• Dividing Integers worksheets\n• distributive property fractions\n• Regents exams on converting fractions , decimals , and percents\n• mcdougal littell algebra 2 how to do the practice problems\n• high school freshman algebra text book\n• solving powers with algebra\n• TI-86 Slope\n• General Aptitude questions for kids\n• tips on college algebra\n• interpolation ti-83 program\n• solving equations worksheet\n• aptitude questions with solutions\n• ppt. 
graphing on coordinate plane\n• square root calculator fraction\n• explain rational exponents\n• \" simplifying fractions \"+\"third grade\"+lessons+free\n• explanation of the square root property\n• graphing logarithms on a ti89\n• \"discrete mathematics and its applications\" sixth edition solution manual\n• step by step how to solve inequalities on ti-84 scientific calculator\n• simplify nth roots\n• java code on convertion of kilogram to newton\n• substitution with systems of equations in algebra/calculator\n• basic math for dummies\n• exponents, worksheet, multiple choice\n• Ti-83 EXP button?\n• real life examples of parabolas\n• homogeneous equation calculator\n• find the least common denominator of variables\n• hel[p + Saxon Algebra II Lesson 7\n• calculating the y-intercept\n• printable math papers\n• negative algebra chart\n• rational expressions, worksheets\n• the algebrator\n• poems with math terms in it\n• how to solve probability with TI-83 PLus\n• worksheets on decimal equations with variable\n• mix number problems\n• online math solver\n• simple trig chart" ]
[ null, "https://softmath.com/r-solver/images/tutor.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82840365,"math_prob":0.9749831,"size":4205,"snap":"2022-27-2022-33","text_gpt3_token_len":948,"char_repetition_ratio":0.13449179,"word_repetition_ratio":0.0,"special_character_ratio":0.20642093,"punctuation_ratio":0.051282052,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995641,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-14T19:38:46Z\",\"WARC-Record-ID\":\"<urn:uuid:259d3c72-54f7-4c04-925e-dd3dbc5d5903>\",\"Content-Length\":\"35516\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1033c1e8-0632-4d0a-8190-f3843aa08722>\",\"WARC-Concurrent-To\":\"<urn:uuid:6ff57bee-0a4f-4d17-9872-ba8f5743ff9d>\",\"WARC-IP-Address\":\"52.43.142.96\",\"WARC-Target-URI\":\"https://softmath.com/math-book-answers/adding-exponents/algebra-1-concepts-and-skills.html\",\"WARC-Payload-Digest\":\"sha1:DS5MUKZAVTCLUQL5IGMM5K62FNDRFTLT\",\"WARC-Block-Digest\":\"sha1:H6PLS5C2QEV267LFWRTI64BWL7AW6G3D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572063.65_warc_CC-MAIN-20220814173832-20220814203832-00388.warc.gz\"}"}
http://www.bitwisemag.com/2017/06/c-programming-for-beginners-variables.html
[ "## Tuesday, 6 June 2017\n\n### C Programming for Beginners: Variables and Types\n\nThis is part 3 of my series on C programming for beginners.\n\nWhen you want to store values in your programs you need to declare variables. A variable is simply a name (more formally, we’ll call it an ‘identifier’) to which some value can be assigned. A variable is like the programming equivalent of a labelled box. You might have a box labelled ‘Petty Cash’ or a variable named pettycash. Just as the contents of the box might vary (as money is put into it and taken out again), so the contents of a variable might change as new values are assigned to it. You assign a value using the equals sign (=).\n\nIn C a variable is declared by stating its data-type (such as int for an integer variable or double for a floating-point variable) followed by the variable name. You can invent names for your variables and, as a general rule, it is best to make those names descriptive.\n\nThis is how to declare a floating-point variable named mydouble with the double data-type:\n\ndouble mydouble;\n\nYou can now assign a floating-point value to that variable:\n\nmydouble = 100.75;\n\nAlternatively, you can assign a value at the same time you declare the variable:\n\ndouble mydouble = 100.75;\n\n### FLOATING-POINT NUMBERS\n\nThere are several data types which can be used when declaring floating point variables in C. The float type represents single-precision numbers; double represents double-precision numbers and long double represents higher precision numbers. In this course, I shall normally use double for floating-point variables.\n\n### INTEGERS AND FLOATS\n\nNow let’s look at a program that uses integer and floating point variables to do a calculation. My intention is to calculate the grand total of an item by starting with its subtotal (minus tax) and then calculating the amount of tax due on it by multiplying that subtotal by the current tax rate. Here I’m assuming that tax rate to be 17.5% or, expressed as a floating point number, 0.175. Then I calculate the final price – the grand total – by adding the tax onto the subtotal. This is my program:\n\n#include <stdio.h>\n\nint main(int argc, char **argv) {\nint subtotal;\nint tax;\nint grandtotal;\ndouble taxrate;\n\ntaxrate = 0.175;\nsubtotal = 200;\ntax = subtotal * taxrate;\ngrandtotal = subtotal + tax;\n\nprintf( \"The tax on %d is %d, so the grand total is %d.\\n\",\nsubtotal, tax, grandtotal );\nreturn 0;\n}\n\nOnce again, I use printf to display the results. Remember that the three place--markers, %d, are replaced by the values of the three matching variables: subtotal, tax and grandtotal.\n\nWhen you run the program, this is what you will see:\n\nThe tax on 200 is 34, so the grand total is 234.\n\nBut there is a problem here. If you can’t see what it is, try doing the same calculation using a calculator. If you calculate the tax, 200 * 0.175, the result you get should be 35. But my program shows the result to be 34.\n\nThis is due to the fact that I have calculated using a floating-point number (the double variable, taxrate) but I have assigned the result to an integer number (the int variable, tax). An integer variable can only represent numbers with no fractional part so any values after the floating point are ignored. That has introduced an error into the code.\n\nThe error is easy to fix. I just need to use floating-point variables instead of integer variables. 
Here is my rewritten code:\n\n#include <stdio.h>\n\nint main(int argc, char **argv) {\ndouble subtotal;\ndouble tax;\ndouble grandtotal;\ndouble taxrate;\n\ntaxrate = 0.175;\nsubtotal = 200;\ntax = subtotal * taxrate;\ngrandtotal = subtotal + tax;\n\nprintf( \"The tax on %.2f is %.2f, so the grand total is %.2f.\\n\",\nsubtotal, tax, grandtotal );\nreturn 0;\n}\n\nThis time all the variables are doubles, so none of the values is truncated. I have also used %f format specifiers to display the floating-point values in the string that I pass to the printf function. In fact, you will see that the format specifiers in the string also include a dot and a number, like this: %.2f. This tells printf to display two digits to the right of the decimal point.\n\nYou can also format a number by specifying its width – that is, the minimum number of characters it should occupy in the string. So if I were to write %3.2f, that would tell printf to format the number in a space that takes up at least 3 characters, with two digits to the right of the decimal point. Try entering different numbers in the format specifiers (e.g. %10.4f) to see the effects these numbers have. Here are examples of numeric formatting specifiers that can be used with printf:\n\n### NUMERIC FORMAT SPECIFIERS\n\n%d   print as decimal integer\n%4d   print as decimal integer, at least 4 characters wide\n%f   print as floating point\n%4f   print as floating point, at least 4 characters wide\n%.2f   print as floating point, 2 digits after the decimal point\n%4.2f   print as floating point, at least 4 wide and 2 after the decimal point\n\nThis series of C programming lessons is based on my book, The Little Book Of C, which is the course text for my online video-based course, C Programming For Beginners, which teaches C programming interactively in over 70 lessons including a source code archive, eBook and quizzes. For information on this course see HERE." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8842157,"math_prob":0.98614436,"size":5134,"snap":"2022-27-2022-33","text_gpt3_token_len":1184,"char_repetition_ratio":0.14210527,"word_repetition_ratio":0.08555555,"special_character_ratio":0.23626801,"punctuation_ratio":0.12631579,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.992795,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-07T11:18:43Z\",\"WARC-Record-ID\":\"<urn:uuid:107994c7-33df-444a-a3df-91e64d0b17e2>\",\"Content-Length\":\"86826\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9fa504cf-4902-4f96-8262-d5b9c9ce7cbd>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac45c9e0-5089-4245-afca-bb110d9c817b>\",\"WARC-IP-Address\":\"172.217.13.243\",\"WARC-Target-URI\":\"http://www.bitwisemag.com/2017/06/c-programming-for-beginners-variables.html\",\"WARC-Payload-Digest\":\"sha1:M3DPINHTCHY4GVBGK6VULXQ4M5DHDNR7\",\"WARC-Block-Digest\":\"sha1:3ZPP6CYOOEA3DLS3ZTUGWFOYSIIECCRV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104690785.95_warc_CC-MAIN-20220707093848-20220707123848-00672.warc.gz\"}"}
https://arxiv.org/abs/1602.04574
[ "math-ph\n\nTitle:Multispecies totally asymmetric zero range process: II. Hat relation and tetrahedron equation\n\nAbstract: We consider a three-dimensional (3D) lattice model associated with the intertwiner of the quantized coordinate ring $A_q(sl_3)$, and introduce a family of layer to layer transfer matrices on $m\\times n$ square lattice. By using the tetrahedron equation we derive their commutativity and bilinear relations mixing various boundary conditions. At $q=0$ and $m=n$, they lead to a new proof of the steady state probability of the $n$-species totally asymmetric zero range process obtained recently by the authors, revealing the 3D integrability in the matrix product construction.\n Comments: 15 pages, minor corrections Subjects: Mathematical Physics (math-ph); Quantum Algebra (math.QA); Exactly Solvable and Integrable Systems (nlin.SI) MSC classes: 81R50, 60C99 Journal reference: Journal of Integrable Systems 2016 1 (1): xyw008 Cite as: arXiv:1602.04574 [math-ph] (or arXiv:1602.04574v2 [math-ph] for this version)\n\nSubmission history\n\nFrom: Atsuo Kuniba [view email]\n[v1] Mon, 15 Feb 2016 06:58:33 UTC (23 KB)\n[v2] Wed, 12 Oct 2016 01:57:38 UTC (23 KB)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8298719,"math_prob":0.92601246,"size":1033,"snap":"2019-43-2019-47","text_gpt3_token_len":269,"char_repetition_ratio":0.07677357,"word_repetition_ratio":0.0,"special_character_ratio":0.25653437,"punctuation_ratio":0.12169312,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9789715,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-20T08:42:53Z\",\"WARC-Record-ID\":\"<urn:uuid:5975bec3-15fa-468a-b8d3-2847902a630d>\",\"Content-Length\":\"19317\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:994face9-411e-4d68-bd05-008ffd7bf037>\",\"WARC-Concurrent-To\":\"<urn:uuid:719c44e9-7586-411f-825b-8d6765d3101e>\",\"WARC-IP-Address\":\"128.84.21.199\",\"WARC-Target-URI\":\"https://arxiv.org/abs/1602.04574\",\"WARC-Payload-Digest\":\"sha1:2W6AVOU5RAMVVX4S7VCNLU5UR6QKN646\",\"WARC-Block-Digest\":\"sha1:D5RBKITGJEAW36WCEZDUZWRGL4MRX4SK\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986705411.60_warc_CC-MAIN-20191020081806-20191020105306-00473.warc.gz\"}"}
https://github.com/probtorch/probtorch
[ "{{ message }}\n\n# probtorch / probtorch Public\n\nProbabilistic Torch is library for deep generative models that extends PyTorch\n\nSwitch branches/tags\nNothing to show\n\n## Files\n\nFailed to load latest commit information.\nType\nName\nCommit time\n\nProbabilistic Torch is library for deep generative models that extends PyTorch. It is similar in spirit and design goals to Edward and Pyro, sharing many design characteristics with the latter.\n\nThe design of Probabilistic Torch is intended to be as PyTorch-like as possible. Probabilistic Torch models are written just like you would write any PyTorch model, but make use of three additional constructs:\n\n1. A library of reparameterized distributions that implement methods for sampling and evaluation of the log probability mass and density functions (now available in PyTorch)\n\n2. A Trace data structure, which is both used to instantiate and store random variables.\n\n3. Objective functions that approximate the lower bound on the log marginal likelihood using Monte Carlo and Importance-weighted estimators.\n\nThis repository accompanies the NIPS 2017 paper:\n\n```@inproceedings{siddharth2017learning,\ntitle = {Learning Disentangled Representations with Semi-Supervised Deep Generative Models},\nauthor = {Siddharth, N. and Paige, Brooks and van de Meent, Jan-Willem and Desmaison, Alban and Goodman, Noah D. and Kohli, Pushmeet and Wood,\nFrank and Torr, Philip},\nbooktitle = {Advances in Neural Information Processing Systems 30},\neditor = {I. Guyon and U. V. Luxburg and S. Bengio and H. Wallach and R. Fergus and S. Vishwanathan and R. Garnett},\npages = {5927--5937},\nyear = {2017},\npublisher = {Curran Associates, Inc.},\nurl = {http://papers.nips.cc/paper/7174-learning-disentangled-representations-with-semi-supervised-deep-generative-models.pdf}\n}```\n\n# Contributors\n\n(in order of joining)\n\n• Jan-Willem van de Meent\n• Siddharth Narayanaswamy\n• Brooks Paige\n• Alban Desmaison\n• Alican Bozkurt\n• Amirsina Torfi\n• Babak Esmaeili\n• Eli Sennesh\n\n# Installation\n\n1. Install PyTorch [instructions]\n\n2. Install this repository from source\n\n``````pip install git+git://github.com/probtorch/probtorch\n``````\n1. Refer to the `examples/` subdirectory for Jupyter notebooks that illustrate usage.\n\n2. To build and read the API documentation, please do the following\n\n``````git clone git://github.com/probtorch/probtorch\ncd probtorch/docs\npip install -r requirements.txt\nmake html\nopen build/html/index.html\n``````\n\n# Mini-Tutorial: Semi-supervised MNIST\n\nModels in Probabilistic Torch define variational autoencoders. Both the encoder and the decoder model can be implemented as standard PyTorch models that subclass `nn.Module`.\n\nIn the `__init__` method we initialize network layers, just as we would in a PyTorch model. In the `forward` method, we additionally initialize a `Trace` variable, which is a write-once dictionary-like object. 
The `Trace` data structure implements methods for instantiating named random variables, whose values and log probabilities are stored under the specifed key.\n\nHere is an implementation for the encoder of a standard semi-supervised VAE, as introduced by Kingma and colleagues \n\n```import torch\nimport torch.nn as nn\nimport probtorch\n\nclass Encoder(nn.Module):\ndef __init__(self, num_pixels=784, num_hidden=50, num_digits=10, num_style=2):\nsuper(self.__class__, self).__init__()\nself.h = nn.Sequential(\nnn.Linear(num_pixels, num_hidden),\nnn.ReLU())\nself.y_log_weights = nn.Linear(num_hidden, num_digits)\nself.z_mean = nn.Linear(num_hidden + num_digits, num_style)\nself.z_log_std = nn.Linear(num_hidden + num_digits, num_style)\n\ndef forward(self, x, y_values=None, num_samples=10):\nq = probtorch.Trace()\nx = x.expand(num_samples, *x.size())\nif y_values is not None:\ny_values = y_values.expand(num_samples, *y_values.size())\nh = self.h(x)\ny = q.concrete(logits=self.y_log_weights(h), temperature=0.66,\nvalue=y_values, name='y')\nh2 = torch.cat([y, h], -1)\nz = q.normal(loc=self.z_mean(h2),\nscale=torch.exp(self.z_log_std(h2)),\nname='z')\nreturn q```\n\nIn the code above, the method `q.concrete` samples or observes from a Concrete/Gumbel-Softmax relaxation of the discrete distribution, depending on whether supervision values `y_values` are provided. The method `q.normal` samples from a univariate normal.\n\nThe resulting trace `q` now contains two entries `q['y']` and `q['z']`, which are instances of a `RandomVariable` class, which stores both the value and the log probability associated with the variable. The stored values are now used to condition execution of the decoder model:\n\n```def binary_cross_entropy(x_mean, x, EPS=1e-9):\nreturn - (torch.log(x_mean + EPS) * x +\ntorch.log(1 - x_mean + EPS) * (1 - x)).sum(-1)\n\nclass Decoder(nn.Module):\ndef __init__(self, num_pixels=784, num_hidden=50, num_digits=10, num_style=2):\nsuper(self.__class__, self).__init__()\nself.num_digits = num_digits\nself.h = nn.Sequential(\nnn.Linear(num_style + num_digits, num_hidden),\nnn.ReLU())\nself.x_mean = nn.Sequential(\nnn.Linear(num_hidden, num_pixels),\nnn.Sigmoid())\n\ndef forward(self, x, q=None):\nif q is None:\nq = probtorch.Trace()\np = probtorch.Trace()\ny = p.concrete(logits=torch.zeros(x.size(0), self.num_digits),\ntemperature=0.66,\nvalue=q['y'], name='y')\nz = p.normal(loc=0.0, scale=1.0, value=q['z'], name='z')\nh = self.h(torch.cat([y, z], -1))\np.loss(binary_cross_entropy, self.x_mean(h), x, name='x')\nreturn p```\n\nThe model above can be used both for conditioned forward execution, but also for generation. 
The reason for this is that `q[k]` returns `None` for variable names `k` that have not been instantiated.\n\nTo train the model components above, Probabilistic Torch provides objectives that compute an estimate of a lower bound on the log marginal likelihood, which can then be maximized with standard PyTorch optimizers:\n\n```import torch\nfrom torch.autograd import Variable\nfrom probtorch.objectives.montecarlo import elbo\nfrom random import random\n# initialize model and optimizer (Adam is an assumed choice for the optimizer)\nenc = Encoder()\ndec = Decoder()\noptimizer = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()))\n# define subset of batches that will be supervised\nsupervise = [random() < 0.01 for _ in data]\n# train model for 10 epochs\nfor epoch in range(10):\n    for b, (x, y) in enumerate(data):\n        x = Variable(x)\n        if supervise[b]:\n            y = Variable(y)\n            q = enc(x, y)\n        else:\n            q = enc(x)\n        p = dec(x, q)\n        optimizer.zero_grad()\n        loss = -elbo(q, p, sample_dim=0, batch_dim=1)\n        loss.backward()\n        optimizer.step()```\n\nFor more details, see the Jupyter notebooks in the `examples/` subdirectory.\n\n# References\n\nKingma, Diederik P, Danilo J Rezende, Shakir Mohamed, and Max Welling. 2014. “Semi-Supervised Learning with Deep Generative Models.” http://arxiv.org/abs/1406.5298." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6778605,"math_prob":0.9509604,"size":6256,"snap":"2021-43-2021-49","text_gpt3_token_len":1596,"char_repetition_ratio":0.099008314,"word_repetition_ratio":0.016969698,"special_character_ratio":0.25287724,"punctuation_ratio":0.18866329,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9965656,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-03T05:17:08Z\",\"WARC-Record-ID\":\"<urn:uuid:94baa4ad-ec69-468b-bf1b-40aa5bbb7371>\",\"Content-Length\":\"223236\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c0111669-a9c5-426e-bc42-12684218fb9e>\",\"WARC-Concurrent-To\":\"<urn:uuid:e13b079a-e753-4c55-9d19-64cdb2b7461b>\",\"WARC-IP-Address\":\"140.82.114.3\",\"WARC-Target-URI\":\"https://github.com/probtorch/probtorch\",\"WARC-Payload-Digest\":\"sha1:KYT2PETYDWNS4CSZOYXZV66TGKSACH4U\",\"WARC-Block-Digest\":\"sha1:AOICWPYAGQ7TSSNV4YJTD4XW2BVTP7BB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362589.37_warc_CC-MAIN-20211203030522-20211203060522-00566.warc.gz\"}"}
http://www.freepatentsonline.com/y2008/0172165.html
[ "Title:\nControl system for internal combustion engine\nKind Code:\nA1\n\nAbstract:\nA control system for an internal combustion engine, which is capable of ensuring excellent fuel economy of the engine and enhancing the responsiveness of the output of the engine when acceleration is demanded. The control system calculates a lift control input for controlling a variable valve lift mechanism, based on a cam phase of a variable cam phase mechanism, and calculates a demanded acceleration indicative of the degree of acceleration demanded of the engine. Further, the control system calculates a value of phase control input for controlling the variable cam phase mechanism with priority to the engine output, and calculates a value of the same with priority to fuel economy of the engine, and selects between the values of phase control input, based on the demanded acceleration.\n\nInventors:\nTagami, Hiroshi (Saitama-ken, JP)\nYasui, Yuji (Saitama-ken, JP)\nApplication Number:\n12/009218\nPublication Date:\n07/17/2008\nFiling Date:\n01/17/2008\nExport Citation:\nAssignee:\nHonda Motor Co., Ltd. (Tokyo, JP)\nPrimary Class:\nOther Classes:\n123/90.15\nInternational Classes:\nF02D29/02; F01L1/34; F02D13/02\nView Patent Images:\nRelated US Applications:\n 20100042279 METHOD FOR TORQUE MANAGEMENT IN A HYBRID VEHICLE EQUIPPED WITH ACTIVE FUEL MANAGEMENT February, 2010 Thompson et al. 20080109129 Cooling Device, Control Method of Cooling Device, and Abnormality Specification Method May, 2008 Yanagida et al. 20090192681 Vehicle Controller and Controlling Method July, 2009 Hayashi et al. 20090048977 USER PROFILE GENERATION ARCHITECTURE FOR TARGETED CONTENT DISTRIBUTION USING EXTERNAL PROCESSES February, 2009 Aggarwal et al. 20070100521 Reporting information related to a vehicular accident May, 2007 Grae 20070055433 Slip control device and method for a vehicle March, 2007 Yamaguchi et al. 20100030442 MOVABLE BODY, TRAVEL DEVICE, AND MOVABLE BODY CONTROL METHOD February, 2010 Kosaka 20080312801 Apparatus for determining positions and movements of a brake pedal for a vehicle brake system December, 2008 Muller et al. 20070213896 Method and apparatus for determining and storing excessive vehicle speed September, 2007 Fischer 20080154451 Method of Controlling a Rail Transport System for Conveying Bulk Materials June, 2008 Dibble et al. 20090063030 System and method for traffic condition detection March, 2009 Howarter et al.\n\nPrimary Examiner:\nCOLEMAN, KEITH A\nAttorney, Agent or Firm:\nLAHIVE & COCKFIELD, LLP (ONE POST OFFICE SQUARE, BOSTON, MA, 02109-2127, US)\nClaims:\nWhat is claimed is:\n\n1. 
A control system for an internal combustion engine that is configured to be capable of changing an intake air amount by changing operating characteristics of an intake valve, using a first variable valve-actuating mechanism, and a second variable valve-actuating mechanism having a lower response speed than a response speed of the first variable valve-actuating mechanism, comprising: operation amount-detecting means for detecting an amount of operation of the second variable valve-actuating mechanism with respect to the intake valve; first control input-calculating means for calculating a first control input for controlling the first variable valve-actuating mechanism based on the detected amount of operation of the second variable valve-actuating mechanism; load parameter-detecting means for detecting a load parameter indicative of load on the engine; demanded acceleration degree parameter-calculating means for calculating a demanded acceleration degree parameter indicative of a degree of acceleration demanded of the engine; output priority-type calculation means for calculating a second control input for controlling the second variable valve-actuating mechanism, based on the detected load parameter with priority to an output of the engine; fuel economy priority-type calculation means for calculating the second control input based on the load parameter with priority to fuel economy of the engine; and selection means for selecting one of said output priority-type calculation means and said fuel economy priority-type calculation means as calculation means for calculating the second control input, according to the calculated demanded acceleration degree parameter.\n\n2. A control system as claimed in claim 1, wherein the engine is installed on a vehicle as a drive source, and the control system further comprising: drive wheel demanded torque-calculating means for calculating a drive wheel demanded torque demanded of drive wheels of the vehicle; and traveling resistance-calculating means for calculating a traveling resistance of the vehicle, wherein said demanded acceleration degree parameter-calculating means calculates the demanded acceleration degree parameter based on the calculated drive wheel demanded torque and the calculated traveling resistance.\n\n3. 
A control system as claimed in claim 2, further comprising: drive wheel torque-detecting means for detecting a torque of the drive wheels; vehicle speed-detecting means for detecting a speed of the vehicle; and acceleration-detecting means for detecting acceleration of the vehicle, and wherein said traveling resistance-calculating means comprises: reference traveling resistance-calculating means for calculating a traveling resistance to be obtained when the vehicle and a road surface on which the vehicle travels are in respective predetermined reference states, based on the detected vehicle speed, as a reference traveling resistance; reference acceleration resistance-calculating means for calculating an acceleration resistance to be obtained when the vehicle is in the predetermined reference state, based on the detected acceleration of the vehicle, as a reference acceleration resistance; and correction value-calculating means for calculating a correction value based on the detected torque of the drive wheels, the calculated reference traveling resistance, and the calculated reference acceleration resistance, and wherein the traveling resistance is calculated by correcting the reference traveling resistance using the calculated correction value.\n\n4. A control system for an internal combustion engine that is configured to be capable of changing an intake air amount by changing operating characteristics of an intake valve, using a first variable valve-actuating mechanism, and a second variable valve-actuating mechanism having a lower response speed than a response speed of the first variable valve-actuating mechanism, comprising: operation amount-detecting means for detecting an amount of operation of the second variable valve-actuating mechanism with respect to the intake valve; first control input-calculating means for calculating a first control input for controlling the first variable valve-actuating mechanism based on the detected amount of operation of the second variable valve-actuating mechanism; load parameter-detecting means for detecting a load parameter indicative of load on the engine; demanded acceleration degree parameter-calculating means for calculating a demanded acceleration degree parameter indicative of a degree of acceleration demanded of the engine; output priority-type calculation means for calculating a second control input for controlling the second variable valve-actuating mechanism, based on the detected load parameter with priority to an output of the engine; fuel economy priority-type calculation means for calculating the second control input based on the load parameter with priority to fuel economy of the engine; and second control input-calculating means for calculating the second control input by calculating a weighted average of a value calculated by said output priority-type calculation means and a value calculated by said fuel economy priority-type calculation means, using a weight dependent on the calculated demanded acceleration degree parameter.\n\n5. 
A control system as claimed in claim 4, wherein the engine is installed on a vehicle as a drive source, and the control system further comprising: drive wheel demanded torque-calculating means for calculating a drive wheel demanded torque demanded of drive wheels of the vehicle; and traveling resistance-calculating means for calculating a traveling resistance of the vehicle, wherein said demanded acceleration degree parameter-calculating means calculates the demanded acceleration degree parameter based on the calculated drive wheel demanded torque and the calculated traveling resistance.\n\n6. A control system as claimed in claim 5, further comprising: drive wheel torque-detecting means for detecting a torque of the drive wheels; vehicle speed-detecting means for detecting a speed of the vehicle; and acceleration-detecting means for detecting acceleration of the vehicle, and wherein said traveling resistance-calculating means comprises: reference traveling resistance-calculating means for calculating a traveling resistance to be obtained when the vehicle and a road surface on which the vehicle travels are in respective predetermined reference states, based on the detected vehicle speed, as a reference traveling resistance; reference acceleration resistance-calculating means for calculating an acceleration resistance to be obtained when the vehicle is in the predetermined reference state, based on the detected acceleration of the vehicle, as a reference acceleration resistance; and correction value-calculating means for calculating a correction value based on the detected torque of the drive wheels, the calculated reference traveling resistance, and the calculated reference acceleration resistance, and wherein the traveling resistance is calculated by correcting the reference traveling resistance using the calculated correction value.\n\nDescription:\n\n# BACKGROUND OF THE INVENTION\n\n1. Field of the Invention\n\nThe present invention relates to a control system for an internal combustion engine, which is configured to be capable of changing an intake air amount by changing operating characteristics of an intake valve, using a first variable valve-actuating mechanism and a second variable valve-actuating mechanism having a lower response speed than that of the first variable valve-actuating mechanism.\n\n2. Description of the Related Art\n\nConventionally, as a control system for an internal combustion engine of this kind, one disclosed in Japanese Laid-Open Patent Publication (Kokai) No. 2006-57573 is known. This combustion engine is provided with a first variable valve-actuating mechanism for changing the valve lift of an intake valve, and a second variable valve-actuating mechanism for changing the central angle of an operating angle of the intake valve (hereinafter simply referred to as “the central angle”). The first and second variable valve-actuating mechanisms use an electric motor and an oil pressure pump as drive sources thereof, respectively, and the response speed of the second variable valve-actuating mechanism, that is, the response speed of the operation amount of the second variable valve-actuating mechanism with respect to a control input therefor is lower than that of the operation amount of the first variable valve-actuating mechanism. 
In the above-described conventional control system, the intake air amount is controlled by controlling the valve lift and the central angle by the first and second variable valve-actuating mechanisms as follows:\n\nA target central angle, which is a target value of the above-described central angle, is determined by searching a target central angle map according to the load on the engine obtained e.g. by a sensor, and an actual central angle is estimated as an actual central angle equivalent value. In the target central angle map, the target central angle is set to a value which makes it possible to obtain excellent fuel economy of the engine. Further, a target valve lift, which is a target value of the above-described valve lift, is calculated based on the load on the engine and the estimated actual central angle equivalent value. Then, a control input based on the calculated target valve lift is input to the first variable valve-actuating mechanism, and a control input based on the calculated target central angle is input to the second variable valve-actuating mechanism, whereby the valve lift and the central angle are controlled to the target valve lift and the target central angle, respectively. Thus, the response delay of the operation amount of the second variable valve-actuating mechanism with respect to the control input is compensated for, to thereby accurately control the intake air amount.\n\nAs described above, in the conventional control system, the target central angle, which is set in the target central angle map to such a value as will make it possible to obtain excellent fuel economy, is used only as the target value of the central angle. As a result, in the conventional control system, when the load on the engine is suddenly increased due to demand of acceleration, the response delay of the second variable valve-actuating mechanism cannot be sufficiently compensated for, which makes it impossible to obtain a sufficient intake air amount. This makes it impossible to increase the output of the engine with high responsiveness to the load on the engine. Further, to eliminate the above inconveniences, it is considered that the target central angle is set in the target central angle map with priority given to the output but not to the fuel economy. 
In this case, however, it is impossible to obtain excellent fuel economy of the engine.\n\n# SUMMARY OF THE INVENTION\n\nIt is an object of the present invention to provide a control system for an internal combustion engine, which is capable of ensuring excellent fuel economy of the engine and enhancing the responsiveness of the output of the engine when acceleration is demanded.\n\nTo attain the above object, in a first aspect of the present invention, there is provided a control system for an internal combustion engine that is configured to be capable of changing an intake air amount by changing operating characteristics of an intake valve, using a first variable valve-actuating mechanism, and a second variable valve-actuating mechanism having a lower response speed than a response speed of the first variable valve-actuating mechanism, comprising operation amount-detecting means for detecting an amount of operation of the second variable valve-actuating mechanism with respect to the intake valve, first control input-calculating means for calculating a first control input for controlling the first variable valve-actuating mechanism based on the detected amount of operation of the second variable valve-actuating mechanism, load parameter-detecting means for detecting a load parameter indicative of load on the engine, demanded acceleration degree parameter-calculating means for calculating a demanded acceleration degree parameter indicative of a degree of acceleration demanded of the engine, output priority-type calculation means for calculating a second control input for controlling the second variable valve-actuating mechanism, based on the detected load parameter with priority to an output of the engine, fuel economy priority-type calculation means for calculating the second control input based on the load parameter with priority to fuel economy of the engine, and selection means for selecting one of the output priority-type calculation means and the fuel economy priority-type calculation means as calculation means for calculating the second control input, according to the calculated demanded acceleration degree parameter.\n\nWith the configuration of the control system according to the first aspect of the present invention, the load parameter-detecting means detects the load parameter indicative of load on the engine, and the demanded acceleration degree parameter-calculating means calculates the demanded acceleration degree parameter indicative of the degree of acceleration demanded of the engine (hereinafter referred to as “the demanded acceleration degree”). Further, as calculation means for calculating the second control input based on the detected load parameter, the control system is provided with the output priority-type calculation means for calculating the second control input with priority to the output of the engine, and the fuel economy priority-type calculation means for calculating the second control input with priority to fuel economy. 
The selection means selects between the output priority-type calculation means and the fuel economy priority-type calculation means, based on the calculated demanded acceleration degree parameter, and the selected calculation means calculates the second control input.

Therefore, when the demanded acceleration degree parameter indicates that acceleration is demanded, if the output priority-type calculation means is selected, it is possible to obtain an intake air amount suitable for satisfying the demand of acceleration, thereby making it possible to quickly increase the output of the engine to enhance responsiveness thereof. Further, when the demanded acceleration degree parameter does not indicate that acceleration is demanded, if the fuel economy priority-type calculation means is selected, it is possible to obtain excellent fuel economy of the engine when acceleration is not demanded. Thus, it is made possible to ensure excellent fuel economy of the engine and to enhance the responsiveness of the output of the engine when acceleration is demanded.

Further, the operation amount-detecting means detects the amount of operation of the second variable valve-actuating mechanism with respect to the intake valve, and the first control input-calculating means calculates the first control input for controlling the first variable valve-actuating mechanism based on the detected amount of operation of the second variable valve-actuating mechanism. As described above, the first control input for controlling the first variable valve-actuating mechanism having a higher response speed is calculated based on the actual amount of operation of the second variable valve-actuating mechanism having a lower response speed, and hence it is possible to compensate for response delay of the second variable valve-actuating mechanism by intake air amount control using the first variable valve-actuating mechanism. This makes it possible to more excellently obtain the above-described effects, i.e. the effects of ensuring excellent fuel economy and enhancing the responsiveness of the output of the engine when acceleration is demanded. It should be noted that throughout the specification, “detection” includes not only detection by sensors but also “calculation” and “estimation” by computation.

Preferably, the engine is installed on a vehicle as a drive source, and the control system further comprises drive wheel demanded torque-calculating means for calculating a drive wheel demanded torque demanded of drive wheels of the vehicle, and traveling resistance-calculating means for calculating a traveling resistance of the vehicle, wherein the demanded acceleration degree parameter-calculating means calculates the demanded acceleration degree parameter based on the calculated drive wheel demanded torque and the calculated traveling resistance.

With the configuration of this preferred embodiment, the drive wheel demanded torque demanded of the drive wheels of the vehicle, and the traveling resistance of the vehicle are calculated, and the demanded acceleration degree parameter is calculated based on the calculated drive wheel demanded torque and traveling resistance.
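By way of illustration only, the following minimal sketch shows one way such a selection could be expressed in software. The function names, the placeholder calculations standing in for the output priority-type and fuel economy priority-type calculation means, the vehicle mass, and the 0.1 m/s² threshold (a value taken from the embodiment described later) are assumptions for the example, not the claimed implementation.

```python
def demanded_acceleration(tq_tire_cmd, traveling_resistance, vehicle_mass):
    """Demanded acceleration degree parameter: surplus drive-wheel force [N]
    divided by the vehicle mass [kg], giving an acceleration in m/s^2."""
    return (tq_tire_cmd - traveling_resistance) / vehicle_mass


def second_control_input(load_parameter, g_cmd, g_threshold=0.1,
                         output_priority=lambda load: 0.8 * load,         # placeholder calculation
                         fuel_economy_priority=lambda load: 0.5 * load):  # placeholder calculation
    """Select the output priority-type or the fuel economy priority-type calculation
    according to the demanded acceleration degree parameter g_cmd."""
    if g_cmd >= g_threshold:                     # acceleration is demanded
        return output_priority(load_parameter)
    return fuel_economy_priority(load_parameter) # acceleration is not demanded


# Example: 2500 N demanded at the drive wheels against 900 N of traveling resistance
g_cmd = demanded_acceleration(2500.0, 900.0, 1500.0)   # about 1.07 m/s^2
ucain = second_control_input(load_parameter=0.6, g_cmd=g_cmd)
```

In this sketch the parameter is simply the surplus force at the drive wheels divided by the vehicle mass, which mirrors the relationship between the drive wheel demanded torque and the traveling resistance described above.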
In general, when the vehicle is traveling at a constant speed, the drive wheel demanded torque and the traveling resistance are equal to and balanced with each other, whereas during acceleration of the vehicle, the drive wheel demanded torque becomes larger than the traveling resistance, and the degree of increase of the drive wheel demanded torque becomes larger as the demanded acceleration degree is larger. As described above, the drive wheel demanded torque and the traveling resistance have close correlations with the demanded acceleration degree, and therefore according to the present invention, it is possible to properly calculate the demanded acceleration degree parameter.\n\nMore preferably, the control system further comprises drive wheel torque-detecting means for detecting a torque of the drive wheels, vehicle speed-detecting means for detecting a speed of the vehicle, and acceleration-detecting means for detecting acceleration of the vehicle, and the traveling resistance-calculating means comprises reference traveling resistance-calculating means for calculating a traveling resistance to be obtained when the vehicle and a road surface on which the vehicle travels are in respective predetermined reference states, based on the detected vehicle speed, as a reference traveling resistance, reference acceleration resistance-calculating means for calculating an acceleration resistance to be obtained when the vehicle is in the predetermined reference state, based on the detected acceleration of the vehicle, as a reference acceleration resistance, and correction value-calculating means for calculating a correction value based on the detected torque of the drive wheels, the calculated reference traveling resistance, and the calculated reference acceleration resistance, and wherein the traveling resistance is calculated by correcting the reference traveling resistance using the calculated correction value.\n\nWith the configuration of this preferred embodiment, the traveling resistance is calculated as follows: The torque of the drive wheels is detected, and a traveling resistance to be obtained when the vehicle and the road surface on which the vehicle travels are in the respective predetermined reference states is calculated based on the detected vehicle speed, as a reference traveling resistance. Further, an acceleration resistance to be obtained when the vehicle is in the predetermined reference state is calculated based on the detected acceleration of the vehicle, as a reference acceleration resistance. Furthermore, a correction value is calculated based on the torque of the drive wheels, the reference traveling resistance, and the reference acceleration resistance, and the traveling resistance is calculated by correcting the reference traveling resistance using the calculated correction value.\n\nIn general, the traveling resistance is the sum of a rolling resistance, an air resistance, and a gradient resistance, and changes depending on the states (e.g. weight and front projection area) of the vehicle, and states (e.g. irregularities and gradient) of a road surface on which the vehicle travels. Therefore, unless the vehicle and a road surface are in the above-described respective reference states, actual traveling resistance is different from the above-mentioned reference traveling resistance. 
Further, normally, the torque of the drive wheels corresponds to the sum of an actual traveling resistance and an actual acceleration resistance (hereinafter referred to as “the total actual traveling resistance”) during acceleration of the vehicle, and corresponds to the actual traveling resistance when the vehicle is traveling at a constant speed except during acceleration of the vehicle. Further, during travel at a constant speed, the reference acceleration resistance becomes equal to 0. From the above, the difference between the torque of the drive wheels and the sum of the reference traveling resistance and the reference acceleration resistance (hereinafter referred to as “the total reference traveling resistance”) corresponds to the difference between the total actual traveling resistance and the total reference traveling resistance during acceleration of the vehicle, and corresponds to the difference between the actual traveling resistance and the reference traveling resistance during travel at a constant speed.\n\nTherefore, according to the present invention, the reference traveling resistance is corrected by the correction value calculated based on the torque of the drive wheels, the reference traveling resistance, and the reference acceleration resistance, whereby it is possible to accurately calculate the actual traveling resistance with reference to the reference traveling resistance. Further, as described above, it is possible to calculate the traveling resistance only by computations, without requiring values obtained by detections of the actual weight of the vehicle and the gradients of a road surface. This makes it possible to dispense with sensors for detecting the above values, thereby making it possible to reduce the manufacturing costs of the control system.\n\nTo attain the above object, in a second aspect of the present invention, there is provided a control system for an internal combustion engine that is configured to be capable of changing an intake air amount by changing operating characteristics of an intake valve, using a first variable valve-actuating mechanism, and a second variable valve-actuating mechanism having a lower response speed than a response speed of the first variable valve-actuating mechanism, comprising operation amount-detecting means for detecting an amount of operation of the second variable valve-actuating mechanism with respect to the intake valve, first control input-calculating means for calculating a first control input for controlling the first variable valve-actuating mechanism based on the detected amount of operation of the second variable valve-actuating mechanism, load parameter-detecting means for detecting a load parameter indicative of load on the engine, demanded acceleration degree parameter-calculating means for calculating a demanded acceleration degree parameter indicative of a degree of acceleration demanded of the engine, output priority-type calculation means for calculating a second control input for controlling the second variable valve-actuating mechanism, based on the detected load parameter with priority to an output of the engine, fuel economy priority-type calculation means for calculating the second control input based on the load parameter with priority to fuel economy of the engine, and second control input-calculating means for calculating the second control input by calculating a weighted average of a value calculated by the output priority-type calculation means and a value calculated by the fuel economy 
priority-type calculation means, using a weight dependent on the calculated demanded acceleration degree parameter.\n\nWith the configuration of the control system according to the second aspect of the present invention, similarly to the first aspect of the present invention, the load parameter and the demanded acceleration degree parameter are obtained. Further, according to the load parameter, the weighted average of the value calculated by the output priority-type calculation means (hereinafter referred to as “the output priority-type calculated value”) and the value calculated by the fuel economy priority-type calculation means (hereinafter referred to as “the fuel economy priority-type calculated value”) is calculated using a weight dependent on the demanded acceleration degree parameter, whereby the second control input is calculated.\n\nAs described above, the second control input is calculated by calculating the weighted average of the output priority-type calculated value and the fuel economy priority-type calculated value using the weight dependent on the demanded acceleration degree parameter, so that it is possible to calculate the second control input according to the demanded acceleration degree in a fine-grained manner. Therefore, for example, when the demanded acceleration degree parameter is indicating that acceleration is demanded, by increasing the weight of the output priority-type calculated value with respect to the second control input, it is possible to enhance the responsiveness of the output of the engine when acceleration is demanded, similarly to the first aspect of the present invention. Further, when the demanded acceleration degree parameter is not indicating that acceleration is demanded, by increasing the weight of the fuel economy priority-type calculated value, it is possible to obtain excellent fuel economy of the engine when acceleration is not demanded, similarly to the first aspect of the present invention. Thus, similarly to the first aspect of the present invention, it is possible to ensure excellent fuel economy of the engine and at the same time enhance the responsiveness of the output of the engine when acceleration is demanded.\n\nFurther, by increasing or decreasing the weight of the output priority-type calculated value according to the magnitude of the demanded acceleration degree indicated by the demanded acceleration degree parameter, differently from the first aspect of the present invention, it is possible to obtain an appropriate second control input that matches the magnitude of the demanded acceleration degree. This makes it possible to ensure excellent fuel economy and enhance the responsiveness of the output of the engine when acceleration is demanded, in a well balanced manner. Further, similarly to the first aspect of the present invention, the amount of operation of the second variable valve-actuating mechanism with respect to the intake valve is detected, and the first control input for controlling the first variable valve-actuating mechanism is calculated based on the detected amount of operation of the second variable valve-actuating mechanism. 
This makes it possible to compensate for the response delay of the second variable valve-actuating mechanism by the intake air amount control using the first variable valve-actuating mechanism, thereby making it possible to more excellently obtain the above-described effects.\n\nPreferably, the engine is installed on a vehicle as a drive source, and the control system further comprises drive wheel demanded torque-calculating means for calculating a drive wheel demanded torque demanded of drive wheels of the vehicle, and traveling resistance-calculating means for calculating a traveling resistance of the vehicle, wherein the demanded acceleration degree parameter-calculating means calculates the demanded acceleration degree parameter based on the calculated drive wheel demanded torque and the calculated traveling resistance.\n\nMore preferably, the control system further comprises drive wheel torque-detecting means for detecting a torque of the drive wheels, vehicle speed-detecting means for detecting a speed of the vehicle, and acceleration-detecting means for detecting acceleration of the vehicle, and the traveling resistance-calculating means comprises reference traveling resistance-calculating means for calculating a traveling resistance to be obtained when the vehicle and a road surface on which the vehicle travels are in respective predetermined reference states, based on the detected vehicle speed, as a reference traveling resistance, reference acceleration resistance-calculating means for calculating an acceleration resistance to be obtained when the vehicle is in the predetermined reference state, based on the detected acceleration of the vehicle, as a reference acceleration resistance, and correction value-calculating means for calculating a correction value based on the detected torque of the drive wheels, the calculated reference traveling resistance, and the calculated reference acceleration resistance, wherein the traveling resistance is calculated by correcting the reference traveling resistance using the calculated correction value.\n\nWith the configurations of these preferred embodiments, it is possible to obtain the same advantageous effects as provided by the respective corresponding preferred embodiments of the first aspect of the present invention.\n\nThe above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings.\n\n# BRIEF DESCRIPTION OF THE DRAWINGS\n\nFIG. 1 is a schematic view showing an internal combustion engine to which is applied a control system according to the present embodiment, together with a vehicle having the engine mounted thereon;\n\nFIG. 2 is a schematic view of the engine appearing in FIG. 1;\n\nFIG. 3 is a schematic-block diagram of the control system;\n\nFIG. 4 is a schematic cross-sectional view of a variable intake valve-actuating mechanism and an exhaust valve-actuating mechanism of the engine;\n\nFIG. 5 is a schematic cross-sectional view of a variable valve lift mechanism of the variable intake valve-actuating mechanism;\n\nFIG. 6A is a diagram showing a lift actuator in a state in which a short arm thereof is in a maximum lift position;\n\nFIG. 6B is a diagram showing the lift actuator in a state in which the short arm thereof is in a minimum lift position;\n\nFIG. 
7A is a diagram showing an intake valve placed in an open state when a lower link of the variable valve lift mechanism is in a maximum lift position;\n\nFIG. 7B is a diagram showing the intake valve placed in the open state when the lower link of the variable valve lift mechanism is in a minimum lift position;\n\nFIG. 8 is a diagram showing a valve lift curve (solid line) of the intake valve obtained when the lower link of the variable valve lift mechanism is in the maximum lift position, and a valve lift curve (two-dot chain line) of the intake valve obtained when the lower link of the variable valve lift mechanism is in the minimum lift position;\n\nFIG. 9 is a schematic diagram of a variable cam phase mechanism;\n\nFIG. 10 is a diagram showing a valve lift curve (solid line) obtained when a cam phase is set to a most retarded value by the variable cam phase mechanism, and a valve lift curve (two-dot chain line) obtained when the cam phase is set to a most advanced value by the variable cam phase mechanism;\n\nFIG. 11 is a flowchart of a process for calculating an engine demanded output;\n\nFIG. 12 is a diagram showing an example of a map for use in calculating the engine demanded output;\n\nFIG. 13 is a flowchart of a process for calculating an acceleration demand reference value;\n\nFIG. 14 is a flowchart of a process for calculating a traveling resistance;\n\nFIG. 15 is a flowchart of a process for calculating a phase control input;\n\nFIG. 16 is a diagram showing an example of a fuel economy map;\n\nFIG. 17 is a diagram showing an example of an output map;\n\nFIG. 18 is a flowchart of a process for calculating a lift control input;\n\nFIG. 19 is a diagram showing an example of a Liftincmd map;\n\nFIG. 20A is a view showing results of control in which a target cam phase is calculated using only the fuel economy map;\n\nFIG. 20B is a view showing an example of the results of control by the control system according to the present embodiment;\n\nFIG. 21 is a view showing a fuel economy ratio obtained through control by the control system according to the present embodiment together with a fuel economy ratio obtained using the fuel economy map alone, and a fuel economy ratio obtained using the output map alone;\n\nFIG. 22 is a flowchart of a variation of the process for calculating the acceleration demand reference value;\n\nFIG. 23 is a view showing an example of changes in demanded acceleration and an acceleration demand reference value;\n\nFIG. 24 is a flowchart of a process for calculating an acceleration demand reference value, according to a second embodiment of the present invention;\n\nFIG. 25 is a view showing an example of a G_juda table which is used in the FIG. 24 process; and\n\nFIG. 26 is a flowchart of a process for calculating a phase control input, according to the second embodiment.\n\n# DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS\n\nThe invention will now be described in detail with reference to the drawings showing preferred embodiments thereof. FIG. 1 schematically shows an internal combustion engine (hereinafter simply referred to as “the engine”) 3 to which is applied a control system 1 according to the present embodiment, and a vehicle V having the engine 3 mounted thereon as a drive source. The vehicle V has a transmission 80 installed thereon. The transmission 80 is of a manual type for transmitting power from the engine 3 to drive wheels W and W while changing the rotational speed at one of a plurality of predetermined transmission ratios. 
Further, the transmission 80 is configured such that it selectively sets six gear positions formed by first to fifth speed gear positions and a reverse gear position, and the operation of the transmission 80 is controlled by an ECU 2, described hereinafter, of the control system 1 according to the shift position of a shift lever (not shown) operated by a driver (see FIG. 3).\n\nAs shown in FIGS. 2 and 4, the engine 3 is an in-line four-cylinder DOHC gasoline engine having four cylinders 3a and pistons 3b (only one of which is shown). Further, the engine 3 includes an intake valve 4 and an exhaust valve 7 for opening and closing an intake port and an exhaust port of each cylinder 3a, respectively, a variable intake valve-actuating mechanism 40 having an intake camshaft 5 and intake cams 6 for actuating the intake valves 4, and an exhaust valve-actuating mechanism 30 having an exhaust camshaft 8 and exhaust cams 9 for actuating the exhaust valves 7, fuel injection valves 10, and spark plugs 11 (see FIG. 3).\n\nThe intake valve 4 has a stem 4a thereof slidably fitted in a guide 4b. The guide 4b is rigidly fixed to a cylinder head 3c. The intake valve 4 includes upper and lower spring sheets 4c and 4d, and a valve spring 4e disposed therebetween (see FIG. 5), and is urged by the valve spring 4e in the valve-closing direction.\n\nThe intake camshaft 5 and the exhaust camshaft 8 are rotatably mounted through the cylinder head 3c via respective holders, not shown. Further, an intake sprocket, not shown, is coaxially mounted on one end of the intake camshaft 5 in a rotatable manner. The intake sprocket is connected to a crankshaft 3d by a timing belt, not shown, and to the intake camshaft 5 via a variable cam phase mechanism 70, described hereinafter. With the above configuration, the intake camshaft 5 performs one rotation per two rotations of the crankshaft 3d. The intake cam 6 is integrally formed on the intake camshaft 5 for each cylinder 3a.\n\nThe variable intake valve-actuating mechanism 40 is provided for actuating the intake valve 4 of each cylinder 3a so as to open and close the same, in accordance with rotation of the intake cam 6, and continuously changing the lift and valve timing of the intake valve 4, which will be described in detail hereinafter. It should be noted that in the present embodiment, the lift of the intake valve 4 (hereinafter referred to as “the valve lift”) Liftin represents the maximum stroke of the intake valve 4.\n\nThe exhaust valve 7 has a stem 7a thereof slidably fitted in a guide 7b. The guide 7b is rigidly fixed to the cylinder head 3c. Further, the exhaust valve 7 is provided with upper and lower spring sheets 7c and 7d, and a valve spring 7e disposed therebetween, and is urged by the valve spring 7e in the valve-closing direction.\n\nThe exhaust camshaft 8 has an exhaust sprocket, not shown, integrally formed therewith, and is connected to the crankshaft 3d by the exhaust sprocket and the timing belt, not shown, whereby the exhaust camshaft 8 performs one rotation per two rotations of the crankshaft 3d. The exhaust cam 9 is integrally formed on the exhaust camshaft 8 for each cylinder 3a.\n\nThe exhaust valve-actuating mechanism 30 includes rocker arms 31. 
Each rocker arm 31 is pivotally moved in accordance with rotation of the associated exhaust cam 9 to thereby actuate the exhaust valve 7 for opening and closing the same against the urging force of the valve spring 7e.

The fuel injection valve 10 is provided for each cylinder 3a, and is mounted through the cylinder head 3c in a tilted state such that fuel is directly injected into a combustion chamber. That is, the engine 3 is configured as a direct injection engine. Further, the valve-opening time period and the valve-opening timing of the fuel injection valve 10 are controlled by the ECU 2.

The spark plugs 11 as well are provided in association with the respective cylinders 3a, and are mounted through the cylinder head 3c. The ignition timing of each spark plug 11 is also controlled by the ECU 2.

The engine 3 is provided with a crank angle sensor 20. The crank angle sensor 20 is comprised of a magnet rotor and an MRE (magnetic resistance element) pickup, and delivers a CRK signal and a TDC signal, which are both pulse signals, to the ECU 2 in accordance with rotation of the crankshaft 3d.

The CRK signal is delivered whenever the crankshaft 3d rotates through a predetermined angle (e.g. 10°). The ECU 2 calculates the rotational speed NE of the engine 3 (hereinafter referred to as “the engine speed NE”) based on the CRK signal. The TDC signal indicates that each piston 3b in the associated cylinder 3a is in a predetermined crank angle position slightly before the TDC position at the start of the intake stroke, and in the illustrated example of the four-cylinder type engine, the TDC signal is delivered whenever the crankshaft 3d rotates through a predetermined crank angle of 180°.

Further, the engine 3 has an intake pipe 12 provided with no throttle valve mechanism. An intake passage 12a through the intake pipe 12 is formed to have a large diameter, whereby the engine 3 is configured such that the air flow resistance is smaller than that of an ordinary engine. Further, the intake pipe 12 is provided with an air flow sensor 21. The air flow sensor 21 is formed by a hot-wire air flow meter, and detects an amount QA of intake air drawn into the engine 3 to deliver a signal indicative of the sensed intake air amount QA to the ECU 2.

Next, the aforementioned variable intake valve-actuating mechanism 40 will be described with reference to FIGS. 5 to 8. The variable intake valve-actuating mechanism 40 is comprised of the intake camshaft 5, the intake cams 6, a variable valve lift mechanism 50 (first variable valve-actuating mechanism), and the variable cam phase mechanism 70 (second variable valve-actuating mechanism).

The variable valve lift mechanism 50 is provided for actuating the intake valves 4 to open and close the same, in accordance with rotation of the intake cams 6, and continuously changing the valve lift Liftin between a predetermined maximum value Liftin_H and a predetermined minimum value Liftin_L. The variable valve lift mechanism 50 is comprised of rocker arm mechanisms 51 of a four joint link type, provided for the respective cylinders 3a, and a lift actuator 60 simultaneously actuating these rocker arm mechanisms 51.

Each rocker arm mechanism 51 is comprised of a rocker arm 52, and upper and lower links 53 and 54.
The upper link 53 has one end pivotally mounted to a rocker arm shaft 56 fixed to the cylinder head 3c, and the other end pivotally mounted to an upper end of the rocker arm 52 by an upper pin 55.

Further, a roller 57 is pivotally disposed on the upper pin 55 of the rocker arm 52. The roller 57 is in contact with a cam surface of the intake cam 6. As the intake cam 6 rotates, the roller 57 rolls on the intake cam 6 while being guided by the cam surface of the intake cam 6. As a result, the rocker arm 52 is vertically driven, and the upper link 53 is pivotally moved about the rocker arm shaft 56.

Furthermore, an adjusting bolt 52a is mounted to an end of the rocker arm 52 toward the intake valve 4. The adjusting bolt 52a is in contact with the stem 4a of the intake valve 4, and when the rocker arm 52 is vertically moved in accordance with rotation of the intake cam 6, the adjusting bolt 52a vertically drives the stem 4a to open and close the intake valve 4, against the urging force of the valve spring 4e.

Further, the lower link 54 has one end pivotally mounted to a lower end of the rocker arm 52 by a lower pin 58, and the other end of the lower link 54 has a connection shaft 59 pivotally mounted thereto. The lower link 54 is connected to a short arm 65, described hereinafter, of the lift actuator 60 by the connection shaft 59.

As shown in FIG. 6, the lift actuator 60, which is driven by the ECU 2, is comprised of a motor 61, a nut 62, a link 63, a long arm 64, and the short arm 65. The motor 61 is connected to the ECU 2, and disposed outside a head cover 3g of the engine 3. The rotational shaft of the motor 61 is a screw shaft 61a formed with a male screw, and the nut 62 is screwed onto the screw shaft 61a. The link 63 has one end pivotally mounted to the nut 62 by a pin 63a, and the other end pivotally mounted to one end of the long arm 64 by a pin 63b. Further, the other end of the long arm 64 is attached to one end of the short arm 65 by a pivot shaft 66. The pivot shaft 66 is circular in cross section, and is pivotally supported by the head cover 3g of the engine 3. The long arm 64 and the short arm 65 are pivotally moved about the pivot shaft 66 in unison with the pivot shaft 66.

Furthermore, the aforementioned connection shaft 59 pivotally extends through an end of the short arm 65 on a side opposite to the pivot shaft 66, whereby the short arm 65 is connected to the lower link 54 by the connection shaft 59.

Next, a description will be given of the operation of the variable valve lift mechanism 50 configured as above. In the variable valve lift mechanism 50, when a lift control input Uliftin (first control input), described hereinafter, is input from the ECU 2 to the lift actuator 60, the screw shaft 61a of the motor 61 rotates, and the nut 62 is moved in accordance with the rotation of the screw shaft 61a, whereby the long arm 64 and the short arm 65 are pivotally moved about the pivot shaft 66, and in accordance with the motion of the connection shaft 59 caused by the pivotal motion of the short arm 65, the lower link 54 of the rocker arm mechanism 51 is pivotally moved about the lower pin 58. That is, the lower link 54 is driven by the lift actuator 60.

During the above process, under the control of the ECU 2, the range of pivotal motion of the short arm 65 is restricted between the maximum lift position shown in FIG. 6A and the minimum lift position shown in FIG.
6B, whereby the range of pivotal motion of the lower link 54 is also restricted between the maximum lift position indicated by a solid line in FIG. 5 and the minimum lift position indicated by a two-dot chain line in FIG. 5.

The rocker arm mechanism 51 is configured such that when the lower link 54 is in the maximum lift position, the distance between the center of the upper pin 55 and the center of the lower pin 58 becomes longer than the distance between the center of the rocker arm shaft 56 and the center of the connection shaft 59, whereby as shown in FIG. 7A, when the intake cam 6 rotates, the amount of movement of the adjusting bolt 52a becomes larger than the amount of movement of a contact point where the intake cam 6 and the roller 57 are in contact with each other.

On the other hand, the rocker arm mechanism 51 is configured such that when the lower link 54 is in the minimum lift position, the distance between the center of the upper pin 55 and the center of the lower pin 58 becomes shorter than the distance between the center of the rocker arm shaft 56 and the center of the connection shaft 59, whereby as shown in FIG. 7B, when the intake cam 6 rotates, the amount of movement of the adjusting bolt 52a becomes smaller than the amount of movement of the contact point where the intake cam 6 and the roller 57 are in contact with each other.

For the above reason, when the lower link 54 is in the maximum lift position, the intake valve 4 is opened with a larger valve lift Liftin than when the lower link 54 is in the minimum lift position. More specifically, during rotation of the intake cam 6, when the lower link 54 is in the maximum lift position, the intake valve 4 is opened according to a valve lift curve indicated by a solid line in FIG. 8, and the valve lift Liftin assumes its maximum value Liftin_H. On the other hand, when the lower link 54 is in the minimum lift position, the intake valve 4 is opened according to a valve lift curve indicated by a two-dot chain line in FIG. 8, and the valve lift Liftin assumes its minimum value Liftin_L.

As described above, in the variable valve lift mechanism 50, the lower link 54 is pivotally moved by the lift actuator 60 between the maximum lift position and the minimum lift position, whereby it is possible to continuously change the valve lift Liftin between the maximum value Liftin_H and the minimum value Liftin_L. Further, as described above, the variable valve lift mechanism 50 uses the motor 61 as a drive source thereof, and hence the response speed of the pivot angle of the short arm 65 with respect to the lift control input Uliftin is relatively high.

The engine 3 is provided with a pivot angle sensor 22 (see FIG. 3). The pivot angle sensor 22 detects a pivot angle θlift of the short arm 65 and delivers a signal indicative of the detected pivot angle of the short arm 65 to the ECU 2. The pivot angle θlift of the short arm 65 indicates a position of the short arm 65 between the maximum lift position and the minimum lift position. The ECU 2 calculates the valve lift Liftin based on the pivot angle θlift.

Next, the aforementioned variable cam phase mechanism 70 will be described with reference to FIGS. 9 and 10.
The variable cam phase mechanism 70 is provided for continuously advancing or retarding the relative phase Cain of the intake camshaft 5 with respect to the crankshaft 3d (hereinafter referred to as “the cam phase Cain”) to thereby continuously change the valve timing of the intake valve 4, and is mounted on an intake sprocket-side end of the intake camshaft 5. As shown in FIG. 9, the variable cam phase mechanism 70 includes a housing 71, a three-bladed vane 72, an oil pressure pump 73, and a solenoid valve mechanism 74.

The housing 71 is integrally formed with the intake sprocket on the intake camshaft 5, and divided by three partition walls 71a formed at equal intervals. The vane 72 is coaxially mounted on the end of the intake camshaft 5 where the intake sprocket is mounted, such that the blades of the vane 72 radially extend outward from the intake camshaft 5, and are rotatably housed in the housing 71. Further, the housing 71 has three advance chambers 75 and three retard chambers 76 each formed between one of the partition walls 71a and one of the three blades of the vane 72.

The oil pressure pump 73 is of a mechanically driven type which is connected to the crankshaft 3d. As the crankshaft 3d rotates, the oil pressure pump 73 draws lubricating oil stored in an oil pan 3e of the engine 3 via a lower part of an oil passage 77c, for pressurization, and supplies the pressurized oil to the solenoid valve mechanism 74 via the remaining part of the oil passage 77c.

The solenoid valve mechanism 74 is formed by combining a spool valve mechanism 74a and a solenoid 74b, and is connected to the advance chambers 75 and the retard chambers 76 via an advance oil passage 77a and a retard oil passage 77b such that oil pressure supplied from the oil pressure pump 73 is delivered to the advance chambers 75 and the retard chambers 76 as advance oil pressure Pad and retard oil pressure Prt, respectively. The solenoid 74b of the solenoid valve mechanism 74 is connected to the ECU 2. When a phase control input Ucain (second control input), described hereinafter, is input from the ECU 2, the solenoid 74b moves a spool valve element of the spool valve mechanism 74a within a predetermined range of motion according to the phase control input Ucain to thereby change both the advance oil pressure Pad and the retard oil pressure Prt.

In the variable cam phase mechanism 70 configured as above, during operation of the oil pressure pump 73, the solenoid valve mechanism 74 is operated according to the phase control input Ucain, to supply the advance oil pressure Pad to the advance chambers 75 and the retard oil pressure Prt to the retard chambers 76, whereby the relative phase of the vane 72 with respect to the housing 71 is changed toward an advanced side or a retarded side. As a result, the cam phase Cain described above is continuously changed between a most retarded value Cainrt (value corresponding to a cam angle of e.g. 0°) and a most advanced value Cainad (value corresponding to a cam angle of e.g. 55°), whereby the valve timing of the intake valves 4 is continuously changed between most retarded timing indicated by a solid line in FIG. 10 and most advanced timing indicated by a two-dot chain line in FIG. 10.
Further, as described above, the variable cam phase mechanism 70 uses the oil pressure pump 73 as a drive source thereof, and hence the response speed of the cam phase Cain with respect to the phase control input Ucain is lower than the response speed of the pivot angle θlift of the short arm 65 with respect to the lift control input Uliftin of the variable valve lift mechanism 50.

As described above, in the variable intake valve-actuating mechanism 40 of the present embodiment, the variable valve lift mechanism 50 continuously changes the valve lift Liftin, and the variable cam phase mechanism 70 continuously changes the cam phase Cain, i.e. the valve timing of the intake valves 4 between the most retarded timing and the most advanced timing, described hereinbefore. Further, as described hereinafter, the ECU 2 controls the valve lift Liftin and the cam phase Cain via the variable valve lift mechanism 50 and the variable cam phase mechanism 70.

On the other hand, a cam angle sensor 23 (see FIG. 3) (operation amount-detecting means) is disposed at an end of the intake camshaft 5 opposite from the variable cam phase mechanism 70. The cam angle sensor 23 is implemented e.g. by a magnet rotor and an MRE pickup, for delivering a CAM signal, which is a pulse signal, to the ECU 2 along with rotation of the intake camshaft 5. Each pulse of the CAM signal is generated whenever the intake camshaft 5 rotates through a predetermined cam angle (e.g. 1°). The ECU 2 calculates the cam phase Cain based on the CAM signal and the CRK signal, described above.

Further, a LAF sensor 24 is inserted into the exhaust pipe 13 of the engine 3. The LAF sensor 24 linearly detects the concentration of oxygen in exhaust gases, and delivers a signal indicative of the sensed oxygen concentration to the ECU 2. The ECU 2 calculates an actual air-fuel ratio A/F indicative of the air-fuel ratio of a mixture burned in the engine 3, based on the signal from the LAF sensor 24. It should be noted that the actual air-fuel ratio A/F is calculated as an equivalent ratio.

Further, an accelerator pedal opening sensor 25 detects the amount AP of operation (stepped-on amount) of an accelerator pedal, not shown (hereinafter referred to as “the accelerator pedal opening AP”), and delivers a signal indicative of the sensed accelerator pedal opening AP to the ECU 2. A vehicle speed sensor 26 (vehicle speed-detecting means) detects a vehicle speed VP, which is a traveling speed of the vehicle V, and delivers a signal indicative of the sensed vehicle speed VP to the ECU 2. Furthermore, the transmission 80 has a gear position sensor 27 mounted thereon. The gear position sensor 27 detects a gear position of the transmission 80, and delivers a signal indicative of a shift position NGR corresponding to the sensed gear position to the ECU 2. The ECU 2 calculates a transmission ratio G_ratio of the transmission 80 based on the shift position NGR.

The ECU 2 is implemented by a microcomputer comprised of a CPU, a RAM, a ROM and an I/O interface (none of which are specifically shown).
The ECU 2 controls the operations of the engine 3 and the transmission 80 based on the signals from the aforementioned sensors 20 to 27.

It should be noted that in the present embodiment, the ECU 2 corresponds to first control input-calculating means, load parameter-detecting means, demanded acceleration degree parameter-calculating means, output priority-type calculation means, fuel economy priority-type calculation means, selection means, second control input-calculating means, drive wheel demanded torque-calculating means, traveling resistance-calculating means, drive wheel torque-detecting means, acceleration-detecting means, reference traveling resistance-calculating means, reference acceleration resistance-calculating means, and correction value-calculating means.

Next, a description will be given of the outline of a control process executed by the ECU 2. First, the ECU 2 calculates an output Bmep_cmd (load parameter) demanded of the engine 3 (hereinafter referred to as “the engine demanded output Bmep_cmd”) (see FIG. 11), and an acceleration demand reference value G_jud indicative of the presence or absence of a demand of acceleration (see FIG. 13). Further, the ECU 2 calculates the phase control input Ucain for controlling the variable cam phase mechanism 70 e.g. based on the calculated engine demanded output Bmep_cmd and acceleration demand reference value G_jud (see FIG. 15), and calculates the lift control input Uliftin for controlling the variable valve lift mechanism 50, based on the calculated cam phase Cain (see FIG. 18). It should be noted that all the processes described hereinafter are executed at a predetermined control period T (e.g. 10 msec).

In the FIG. 11 process for calculating the engine demanded output Bmep_cmd, in a step 1 (shown as S1 in abbreviated form in FIG. 11; the following steps are also shown in abbreviated form), the engine demanded output Bmep_cmd is calculated by searching a map shown in FIG. 12 according to the engine speed NE and the accelerator pedal opening AP. In this map, the engine demanded output Bmep_cmd is set to a larger value as the engine speed NE is higher and as the accelerator pedal opening AP is larger. It should be noted that the engine demanded output Bmep_cmd is calculated as a net average effective pressure.

Next, the process for calculating the above-described acceleration demand reference value G_jud will be described with reference to FIG. 13. First, in a step 11, a torque Tq_eng_cmd demanded of the engine 3 (hereinafter referred to as “the engine demanded torque Tq_eng_cmd”) is calculated using the engine demanded output Bmep_cmd calculated in the step 1, by the following equation (1):

Tq_eng_cmd=(Bmep_cmd×DI)/(Stroke×π) (1)

wherein DI represents the displacement of the engine 3, Stroke represents the number of strokes of the engine 3, which is equal to 4 in the present embodiment, and π represents the ratio of the circumference of a circle to its diameter.

Then, in a step 12, a torque Tq_tire_cmd demanded of the drive wheels W and W (hereinafter referred to as “the drive wheel demanded torque Tq_tire_cmd”) is calculated e.g. using the calculated engine demanded torque Tq_eng_cmd and the aforementioned transmission ratio G_ratio by the following equation (2):

Tq_tire_cmd=Tq_eng_cmd×η×G_ratio×F_ratio/Tire_R (2)

wherein η represents a predetermined power loss e.g.
in the transmission 80, F_ratio represents a reduction ratio of a final speed reduction gear, not shown, and Tire_R represents the radius of the drive wheels W and W. It should be noted that the drive wheel demanded torque Tq_tire_cmd is represented by a force (N).

Next, a traveling resistance RL of the vehicle is calculated (step 13). FIG. 14 shows the process for calculating the traveling resistance RL. First, in a step 21, an actual output Bmep_act of the engine 3 (hereinafter referred to as “the engine output Bmep_act”) is calculated. The engine output Bmep_act is calculated by searching a map, not shown, according to the intake air amount QA, the air-fuel ratio A/F, and the ignition timing. This map is formed by empirically determining the relationship between the engine output Bmep_act, the intake air amount QA, the air-fuel ratio A/F, and the ignition timing, and then mapping the relationship. It should be noted that the engine output Bmep_act is calculated as a net average effective pressure.

Then, in a step 22, an actual torque Tq_eng_act of the engine 3 (hereinafter referred to as “the engine torque Tq_eng_act”) is calculated using the calculated engine output Bmep_act and the aforementioned engine displacement DI and stroke number Stroke, by the following equation (3):

Tq_eng_act=(Bmep_act×DI)/(Stroke×π) (3)

Next, in a step 23, an actual torque Tq_tire_act of the drive wheels W and W (hereinafter referred to as “the drive wheel torque Tq_tire_act”) is calculated using the calculated engine torque Tq_eng_act, the transmission ratio G_ratio, the power loss η, the reduction ratio F_ratio of the final speed reduction gear, and the radius Tire_R of the drive wheels W and W, by the following equation (4):

Tq_tire_act=Tq_eng_act×η×G_ratio×F_ratio/Tire_R (4)

Then, in a step 24, a reference rolling resistance Roll_r is calculated by the following equation (5):

Roll_r=μR×Weight (5)

wherein μR represents a value obtained by multiplying a friction coefficient obtained when the vehicle V travels on an asphalt road surface by the gravitational acceleration (hereinafter referred to as “the reference friction coefficient”), and is set to a predetermined value.
Further, Weight represents the weight of the vehicle V with one occupant and no baggage loaded thereon (hereinafter referred to as “the reference vehicle weight”), and is set to a predetermined value.

Next, in a step 25, a reference air resistance Air_r is calculated using the vehicle speed VP by the following equation (6):

Air_r=μA×A×VP² (6)

wherein μA and A represent an air resistance coefficient (hereinafter referred to as “the reference air resistance coefficient”) and a front projection area of the vehicle V (hereinafter referred to as “the reference front projection area”), which are obtained when the vehicle V has no spoiler or carrier mounted thereon, respectively.

Then, in a step 26, the acceleration Acc of the vehicle V (hereinafter referred to as “the vehicle acceleration Acc”) is calculated using the current value VP of the vehicle speed and the immediately preceding value VPZ of the same (vehicle speed VP obtained in the immediately preceding control timing), and the control period T of the present process, by the following equation (7):

Acc=(VP−VPZ)/T (7)

Next, a reference acceleration resistance Acc_r is calculated by multiplying the calculated vehicle acceleration Acc by the reference vehicle weight Weight (step 27).

The reference acceleration resistance Acc_r corresponds to the acceleration resistance obtained when the weight of the vehicle V is equal to the reference vehicle weight Weight.

Then, the sum of the reference rolling resistance Roll_r obtained in the step 24 and the reference air resistance Air_r obtained in the step 25 is calculated as a reference traveling resistance RL_base (step 28). The reference traveling resistance RL_base corresponds to the traveling resistance obtained when the weight of the vehicle V is equal to the reference vehicle weight Weight, the friction coefficient of the road surface is equal to the reference friction coefficient μR, the air resistance coefficient is equal to the reference air resistance coefficient μA, the front projection area of the vehicle V is equal to the reference front projection area A, and the road surface is horizontal (gradient=0, i.e. gradient resistance=0).

Next, a total reference traveling resistance ALL_RL is calculated by adding the reference acceleration resistance Acc_r calculated in the above-described step 27 to the calculated reference traveling resistance RL_base (step 29).

Then, using the calculated total reference traveling resistance ALL_RL and the drive wheel torque Tq_tire_act calculated in the step 23, a correction value RL_cor for correcting the reference traveling resistance RL_base is calculated as follows (step 30).

First, the difference dRL (Tq_tire_act−ALL_RL) between the drive wheel torque Tq_tire_act and the total reference traveling resistance ALL_RL is calculated. Then, the correction value RL_cor is calculated using the calculated difference dRL by the following equation (8):

RL_cor=α×dRL+(1−α)×RL_corZ (8)

wherein RL_corZ represents the immediately preceding value of the correction value, and α represents a predetermined averaging coefficient, which is set to 0.03, for example.

As described above, the correction value RL_cor is calculated as the weighted average of the difference dRL and the immediately preceding value RL_corZ of the correction value.
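Purely for illustration, a minimal sketch of this part of the FIG. 14 process is given below; it covers steps 24 to 30 and anticipates step 31 and the deceleration guard described below. The numeric constants, the function name, and the use of a module-level variable for the immediately preceding value RL_corZ are assumptions for the example, and the reference acceleration resistance is taken as the reference vehicle weight multiplied by the vehicle acceleration.

```python
# Illustrative sketch of the traveling-resistance calculation (all constants are assumed).
MU_R = 0.015 * 9.81     # reference friction coefficient x gravitational acceleration
WEIGHT = 1500.0         # reference vehicle weight [kg]
MU_A_TIMES_A = 0.8      # reference air resistance coefficient x reference front projection area
ALPHA = 0.03            # averaging coefficient of equation (8)
T = 0.01                # control period [s]

rl_cor_z = 0.0          # immediately preceding value RL_corZ of the correction value


def traveling_resistance(vp, vp_prev, tq_tire_act):
    """One control cycle of the FIG. 14 process; returns the traveling resistance RL [N]."""
    global rl_cor_z
    roll_r = MU_R * WEIGHT                                 # step 24: reference rolling resistance
    air_r = MU_A_TIMES_A * vp ** 2                         # step 25: reference air resistance, equation (6)
    acc = (vp - vp_prev) / T                               # step 26: vehicle acceleration, equation (7)
    acc_r = WEIGHT * acc                                   # step 27: reference acceleration resistance
    rl_base = roll_r + air_r                               # step 28: reference traveling resistance
    all_rl = rl_base + acc_r                               # step 29: total reference traveling resistance
    d_rl = tq_tire_act - all_rl                            # step 30: difference dRL
    rl_cor_z = ALPHA * d_rl + (1.0 - ALPHA) * rl_cor_z     # equation (8): weighted average
    return rl_base + rl_cor_z                              # step 31 (described below): RL = RL_base + RL_cor
```

In this sketch, on a level road with the vehicle in the reference state, dRL stays near zero and the correction value remains small; a heavier vehicle or an uphill gradient makes the drive wheel torque exceed the total reference traveling resistance, so the correction value gradually grows toward that surplus.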
The reason for calculating the correction value RL_cor in this manner is to eliminate computation errors caused by temporary noise that can be contained in the signals output from the sensors; further, because the actual traveling resistance does not change instantaneously, an appropriate correction value RL_cor can be obtained even by such a calculation as described above.

It should be noted that when the difference (VPZ−VP) between the immediately preceding value VPZ of the vehicle speed VP and the current value of the same is larger than a predetermined value, the correction value RL_cor is set to the immediately preceding value RL_corZ thereof without being calculated by the aforementioned equation (8). This is because in the above-mentioned case (VPZ−VP > the predetermined value), i.e. when the driver is decelerating the vehicle V by a brake operation, a deceleration resistance is contained in the reference acceleration resistance Acc_r calculated in the step 27, so that it is impossible to properly calculate the reference acceleration resistance Acc_r, which makes it impossible to properly calculate the correction value RL_cor.

Next, the traveling resistance RL is calculated by adding the calculated correction value RL_cor to the reference traveling resistance RL_base (step 31), followed by terminating the present process.

Referring again to FIG. 13, in a step 14 following the step 13, an acceleration G_cmd demanded of the engine 3 (hereinafter referred to as “the demanded acceleration G_cmd”) (demanded acceleration degree parameter) is calculated using the reference vehicle weight Weight, the drive wheel demanded torque Tq_tire_cmd calculated in the step 12, and the traveling resistance RL calculated in the step 31, by the following equation (9):

G_cmd=(Tq_tire_cmd−RL)/Weight (9)

Next, it is determined whether or not the calculated demanded acceleration G_cmd is smaller than a predetermined threshold value G_cmd_SH (step 15). The threshold value G_cmd_SH is set to a value slightly larger than the acceleration obtained during slow acceleration and cruising of the vehicle. For example, it is set to 0.1 m/s².

If the answer to the question of the step 15 is affirmative (YES), it is judged that acceleration is not demanded, and the acceleration demand reference value G_jud is set to 0 (i.e. calculated as 0) (step 16), followed by terminating the present process. On the other hand, if the answer to the question of the step 15 is negative (NO), it is judged that acceleration is demanded, and the acceleration demand reference value G_jud is set to 1 (i.e. calculated as 1) (step 17), followed by terminating the present process. As described above, when acceleration is demanded, the acceleration demand reference value G_jud is set to 1, and otherwise to 0.

FIG. 15 shows a process for calculating the phase control input Ucain for use in controlling the aforementioned variable cam phase mechanism 70. First, in a step 41, it is determined whether or not the acceleration demand reference value G_jud set in the step 16 or 17 is equal to 0. If the answer to this question is affirmative (YES), i.e. if acceleration is not demanded, a fuel economy map value Cain_M_FC is calculated by searching a fuel economy map shown in FIG.
16 according to the engine speed NE and the engine demanded output Bmep_cmd, and the calculated fuel economy map value Cain_M_FC is set as the target cam phase Cain_cmd (step 42).

In the above-described fuel economy map, the fuel economy map value Cain_M_FC is set to a retarded value so as to ensure stable combustion in a low load region where the engine demanded output Bmep_cmd is small and in a low engine speed region where the engine speed NE is low. Further, in a medium load region where the engine demanded output Bmep_cmd is medium and in a medium-to-high engine speed region, the fuel economy map value Cain_M_FC is set to an advanced value so as to improve fuel economy by reducing the pumping loss by means of internal EGR. Furthermore, in a high load region where the engine demanded output Bmep_cmd is large, the fuel economy map value Cain_M_FC is set to a retarded value so as to ensure a large amount of fresh (intake) air.

On the other hand, if the answer to the question of the step 41 is negative (NO), i.e. if the acceleration demand reference value G_jud is equal to 1, which means that acceleration is demanded, an output map value Cain_M_P is calculated by searching an output map shown in FIG. 17 according to the engine speed NE and the engine demanded output Bmep_cmd, and the calculated output map value Cain_M_P is set as the target cam phase Cain_cmd (step 43). In this output map, the output map value Cain_M_P is basically set to have the same tendency as that of the fuel economy map value Cain_M_FC with respect to the engine speed NE and the engine demanded output Bmep_cmd, and is set to a more retarded value than the fuel economy map value Cain_M_FC, as a whole, so as to obtain a larger intake air amount QA for obtaining a larger output of the engine 3.

In a step 44 following the above-described step 42 or 43, the phase control input Ucain is calculated according to the difference between the calculated target cam phase Cain_cmd and the cam phase Cain with a predetermined feedback control algorithm, such as a PID control algorithm, followed by terminating the present process. The phase control input Ucain calculated as above is input to the variable cam phase mechanism 70, whereby the cam phase Cain is controlled such that it becomes equal to the target cam phase Cain_cmd.

FIG. 18 shows a process for calculating the aforementioned lift control input Uliftin. First, in a step 51, a target valve lift Liftin_cmd is calculated by searching a Liftincmd map shown in FIG. 19 according to the engine speed NE, the engine demanded output Bmep_cmd, and the cam phase Cain. In this map, a plurality of predetermined values of the cam phase Cain (only one of which is shown in FIG. 19) are set between the most retarded value Cainrt and the most advanced value Cainad, mentioned hereinabove, and when the cam phase Cain is not equal to any of the predetermined values, the target valve lift Liftin_cmd is calculated by interpolation, as illustrated in the sketch given below.

The Liftincmd map is prepared by empirically determining the relationship between the valve lift Liftin, the engine speed NE, the engine demanded output Bmep_cmd, and the cam phase Cain, and mapping the relationship.
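As a rough illustration of the interpolation mentioned above for the step 51, the following sketch looks up the map at the two predetermined Cain values that bracket the current cam phase and interpolates linearly between the two results; the grid values, the placeholder map function, and its constants are invented for the example and are not the map data of FIG. 19.

```python
from bisect import bisect_right

CAIN_GRID = [0.0, 20.0, 40.0, 55.0]   # assumed predetermined values of the cam phase Cain [deg]


def liftin_map(ne, bmep_cmd, cain_grid_value):
    """Placeholder standing in for the empirically determined Liftincmd map at one grid value."""
    return 1.0 + 0.0004 * ne + 4.0 * bmep_cmd + 0.02 * cain_grid_value


def target_valve_lift(ne, bmep_cmd, cain):
    """Step 51 sketch: search the map at the bracketing Cain grid values, then interpolate."""
    if cain <= CAIN_GRID[0]:
        return liftin_map(ne, bmep_cmd, CAIN_GRID[0])
    if cain >= CAIN_GRID[-1]:
        return liftin_map(ne, bmep_cmd, CAIN_GRID[-1])
    i = bisect_right(CAIN_GRID, cain)        # index of the upper bracketing grid value
    c_lo, c_hi = CAIN_GRID[i - 1], CAIN_GRID[i]
    w = (cain - c_lo) / (c_hi - c_lo)        # linear interpolation weight toward the upper value
    return (1.0 - w) * liftin_map(ne, bmep_cmd, c_lo) + w * liftin_map(ne, bmep_cmd, c_hi)
```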
In the Liftincmd map, the target valve lift Liftin_cmd is set to a value enabling the engine demanded output Bmep_cmd to be obtained, with respect to the current cam phase Cain.

More specifically, the target valve lift Liftin_cmd is set to a larger value as the engine speed NE is higher, or as the engine demanded output Bmep_cmd is larger, i.e. as the load on the engine 3 is higher, so as to obtain a larger intake air amount QA for obtaining a larger output of the engine 3. Further, the target valve lift Liftin_cmd is set to a larger value as the cam phase Cain is more advanced. This is because as the cam phase Cain is more advanced, the speed of each piston 3b is lower during opening of the intake valve 4, and the internal EGR amount is larger, whereby the intake air amount QA becomes smaller with respect to the same valve lift Liftin. Therefore, the target valve lift Liftin_cmd is set to a larger value so as to compensate for the smaller intake air amount QA.

In a step 52 following the step 51, the lift control input Uliftin is calculated according to the difference between the calculated target valve lift Liftin_cmd and the valve lift Liftin with a predetermined feedback control algorithm, such as a PID control algorithm, followed by terminating the present process. The lift control input Uliftin calculated as above is input to the variable valve lift mechanism 50, whereby the valve lift Liftin is controlled such that it becomes equal to the target valve lift Liftin_cmd.

The engine output Bmep_act is controlled to the engine demanded output Bmep_cmd through control using the phase control input Ucain and the lift control input Uliftin, described above.

Next, the results of control by the control system 1 will be described in comparison with a comparative example with reference to FIGS. 20A and 20B. The comparative example shown in FIG. 20A is an example of results of control obtained when the target cam phase Cain_cmd is calculated using the aforementioned fuel economy map alone, similarly to the control by the conventional control system. According to the results of the comparative example, when the engine demanded output Bmep_cmd is suddenly increased by a demand of acceleration, the response delay of the variable cam phase mechanism 70 cannot be sufficiently compensated for, and therefore a sufficient intake air amount QA cannot be obtained. This causes response delay of the engine output Bmep_act with respect to the engine demanded output Bmep_cmd.

In contrast, as shown in FIG. 20B, according to the results of control by the control system 1, when the engine demanded output Bmep_cmd is suddenly increased by a demand of acceleration, the engine output Bmep_act quickly responds to the engine demanded output Bmep_cmd, whereby it was confirmed that it is possible to obtain high responsiveness of the engine output Bmep_act to a demand of acceleration.

Further, FIG. 21 shows a fuel economy ratio FEA (miles/gallon) under the control of the control system 1 together with fuel economy ratios FEFC and FEP obtained using the fuel economy map alone and the output map alone, respectively. As shown in FIG.
21, the fuel economy ratio FEA under the control of the control system 1 is larger than the fuel economy ratio FEP obtained using the output map alone, and substantially as large as the fuel economy ratio FEFC obtained using the fuel economy map alone, so that it was confirmed that it is possible to obtain excellent fuel economy of the engine 3.\n\nAs described above, according to the present embodiment, the phase control input Ucain is calculated based on the output map value Cain_M_P when acceleration is demanded, and otherwise, the phase control input Ucain is calculated based on the fuel economy map value Cain_M_FC (steps 42, 43 and 44). This makes it possible to ensure excellent fuel economy of the engine 3 and enhance the responsiveness of the output of the engine 3 when acceleration is demanded. Further, the lift control input Uliftin is calculated based on the detected cam phase Cain (steps 51 and 52), so that it is possible to compensate for the response delay of the variable cam phase mechanism 70 by controlling the intake air amount QA via the variable valve lift mechanism 50. This makes it possible to obtain the above-described effects even more effectively, i.e. the effects of ensuring excellent fuel economy and enhancing the responsiveness of the output of the engine 3 when acceleration is demanded.\n\nFurthermore, the demanded acceleration G_cmd is calculated based on the difference between the drive wheel demanded torque Tq_tire_cmd and the traveling resistance RL, and hence it is possible to properly calculate the demanded acceleration G_cmd as a parameter indicative of the degree of acceleration demanded of the engine 3 (hereinafter referred to as “the demanded acceleration degree”). Further, the correction value RL_cor is calculated based on the difference between the drive wheel torque Tq_tire_act and the total reference traveling resistance ALL_RL, which is the sum of the reference traveling resistance RL_base and the reference acceleration resistance Acc_r (steps 29 and 30), and the reference traveling resistance RL_base is corrected by the calculated correction value RL_cor, whereby the traveling resistance RL is calculated (step 31). This makes it possible to accurately calculate the traveling resistance RL with reference to the reference traveling resistance RL_base. Further, as is apparent from the method of calculating the traveling resistance RL, it is possible to calculate the traveling resistance RL only by computations, without requiring values obtained by detecting the actual weight of the vehicle V and the gradient of a road surface. This makes it possible to dispense with sensors for detecting the above values, thereby making it possible to reduce the manufacturing costs of the control system 1.\n\nNext, a variation of the process for calculating the acceleration demand reference value will be described with reference to FIG. 22. The present process is distinguished from the aforementioned FIG. 13 process only in the method of calculating the acceleration demand reference value G_jud. In FIG. 22, steps identical to those of the process in FIG. 13 are designated by the same step numbers. Further, as is apparent from FIG. 22, steps 15 et seq. are different, and hence the steps 15 et seq. will be described hereinafter with reference to FIG.
22.\n\nIf the answer to the question of the step 15 is affirmative (YES) (G_cmd<G_cmd_SH), the current value of the acceleration demand reference value G_jud is stored as the immediately preceding value G_judZ of the acceleration demand reference value (step 61). Then, the current value G_jud is calculated by subtracting a predetermined value G_jref (e.g. 0.01) from the immediately preceding value G_judZ set as above (step 62).\n\nNext, it is determined whether or not the calculated acceleration demand reference value G_jud is not larger than 0 (step 63). If the answer to this question is negative (NO), the present process is immediately terminated, whereas if the answer to the question is affirmative (YES), it is judged that acceleration is not demanded, and the aforementioned step 16 is executed to set the acceleration demand reference value G_jud to 0, followed by terminating the present process.\n\nAs described above, in the present process, as shown by time points t1 and t2 appearing in FIG. 23, even when the demanded acceleration G_cmd becomes lower than the threshold value G_cmd_SH, differently from the first embodiment described above, the acceleration demand reference value G_jud is not immediately set to 0 but is progressively reduced from 1 by the predetermined value G_jref at each control timing of the present process (steps 61 and 62). Then, when the state in which the demanded acceleration G_cmd is lower than the threshold value G_cmd_SH continues long enough that the value obtained by subtracting the predetermined value G_jref from the immediately preceding value G_judZ of the acceleration demand reference value becomes not larger than 0 (YES to step 63), the acceleration demand reference value G_jud is held at 0 insofar as G_cmd<G_cmd_SH holds (step 16) (from t2 onward).\n\nThe acceleration demand reference value G_jud is calculated as above for the following reason: As described hereinabove, the demanded acceleration G_cmd is calculated using the engine demanded output Bmep_cmd calculated based on the accelerator pedal opening AP. In contrast, in the vehicle equipped with the transmission 80 of the manual type as in the present embodiment, the driver normally takes his foot off the accelerator pedal during a shift change, and hence the accelerator pedal opening AP becomes equal to 0. Therefore, when a shift change is executed while acceleration is demanded, the accelerator pedal opening AP becomes equal to 0, as described above, whereby the demanded acceleration G_cmd sometimes takes a negative value. In such a case, the acceleration demand reference value G_jud is not immediately set to 0, so that the control of the cam phase Cain using the output map is maintained.\n\nFurther, the time period over which the control of the cam phase Cain using the output map is maintained is determined according to the magnitude of the predetermined value G_jref. Therefore, the predetermined value G_jref is set such that this time period becomes slightly longer than the time period it takes for the driver to step on the accelerator pedal again after temporarily taking his foot off it for a shift change.
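A minimal sketch of this hold-and-decay logic is given below, for illustration only: the reset to 1 when acceleration is demanded is assumed from the context of the first embodiment, and G_jref = 0.01 is simply the example value mentioned in the text.

```python
# Sketch of the FIG. 22 variation (steps 61-63): instead of clearing G_jud as soon as
# G_cmd falls below the threshold, ramp it down by G_jref each control cycle so the
# output map stays selected across a short pedal release such as a shift change.

G_JREF = 0.01  # example decrement per control timing, taken from the text

def update_g_jud(g_cmd, g_cmd_sh, g_jud_prev):
    """Return the updated acceleration demand reference value G_jud."""
    if g_cmd >= g_cmd_sh:
        # Acceleration demanded: assumed (from the first embodiment) to set G_jud to 1.
        return 1.0
    g_jud = g_jud_prev - G_JREF        # steps 61 and 62: decrement the previous value
    return max(g_jud, 0.0)             # steps 63 and 16: hold at 0 once it reaches 0

# Roughly 1/G_JREF control cycles pass before the output map is released, which is why
# the text sets G_jref from the expected duration of a shift change.
```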
This makes it possible to prevent the control of the cam phase Cain using the output map from being uselessly switched to the control of the cam phase Cain using the fuel economy map during a shift change, whereby it is possible to maintain the control of the cam phase Cain using the output map.\n\nIt should be noted that the lapse of time after the demanded acceleration G_cmd has become lower than the threshold value G_cmd_SH may instead be measured by time measuring means, such as a timer, and when the lapse of time exceeds a predetermined time period, the acceleration demand reference value G_jud may be set to 0. In this case, by setting the predetermined time period to be slightly longer than the time period which it takes before the driver steps on the accelerator pedal after he temporarily takes his foot off the accelerator pedal for a shift change, it is possible to obtain the same advantageous effects as described hereinabove.\n\nNext, a second embodiment of the present invention will be described. The present embodiment is distinguished from the first embodiment only in the process for calculating the acceleration demand reference value and the process for calculating the phase control input. More specifically, only the method of calculating an acceleration demand reference value G_juda and the method of calculating the target cam phase Cain_cmd are different, and hereinafter, a description will be mainly given of points different from the first embodiment. In the present embodiment, the acceleration demand reference value G_juda corresponds to the “weight dependent on the demanded acceleration degree parameter”.\n\nFirst, the process for calculating the acceleration demand reference value will be described with reference to FIG. 24. In a step 71 following the step 14, the acceleration demand reference value G_juda is calculated by searching a G_juda table shown in FIG. 25 according to the demanded acceleration G_cmd, followed by terminating the present process. In FIG. 25, Gmin and Gmax represent the minimum value and the maximum value of the demanded acceleration G_cmd, respectively.\n\nIn the G_juda table, the acceleration demand reference value G_juda is set to 0 within a range where the demanded acceleration G_cmd is smaller than the threshold value G_cmd_SH, whereas within a range where the demanded acceleration G_cmd is larger than a predetermined value Gref (>G_cmd_SH), the acceleration demand reference value G_juda is set to 1. Further, within a range where G_cmd_SH≦G_cmd≦Gref holds, the acceleration demand reference value G_juda is linearly set to a larger value as the demanded acceleration G_cmd is larger. The predetermined value Gref is set to a value slightly smaller than the maximum value Gmax.\n\nNext, the process for calculating the phase control input will be described with reference to FIG. 26. First, in a step 81, similarly to the aforementioned step 42, the fuel economy map value Cain_M_FC is calculated by searching the FIG. 16 fuel economy map described above according to the engine speed NE and the engine demanded output Bmep_cmd. Then, similarly to the aforementioned step 43, the output map value Cain_M_P is calculated by searching the FIG.
17 output map described above according to the engine speed NE and the engine demanded output Bmep_cmd (step 82).\n\nNext, in a step 83, the target cam phase Cain_cmd is calculated using the acceleration demand reference value G_juda calculated in the aforementioned step 71, and the fuel economy map value Cain_M_FC and the output map value Cain_M_P calculated in the steps 81 and 82, respectively, by the following equation (10):\n\nCain_cmd = G_juda × Cain_M_P + (1 − G_juda) × Cain_M_FC (10)\n\nThen, the aforementioned step 44 is executed, whereby the phase control input Ucain is calculated based on the calculated target cam phase Cain_cmd, followed by terminating the present process.\n\nAs is apparent from the equation (10), the target cam phase Cain_cmd for use in calculation of the phase control input Ucain is calculated as the weighted average of the output map value Cain_M_P and the fuel economy map value Cain_M_FC, using the acceleration demand reference value G_juda as a weighting coefficient, with the output map value Cain_M_P being multiplied by G_juda and the fuel economy map value Cain_M_FC being multiplied by (1−G_juda). Further, as is apparent from the settings of the above-described G_juda table, the acceleration demand reference value G_juda is calculated such that it becomes equal to 0 when the demanded acceleration G_cmd is smaller than the threshold value G_cmd_SH, whereas when the demanded acceleration G_cmd is not smaller than the threshold value G_cmd_SH, the acceleration demand reference value G_juda is linearly calculated such that it takes a larger value within the range between 0 and 1 as the demanded acceleration G_cmd is larger.\n\nAs is apparent from the above description, when the demanded acceleration G_cmd is not smaller than the threshold value G_cmd_SH, the weight of the output map value Cain_M_P with respect to the target cam phase Cain_cmd becomes larger as the acceleration demand reference value G_juda is larger, i.e. as the demanded acceleration degree is larger. Therefore, when acceleration is demanded, the weight of the output map value Cain_M_P with respect to the target cam phase Cain_cmd becomes larger, so that it is possible to enhance the responsiveness of the output of the engine 3. Further, when G_cmd<G_cmd_SH holds, i.e. when acceleration is not demanded, the target cam phase Cain_cmd is set to the fuel economy map value Cain_M_FC, whereby it is possible to obtain excellent fuel economy of the engine 3. From the above, similarly to the above-described first embodiment, it is possible to ensure excellent fuel economy and enhance the responsiveness of the output of the engine 3 when acceleration is demanded.\n\nFurther, as described above, as the demanded acceleration degree is larger, the weight of the output map value Cain_M_P with respect to the target cam phase Cain_cmd becomes larger, so that it is possible to obtain an appropriate target cam phase Cain_cmd that matches the magnitude of the degree of acceleration.
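The table lookup and the weighting of equation (10) amount to a simple interpolation, sketched below for illustration only; the threshold, Gref and map values used in the example are placeholders, not figures from the patent.

```python
# Sketch of the second embodiment: the G_juda table of FIG. 25 (0 below G_cmd_SH,
# 1 above Gref, linear in between) and the weighted average of equation (10).

def g_juda_from_table(g_cmd, g_cmd_sh, g_ref):
    """Piecewise-linear weight between 0 and 1, as described for the G_juda table."""
    if g_cmd < g_cmd_sh:
        return 0.0
    if g_cmd > g_ref:
        return 1.0
    return (g_cmd - g_cmd_sh) / (g_ref - g_cmd_sh)

def target_cam_phase(g_juda, cain_m_p, cain_m_fc):
    """Equation (10): Cain_cmd = G_juda * Cain_M_P + (1 - G_juda) * Cain_M_FC."""
    return g_juda * cain_m_p + (1.0 - g_juda) * cain_m_fc

# Example with placeholder numbers: halfway between the threshold and Gref the target
# is the midpoint of the two map values.
w = g_juda_from_table(g_cmd=0.5, g_cmd_sh=0.2, g_ref=0.8)   # w = 0.5
print(target_cam_phase(w, cain_m_p=5.0, cain_m_fc=10.0))    # 7.5
```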
This makes it possible to ensure excellent fuel economy and enhance the responsiveness of the output of the engine 3 when acceleration is demanded, in a well balanced manner.\n\nIt should be noted that although in the above-described G_juda table, a section for setting the acceleration demand reference value G_juda to a value between 0 and 1 is defined by the threshold value G_cmd_SH and the predetermined value Gref, this is not limitative, but the section may be defined by other desired values insofar as the values are larger than 0 and smaller than the maximum value Gmax.\n\nIt should be noted that the present invention is by no means limited to the embodiments described above, but it can be practiced in various forms. For example, although in the above-described embodiments, the present invention is applied to the engine 3 of a type in which the response speed of the variable valve lift mechanism 50 is higher than that of the variable cam phase mechanism 70, by way of example, this is not limitative, but inversely, the present invention can be applied to an internal combustion engine of a type in which the response speed of a variable cam phase mechanism is higher than that of a variable valve lift mechanism. Further, although in the above-described embodiments, the first and second variable valve-actuating mechanisms are formed by the variable valve lift mechanism 50 and the variable cam phase mechanism 70, respectively, this is not limitative, but the first and second variable valve-actuating mechanisms may be formed by other mechanisms insofar as they are capable of changing the intake air amount QA by changing the operating characteristics of the intake valves 4. Furthermore, although in the above-described embodiments, the present invention is applied to the engine 3 in which the valve lift Liftin and the valve timing of the intake valves 4 are changed by the variable valve lift mechanism 50 and the variable cam phase mechanism 70, respectively, by way of example, this is not limitative, but the present invention can be applied to an internal combustion engine of a type in which the same operating characteristics of the intake valves 4 are changed by two mechanisms with different response speeds.\n\nFurther, although in the above-described embodiments, the target cam phase Cain_cmd is determined by searching the fuel economy map or the output map, the target cam phase Cain_cmd may be calculated by computation without using the maps. Furthermore, although in the above-described embodiments, as a load parameter, the engine demanded output Bmep_cmd, which is calculated, is used, the accelerator pedal opening AP, which is detected, may be used. Further, although in the above-described embodiments, the drive wheel torque Tq_tire_act and the vehicle acceleration Acc are determined by computation, they may be determined by detection using sensors. Further, the demanded acceleration G_cmd may be calculated by another method in place of the method employed in the above-described embodiments. For example, the demanded acceleration G_cmd may be calculated based on the amount of change in the accelerator pedal opening AP. Further, the traveling resistance RL as well may be calculated by another method in place of the method according to the above-described embodiments. 
For example, the weight of the vehicle V and the gradients of road surfaces may be detected by sensors for calculating the traveling resistance RL based on the detected values.\n\nFurther, although in the above-described embodiments, the present invention is applied to the automotive engine 3 by way of example, this is not limitative, but it can be applied to various types of industrial internal combustion engines including engines for ship propulsion machines, such as an outboard motor having a vertically-disposed crankshaft.\n\nIt is further understood by those skilled in the art that the foregoing are preferred embodiments of the invention, and that various changes and modifications may be made without departing from the spirit and scope thereof." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90885544,"math_prob":0.9687342,"size":82801,"snap":"2019-35-2019-39","text_gpt3_token_len":17097,"char_repetition_ratio":0.21719386,"word_repetition_ratio":0.35019106,"special_character_ratio":0.20139854,"punctuation_ratio":0.08058534,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.97939193,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-18T00:41:02Z\",\"WARC-Record-ID\":\"<urn:uuid:4feb0957-a900-4258-910f-13b1c86baccf>\",\"Content-Length\":\"122292\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:92d2b9f2-b4dc-48c7-8ddc-9198ef729c58>\",\"WARC-Concurrent-To\":\"<urn:uuid:8076baa8-e797-4c55-95d7-92a545ba2c50>\",\"WARC-IP-Address\":\"144.202.252.20\",\"WARC-Target-URI\":\"http://www.freepatentsonline.com/y2008/0172165.html\",\"WARC-Payload-Digest\":\"sha1:JFDIKQA6OZRNTVFUI5UIHP6Q4DFZ4XEN\",\"WARC-Block-Digest\":\"sha1:VICQ24P525E5YZU7MR5IFEHJJSNUWHEK\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573173.68_warc_CC-MAIN-20190918003832-20190918025832-00167.warc.gz\"}"}
https://webapps.stackexchange.com/questions/149969/how-to-filter-based-on-multiple-cases/149971
[ "# How to filter based on multiple cases\n\nI am aware that I may be asking the wrong question here. Not sure how to ask, but I am trying to create a budget on google sheets that to record how much I spend on each day. Right now I have the transaction shown below:\n\n``````Date | Category | Amount\n--------------------------\nJan 1| Dining | \\$5\nJan 1| Dining | \\$30\nJan 1| Gas | \\$20\nJan 2| Other | \\$15\n``````\n\nI want to know how I could use the information I have above to create the table shown below:\n\n``````Date | Dining| Gas| Other\n------------------------\nJan 1| \\$35 | \\$20| \\$0\nJan 2| \\$0 | \\$0 | \\$15\n``````\n\nI'm stuck trying to use a LOOKUP and SUMIF together.\n\n## 1 Answer\n\nThe manual solution would be to create a `Pivot` table of your data.\n\nHowever, if you are looking for a formula solution, you can create a pivot table by using a `query` formula:\n\n``````={query(A1:C,\"Select A, Sum(C) where A is not null group by A Pivot B limit 0\",1);\nArrayFormula((N(Query(query(A1:C,\"Select A, Sum(C) where A is not null group by A Pivot B\",1),\n\"Select * offset 1\",0))))}\n``````", null, "Or you could just use the simplest version of it:\n\n`=query(A1:C,\"Select A, Sum(C) where A is not null group by A Pivot B\",1)`\n\nbut you will get empty cells when the amounts are zero.\n\n• Correct Marios. I too would go for the formula. Still. I think you over complicated it. One could use a much simpler version like: `=query(A1:C,\"Select A, Sum(C) where A is not null group by A Pivot B\",1)` – marikamitsos Jan 10 at 18:07\n• @marikamitsos the OP wants to put zeros when the amount is 0\\$ so I just wanted to stick with that. But besides that then yes the single formula is much easier. Thanks a lot for your feedback :) – Mario Jan 10 at 19:33\n• Wow, this is perfect, and complicated. Thank you. Time to learn more about pivots, queries and arrayformula – Beginner Programmer Jan 11 at 0:11\n• @Marios Do you know how I could include dates that doesn't contain any value? For example if we change Jan 2 to Jan 3. Is it possible to have the Jan 2 row still existing with \\$0's? – Beginner Programmer Jan 12 at 18:28\n• I am not sure if that's possible. Could you post a new question so other readers would be able to help you out ? @BeginnerProgrammer – Mario Jan 12 at 18:31" ]
[ null, "https://i.stack.imgur.com/jmrRZ.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9059829,"math_prob":0.7043028,"size":586,"snap":"2021-04-2021-17","text_gpt3_token_len":164,"char_repetition_ratio":0.17353952,"word_repetition_ratio":0.0,"special_character_ratio":0.38737202,"punctuation_ratio":0.05263158,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9551249,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-11T09:29:10Z\",\"WARC-Record-ID\":\"<urn:uuid:60f3d95d-ab19-42a6-9d4f-0aea1bc45bfa>\",\"Content-Length\":\"165923\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5d845785-5183-4c2f-9483-834e6401fd9e>\",\"WARC-Concurrent-To\":\"<urn:uuid:354459c7-a151-4bb4-91a9-6bbb28a5a099>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://webapps.stackexchange.com/questions/149969/how-to-filter-based-on-multiple-cases/149971\",\"WARC-Payload-Digest\":\"sha1:5ZCFSPQQ2BQXCNY7XUTO2W7US27JAA2Z\",\"WARC-Block-Digest\":\"sha1:PZURSSHRY2BP3AB4IEBR7SJBVHRPKYLU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038061820.19_warc_CC-MAIN-20210411085610-20210411115610-00144.warc.gz\"}"}
https://erj.ersjournals.com/highwire/markup/90747/expansion?width=1000&height=500&iframe=true&postprocessors=highwire_tables%2Chighwire_reclass%2Chighwire_figures%2Chighwire_math%2Chighwire_inline_linked_media%2Chighwire_embed
[ "Table 9—\n\nCombined analysis for maximal inspiratory and expiratory mouth pressure(PI,max and PE,max, respectively) in chronic obstructive pulmonary disease nonrandomised controlled trials\n\n First author [Ref.] Favours bilevel n Favours control n MD±se Weight % MD (random) 95% CI PI,max analysis Lien 26 11 11 0.4000±12.8142 3.35 0.40 (−24.72–25.52) Lin 27 10 12 5.0000±2.3839 96.65 5.00 (0.33–9.67) Total 21 23 100 4.85 (0.25–9.44) PE,max analysis Lien 26 11 11 1.0000±10.3830 4.40 1.00 (−19.35–21.35) Lin 27 10 12 5.0000±2.2265 95.60 5.00 (0.64–9.36) Total 21 23 100 4.82 (0.56–9.09)\n• MD: mean difference; CI confidence interval. Comparison: crossover trials of bilevel noninvasive positive pressure ventilation versus all modalities. Outcome: PI,max cmH2O or PE,max cmH2O. Test for heterogeneity in PI,max analysis: Chi squared = 0.12, df = 1 (p<0.72), I2 = 0%. Test for overall effect: Z = 2.07 (p = 0.04). Test for heterogeneity in PE,max analysis: Chi squared = 0.14, df = 1 (p<0.71), I2 = 0%. Test for overall effect: Z = 2.22 (p = 0.03)." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.62461656,"math_prob":0.89752203,"size":896,"snap":"2019-51-2020-05","text_gpt3_token_len":378,"char_repetition_ratio":0.10986547,"word_repetition_ratio":0.08695652,"special_character_ratio":0.54241073,"punctuation_ratio":0.25396827,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9512348,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T21:10:26Z\",\"WARC-Record-ID\":\"<urn:uuid:2d6775a1-c071-4772-9a1b-249ea87a9f6e>\",\"Content-Length\":\"10009\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:62920962-82df-4767-9d6d-6d284f3f9b4c>\",\"WARC-Concurrent-To\":\"<urn:uuid:68e99bbd-9ea4-4441-85e4-0a0d64820901>\",\"WARC-IP-Address\":\"104.16.192.19\",\"WARC-Target-URI\":\"https://erj.ersjournals.com/highwire/markup/90747/expansion?width=1000&height=500&iframe=true&postprocessors=highwire_tables%2Chighwire_reclass%2Chighwire_figures%2Chighwire_math%2Chighwire_inline_linked_media%2Chighwire_embed\",\"WARC-Payload-Digest\":\"sha1:AYYIYOZQY4F2KYVSM4VY7AMS2PJNHXJW\",\"WARC-Block-Digest\":\"sha1:JSX2RTQCDVFYUZIKQXJYY62U7ZV5DSXU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250605075.24_warc_CC-MAIN-20200121192553-20200121221553-00294.warc.gz\"}"}
https://forum.skewed.de/t/random-graph-with-degree-sequence/1037
[ "# Random graph with degree sequence.\n\nHi,\n\nI'm using graph-tool to try to generate random graphs with a sequence of\ndegrees. For example, in a 3-node graph, I generated a random graph with\nall nodes with input degrees 1 and output degrees 1.\n\nMy code:\n\nimport graph_tool.all as gt>>> def deg_sampler():... return 1,1... >>> g = gt.random_graph(3,deg_sampler,parallel_edges=True, self_loops=False)>>> gt.graph_draw(g)\n\nCan I generate a random graph defining the input and output degrees of each\nnode? For example, tree nodes with respectively the input degrees (1, 2, 0)\nand output degrees (1, 0, 2).\n\nThanks,\n\nAlvaro\n\nattachment.html (1.37 KB)\n\nNi!\nHi Alvaro, this is explained in the documentation for the\ngraph_tool.generation.random_graph\n<https://graph-tool.skewed.de/static/doc/generation.html#graph_tool.generation.random_graph&gt;\nfunction concerning the `deg_sampler`:\n\nOptionally, you can also pass a function which receives one or two\narguments. If block_membership is None, the single argument passed will\nbe the index of the vertex which will receive the degree. If\nblock_membership is not None, the first value passed will be the vertex\nindex, and the second will be the block value of the vertex.\n\nCheers,\n.~´\n\nattachment.html (3.17 KB)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9081308,"math_prob":0.91401094,"size":332,"snap":"2022-40-2023-06","text_gpt3_token_len":72,"char_repetition_ratio":0.15853658,"word_repetition_ratio":0.0,"special_character_ratio":0.19879518,"punctuation_ratio":0.10769231,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98178595,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-01T15:14:01Z\",\"WARC-Record-ID\":\"<urn:uuid:bb249af8-2756-4b26-9e28-4e2c6e383fe1>\",\"Content-Length\":\"15840\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:10e811c2-539a-4c91-a7fb-2a9dc333bd4f>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa85671b-21a5-45f1-9812-08f8e7375a66>\",\"WARC-IP-Address\":\"49.12.93.243\",\"WARC-Target-URI\":\"https://forum.skewed.de/t/random-graph-with-degree-sequence/1037\",\"WARC-Payload-Digest\":\"sha1:NQRLBX2VCF3WI2WJZW6VFXIJ4LBNTP4Z\",\"WARC-Block-Digest\":\"sha1:SIRPNE77ME2QDZ35SIVVT6LNFFOIOUFE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499946.80_warc_CC-MAIN-20230201144459-20230201174459-00159.warc.gz\"}"}
https://tex.stackexchange.com/questions/174663/colorful-logic-gates-using-circuitikz
[ "# Colorful Logic Gates using Circuitikz\n\nI have to draw following circuit", null, "the LaTeX source of which is\n\n\\documentclass{beamer}\n\\usepackage{circuitikz}\n\\begin{document}\n\\begin{frame}\n\\begin{figure}\n\\centering\n\\begin{circuitikz} \\draw\n(0,0) node[not port] (not1) {}\n(0,2) node[xor port] (xor1) {}\n(0,4) node[and port] (and1) {}\n(2,3) node[nor port] (nor1) {}\n(4,2) node[xor port] (xor2) {}\n(6,3) node[or port] (or1) {}\n(6,1) node[and port] (and2) {}\n(nor1.in 1) node[above](f) {U}\n(nor1.in 2) node[below](g) {U}\n(xor2.in 1) node[above](h) {U}\n(xor2.in 2) node[below](i) {U}\n(or1.out) node[right](j) {U}\n(and1.out) -- (nor1.in 1)\n(xor1.out) -- (nor1.in 2)\n(nor1.out) -- (xor2.in 1)\n(not1.out) -- (xor2.in 2)\n(and1.out) -- (or1.in 1)\n(xor2.out) -- (or1.in 2)\n(xor1.out) -- (and2.in 1)\n(not1.out) -- (and2.in 2);\n\\end{circuitikz}\n\\end{figure}\n\\end{frame}\n\\end{document}\n\n\nThe circuit looks clumsy. What I want is that the gates whose inputs have been labelled U are drawn below the circuit such that the inputs wires of these circuits go downwards and then their outputs go upwards to the respective inputs.\n\nThe main question which I want to ask (which is also the title) is how to color the input and output wires of these gates red. By these gates I mean the gates whose inputs have been labelled U.\n\nWill be very thankful even if slight help is offered.\n\nThe diagram would be improved alot by using connecting wires that are only horizontal or vertical. There is a conveient syntax |- for a path that is first vertical then horizontal, and the corresponding -|. You can apply colours to the individual symbols and to the connecting wires by specifying color=red. Perhaps this is close to what you are after:", null, "\\documentclass{beamer}\n\\usepackage{circuitikz}\n\\begin{document}\n\\begin{frame}\n\\begin{figure}\n\\centering\n\\begin{circuitikz} \\draw\n(0,0) node[not port] (not1) {}\n(0,2) node[xor port] (xor1) {}\n(0,4) node[and port] (and1) {}\n(2,3) node[nor port,color=red] (nor1) {}\n(4,1) node[xor port,color=red] (xor2) {}\n(6,3) node[or port,color=red] (or1) {}\n(6,1) node[and port] (and2) {}\n(nor1.in 1) node[above](f) {U}\n(nor1.in 2) node[below](g) {U}\n(xor2.in 1) node[left](h) {U}\n(xor2.in 2) node[left](i) {U}\n(or1.out) node[right](j) {U};\n\\draw[color=red] (and1.out) |- (nor1.in 1)\n(xor1.out) |- (nor1.in 2)\n(nor1.out) -| (xor2.in 1)\n(not1.out) -| (xor2.in 2)\n(xor2.out) |- (or1.in 2);\n\\draw\n(and1.out) -- +(4,0) |- (or1.in 1)\n(xor1.out) -| (and2.in 1)\n(not1.out) -| (and2.in 2);\n\\end{circuitikz}\n\\end{figure}\n\\end{frame}\n\\end{document}\n\n\nAs the symbols are one unit, you can not (easily) change the colour an input wire right up to the symbol body.\n\n• Dear Friend, How can I show my gratitude! Thanks a lot. If possible would have given 100 bounty points, which is all I have. – kamalbanga May 1 '14 at 14:04\n• @kamalbanga after a few days, your question will be eligible for a bounty :) – cmhughes May 1 '14 at 15:14\n• @cmhughes I will never be using LaTeX after a few days since my student life ends in two days. So I better use up my points. :) – kamalbanga May 1 '14 at 15:19\n• Comments should not be used for discussions, but in that code you link to change (xor2.out) -| (not2.in) to (xor2.out) |- (not2.in) – Andrew Swann May 1 '14 at 15:55\n• Oh, silly mistake. Got it. – kamalbanga May 1 '14 at 15:57\n\nHere is a solution with the circuit libraries of TikZ. 
We need the extra -|- and |-| styles from Vertical and horizontal lines in pgf-tikz because the elements from circuits.logic... don't have pins.\n\n\\documentclass{beamer}\n\\usepackage{tikz}\n\\usetikzlibrary{\ncircuits,\ncircuits.logic.IEC,\ncircuits.logic.US,\ncircuits.logic.CDH,\ncalc\n}\n\\tikzset{\n-|-/.style={\nto path={\n(\\tikztostart) -| ($(\\tikztostart)!#1!(\\tikztotarget)$) |- (\\tikztotarget)\n\\tikztonodes\n}\n},\n-|-/.default=0.5,\n|-|/.style={\nto path={\n(\\tikztostart) |- ($(\\tikztostart)!#1!(\\tikztotarget)$) -| (\\tikztotarget)\n\\tikztonodes\n}\n},\n|-|/.default=0.5,\n}\n\\begin{document}\n\\newcommand\\pic[circuit logic IEC]{\n\\begin{tikzpicture}[#1]\n\\draw\n(0,0) node[not gate] (not1) {}\n(0,2) node[xor gate] (xor1) {}\n(0,4) node[and gate] (and1) {}\n(2,3) node[red,nor gate] (nor1) {}\n(4,2) node[red,xor gate] (xor2) {}\n(6,3) node[red,or gate] (or1) {}\n(6,1) node[and gate] (and2) {};\n\\draw[red]\n(and1.output) to[-|-] (nor1.input 1) node[above left] {U}\n(xor1.output) to[-|-] (nor1.input 2) node[below left] {U}\n(nor1.output) to[-|-] (xor2.input 1) node[above left] {U}\n(not1.output) to[-|-] (xor2.input 2) node[below left] {U}\n(xor2.output) to[-|-] (or1.input 2)\n(or1.output) node[right] {U};\n\\draw\n(and1.output) to[-|-] (or1.input 1)\n(xor1.output) to[-|-] (and2.input 1)\n(not1.output) to[-|-] (and2.input 2);\n\\end{tikzpicture}\n}\n\\begin{frame}{circuit logic IEC}\n\\begin{figure}\n\\centering\n\\pic\n\\end{figure}\n\\end{frame}\n\\begin{frame}{circuit logic US}\n\\begin{figure}\n\\centering\n\\pic[circuit logic US]\n\\end{figure}\n\\end{frame}\n\\begin{frame}{circuit logic CDH}\n\\begin{figure}\n\\centering\n\\pic[circuit logic CDH]\n\\end{figure}\n\\end{frame}\n\\end{document}", null, "", null, "", null, "" ]
[ null, "https://i.stack.imgur.com/hZAF0.png", null, "https://i.stack.imgur.com/kIcg1.png", null, "https://i.stack.imgur.com/nggLW.png", null, "https://i.stack.imgur.com/QF75g.png", null, "https://i.stack.imgur.com/8Ua9k.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.68320626,"math_prob":0.9786308,"size":1307,"snap":"2019-43-2019-47","text_gpt3_token_len":463,"char_repetition_ratio":0.15886416,"word_repetition_ratio":0.04040404,"special_character_ratio":0.3573068,"punctuation_ratio":0.123636365,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9931104,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T16:45:12Z\",\"WARC-Record-ID\":\"<urn:uuid:b68398dc-6ddc-40ab-84d3-dc396ae0b26a>\",\"Content-Length\":\"149116\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ce058499-4f83-4cf2-8b3f-93a60ad64080>\",\"WARC-Concurrent-To\":\"<urn:uuid:d0bf59af-fac2-40fb-99b1-103e30485e05>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/174663/colorful-logic-gates-using-circuitikz\",\"WARC-Payload-Digest\":\"sha1:C5H2C3UEV2WOHW62RAFD3SMIGTSX6CC2\",\"WARC-Block-Digest\":\"sha1:IPPDVWY2VL3CFCHBSZCA6Q6K4MEM5ECX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670921.20_warc_CC-MAIN-20191121153204-20191121181204-00367.warc.gz\"}"}
https://www.ltu.se/edu/course/M00/M0013M/M0013M-Matematik-M-1.68554?l=en&kursView=kursplan
[ "", null, "COURSE SYLLABUS\n\nMathematics 7.5 credits\n\nMatematik M\nFirst cycle, M0013M\nVersion\nCourse syllabus valid: Autumn 2021 Sp 1 - Present\nThe version indicates the term and period for which this course syllabus is valid. The most recent version of the course syllabus is shown first.\n\n Education level First cycle Grade scale G U 3 4 5 Subject Mathematics Subject group (SCB) Mathematics\n\nEntry requirements\n\nIn order to meet the general entry requirements for first cycle studies you must have successfully completed upper secondary education and documented skills in English language and Courses M0029M-M0031M or corresponding.\n\nSelection\n\nThe selection is based on 1-165 credits.\n\nCourse Aim\nAfter the course the student shall\n• be able to use key concepts for functions of several variables: limit, continuity, partial derivative, the chain rule, directional derivative, gradient and Taylor polynom\n• be able to find stationary points and classify them, determine the maximum and minimum values of continuous functions on closed bounded domains and be able to use the Lagrange multiplicator method.\n• be able to compute multiple integrals by interated integration and do suitable change of variables when it is needed.\n• be to compute and interpret line- and surface integrals\n•  Be able to apply and interpret important concepts within vector calculus: Vector field, divergence, curl, Green’s theorem, divergence theorem and Stokes’ theorem.\n• be able to find the Fourier series corresponding to a periodic function and be able to find odd and even half range expansions of a given function.\n• Be able to derive som important partial differential equations (PDE) from known physical laws: wave equation, heat equation and Poisson’s equation.\n• be able to use the method of separation of variables to solve the above mentioned PDE for some simple geometries.\n• Be able to identify and solve problems which can be analyzed with the methods from the course and present the solutions in a logical and correct way and so that they are easy to follow.\n\nAn overall aim is that the student after the course besides being able to use the concepts and methods in the course also must be able to do the corresponding calculations with high accuracy, i.e. the final result should be correct.\n\nContents\n- Calculus in several variables: functions of several variables, partial differentiation, Taylor series, extreme values, multiple integration, line integrals, surface integrals, vector analysis (the divergence theorem and Stoke’s theorem). - Partial differential equations (PDE): Well known PDE (e.g. the wave equation, the heat equation, Laplace equation...) will be presented and discussed from an engineering point of view. Solution of PDE by separation of variables.\n\nRealization\nEach course occasion´s language and form is stated and appear on the course page on Luleå University of Technology's website.\nLectures and lessons.\n\nExamination\nIf there is a decision on special educational support, in accordance with the Guideline Student's rights and obligations at Luleå University of Technology, an adapted or alternative form of examination can be provided.\nYou have to pass a written exam. Grading scale: 3 4 5\n\nTransition terms\nThe course M0013M is equal to MAM235.\n\nExaminer\nPeter Wall\n\nTransition terms\nThe course M0013M is equal to MAM235\n\nLiterature. 
Valid from Autumn 2007 Sp 1 (May change until 10 weeks before course start)\nKreyszig E: Advanced Engineering Mathematics, latest edition.\n\nCourse offered by\nDepartment of Engineering Sciences and Mathematics\n\nModules" ]
[ null, "https://www.ltu.se/epok/public/images/logo_160x80_en.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87262243,"math_prob":0.89219207,"size":4469,"snap":"2022-05-2022-21","text_gpt3_token_len":953,"char_repetition_ratio":0.11959686,"word_repetition_ratio":0.005873715,"special_character_ratio":0.20765272,"punctuation_ratio":0.09622887,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9680353,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-18T08:18:21Z\",\"WARC-Record-ID\":\"<urn:uuid:4e3016cb-30d0-429f-8ddc-d352daa79beb>\",\"Content-Length\":\"140970\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aeb32c65-336d-4742-8930-7219fb362145>\",\"WARC-Concurrent-To\":\"<urn:uuid:db5fc4e5-6f6d-4def-a15e-78f0f6da39ca>\",\"WARC-IP-Address\":\"130.240.43.25\",\"WARC-Target-URI\":\"https://www.ltu.se/edu/course/M00/M0013M/M0013M-Matematik-M-1.68554?l=en&kursView=kursplan\",\"WARC-Payload-Digest\":\"sha1:TMEPVAERQSXJWRMTJ6WUMY5LNXIYK3TS\",\"WARC-Block-Digest\":\"sha1:523L3VZMTY5KF4IFWMKJCIYVDDSENQ2W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300805.79_warc_CC-MAIN-20220118062411-20220118092411-00277.warc.gz\"}"}
http://esalexandrias.gr/cambridge-vocabulary-zhzxjhk/differential-amplifier-working-animation-4fa8cc
[ "The differential amplifier uses two transistors in common emitter configuration. Instrumentation Amplifiers are basically used to amplify small differential signals. The circuit diagram of a differential amplifier using one opamp is shown below. How does the current source work to improve CMRR (reduce common-mode gain)? However, as is typical in most amplifiers, the larger signal, the more distorted it gets. Adding equations (5) and (9), we get the output voltage Vo, where Ad = differential gain and Ac = common mode gain. This site uses Akismet to reduce spam. A special implementation of Operational Amplifiers is the Instrumentation Amplifier, a type of Differential Amplifier with Input Buffer Amplifier. Differential Amplifier CSE 577 Spring 2011 Insoo Kim, Kyusun Choi Mixed Signal CHIP Design Lab. Single Input Unbalance Output- It is a type of configuration in which a single input is given an output is taken from only a single transistor. Difference- and common-mode signals. What is the maximum allowable base voltage if the differential input is large enough to completely steer the tail current? Figure 1 shows the basic differential amplifier. So CMRR value for this circuit to be infinite, Comparing equation (12) and (13), we have. + + + + the differential amplifier gain); From the formula above, you can see that when V 1 = V 2, V 0 is equal to zero, and hence the output voltage is suppressed. main application of Differential Amplifier is, it creates a difference between two input signals and then amplifies the differential signal. Therefore V+ = 0 V. Since the op-amp is ideal and negative feedback is present, the voltage of the inverting terminal (V−) is equal to the voltage of the non-inverting terminal (V+ = 0), according to the virtual ground concept. Privacy. Interactive animation shows how a transistor works. Difference between Amplifier and Oscillator, Difference Between Half Wave and Full Wave Rectifier, Difference Between Multiplexer (MUX) and Demultiplexer (DEMUX). Dual Input Unbalanced Output 4. So when the difference between terminals is taken, the noise will cancel each other. * An ideal differential amplifier has zero common-mode gain (i.e., A cm =0)! reduces speed of the transmission one final time. This process is known as the biasing amplifier and it is an important amplifier design to establish the exact operating point of a transistor amplifier which is ready to r… In his autobiography Vannevar Bush tells the story of a draftsman who learned differential equations in mechanical terms from working on the construction and maintenance of the MIT differential analyzer. Transfer power from engine to wheels; Acts as a reducing gear i.e. Dual Input Balanced Output Instrumentation Amplifier provides the most important function of Common-Mode Rejection (CMR). Assume VCC=2.5V. Working of Differential Amplifier. Note: Ideally CMRR is infinite. Department of Computer Science & Engineering The Penn State University. Insulated-Gate Field-Effect Transistors (MOSFET) Since the noise present will be having the same amplitude at the two terminals of the op-amp. Operational Amplifier as Differential Amplifier . Working Principle of Op-Amp Open Loop Operation of an Operational Amplifier. Here, Q 1 acts in two ways: firstly, as common emitter amplifier, by which applied input at Q 1 will provide an amplified inverted signal at output 1. 
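As a small numeric illustration of the quantities this page keeps referring to (differential gain Ad, common-mode gain Ac and CMRR), the following sketch is added here; it is generic textbook material, not taken from the page itself, and the gain values are arbitrary examples.

```python
# Generic illustration (not from the page above): output of a differential amplifier
# modelled as Vo = Ad*Vd + Ac*Vc, and its CMRR expressed in decibels.
import math

Ad = 1000.0   # example differential-mode gain
Ac = 0.05     # example common-mode gain

v1, v2 = 1.001, 0.999          # the two input voltages
vd = v1 - v2                   # differential input (2 mV)
vc = (v1 + v2) / 2.0           # common-mode input (1 V)

vo = Ad * vd + Ac * vc         # an ideal differential amplifier would give Ad*vd only
cmrr_db = 20.0 * math.log10(Ad / abs(Ac))   # about 86 dB for these example values

print(f"Vo = {vo:.3f} V, CMRR = {cmrr_db:.1f} dB")
```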
Since the op-amp is ideal and negative feedback is present, the voltage of the inverting terminal (V−) is equal to the voltage of the non-inverting terminal (V+), according to the virtual short concept. Discrete Semiconductor Circuits: Differential Amplifier 2. Checkout the THD results appearing in the in the output text file, BJT_DIFFAMP1.OUT. A differential amplifier is an op amp circuit which is designed to amplify the difference input available and reject the common-mode voltage. Learn how your comment data is processed. Its function is to amplify the differential voltage between the + input terminal (non -inverting terminal) and the - input terminal (inverting terminal). In the case of the first differential amplifier, when the input voltage is more than the feedback voltage than the input voltage of the two transistors Q3 and Q4 of second differential amplifier … A differential amplifier is an op amp circuit which is designed to amplify the difference input available and reject the common-mode voltage. Discrete Semiconductor Circuits: Simple Op-Amp 3. A differential amplifier, which is a circuit that amplifies the difference between two signals. Ask your students to define CMRR and explain its importance in a differential amplifier circuit. A signal is applied at the base of transistor Q 1 and no any signal is applied at the base of transistor Q 2. Based on the methods of providing input and taking output, differential amplifiers can have four different configurations as below. It is used for suppressing the effect of noise at the output. The two input signal V1 and V2 are applied to the op amp.eval(ez_write_tag([[728,90],'electricalvoice_com-box-3','ezslot_14',128,'0','0'])); Apply superposition theorem to find out the output voltage. BASIC SUBTRACTOR OR DIFFERENCE AMPLIFIER . Inverting Input (Yellow) and Differential Amplifier Output (Blue) - 180 Degree Phase Shift. Differential Amplifier using Op-amp. Nothing new here. Note: For a better differential amplifier, CMRR should be as high as possible. Your email address will not be published. Let us see the First case where. As we can see that the voltage across R4 is zero. Instead we're stuck with a real op-amp. When there is no input voltage to the transistor Q1, the voltage drop across resistor Rc1 is very less as a result output transistor Q1 is high. There are three specs here that affect us the most: input and output range; gain-bandwidth product (GBW) input offset voltage and currents; Input and output range is always a concern for any op-amp circuit. The differential amplifier implemented using BJT’s are shown below. amplified) by the differential amplifier gain A d. Run a few simulations while increasing VS beyond 10 mV. Differential amplifiers can be made using one opamp or two opamps. First of all, deactivate V2 and connect it to ground as shown in figure 2. eval(ez_write_tag([[250,250],'electricalvoice_com-medrectangle-3','ezslot_1',119,'0','0'])); (1). As said above an op-amp has a differential input and single ended output. There are different types of transistor amplifiers operated by using an AC signal input. Well, we talked about using an ideal op-amp in the differential amplifier circuit. Notice that the Differential Amp input and output are 180 degrees out of phase and the amplifier gain (Vpp OUT / Vpp IN) is approximately equal to one. If output is taken between the two collectors it is called balanced output or double ended output. Hence the output is free from noise. 
Both of these configurations are explained here. Transistor animation. So, if we apply two signals one at the inverting and another at the non-inverting terminal, an ideal op-amp will amplify the difference between the two applied input signals. The differential amplifier output is proportional to the difference of the input terminals. Half-circuit incremental analysis techniques. Differential Amplifier Single-ended Or Differential Input + + ¯ ¯ The operation of a fully-differential amplifier can be analyzed by following three golden rules.\\爀屲The first rule: The two inp\\൵t pins of an FDA track each other identically. CH 10 Differential Amplifiers 18 Example 10.5 A bipolar differential pair employs a tail current of 0.5 mA and a collector resistance of 1 kΩ. It is used for suppressing the effect of noise at the output. 1. Change Vbe and Vce to make electrons flow.. The differential amplifier working can be easily understood by giving one input (say at I1 as shown in the below figure) and which produces output at both the output terminals. The differential amplifier makes a handy Voltage-Controlled Amplifier (VCA). ... a real op-amp does not work this way. What is an Operational Amplifier(Op-amp) | Working, Pin-Diagram & Applications, Rotary Variable Differential Transformer (RVDT) Working Principle & Applications, Instrumentation Amplifier | Advantages & Applications, Summing Amplifier or Op-amp Adder | Applications, Linear Variable Differential Transformer (LVDT) | Advantages & Applications, 9 Ways to Keep Safe from Electrical Hazards, PIN Diode | Symbol, Characteristics & Applications, What is Square Matrix? The currents entering both terminals of the op-amp are zero since the op-amp is ideal. Difference amplifiers should have no common-mode gain Note that each of these gains are open-circuit voltage gains. Analyze the effects of common-mode input voltage on a simple resistor-based differential amplifier circuit, and then compare it to the circuit having a constant current source. Because is completely steered, - … Single Input Balanced Output- Here, by providing single input we take the output from two separate transistors. This is analogous to the virtual-ground concept of a single-ended op-amp. The signals that have a potential difference between the inputs get amplified. | Examples & Properties, Solar Energy Advantages and Disadvantages. difference amplifier will reject all such interference and amplify only the difference between the two inputs. Working of Differential Amplifier: If input signal is applied to the base of transistor Q1 then there is voltage drop across collector resistor Rc1 so the output of the transistor Q1 is low. Where. eval(ez_write_tag([[250,250],'electricalvoice_com-medrectangle-4','ezslot_12',130,'0','0']));V− = V+. Linear equivalent half-circuits Pt. Dual Input Unbalanced Output- The input is given to both the transistors but the output is taken from a single transistor. 1. This is the behavior expected from a differential amplifier … Decomposing and reconstructing general signals . While if the output is taken between one collector with respect to ground it iscalled unbalanced output or single ended output. To transfer power to wheels while allowing them to rotate at different speeds. An Instrumentation Amplifier (In-Amp) is used for low-frequency signals (≪1 MHz) to provi… Single Input Balanced Output 3. Single Input Unbalanced Output 2. 
In this tutorial we look at differential amplifier basics, the building block that also underlies the instrumentation amplifier (the three op-amp instrumentation amplifier is a separate topic; see, for example, Tutorial MT-061, and note that the simple difference amplifier discussed here is not an in-amp). A differential amplifier provides high gain for the difference between its two inputs and low gain for signals that are common to both inputs. Interference such as mains hum or pickup on long leads usually appears with the same amplitude at both input terminals, so it largely cancels at the output while the wanted differential signal is amplified; used as a front-end stage, a differential pair therefore rejects much of the noise that is common to both inputs. An op-amp is itself a differential amplifier with high input resistance, low output resistance and very high open-loop gain: it responds only to the difference between the voltages at its two terminals, irrespective of their individual values.\n\nThe figure of merit for this behaviour is the common-mode rejection ratio (CMRR), the ratio of the differential gain A_d to the common-mode gain A_cm. An ideal difference amplifier has zero common-mode gain (A_cm = 0) and therefore infinite CMRR; in practice the CMRR should be as high as possible. CMRR is a property of the circuit itself and does not depend on the applied input. Replacing the emitter resistor of a BJT differential pair with a constant-current source, and using active (current-mirror) loads, improves CMRR by reducing the common-mode gain; with active loads, however, the output operating points V_CG1 and V_CG2 become very sensitive to any mismatch between the reference currents (I_ref1 ≠ I_ref2), which is one reason circuit simulation is essential in modern design, since the behaviour of short-channel MOSFETs cannot be predicted accurately by hand analysis.\n\nA BJT or MOSFET differential pair can be used in four configurations, depending on how the inputs are applied and where the output is taken: dual input, balanced output (both inputs driven, output taken between the two collectors); dual input, unbalanced output (output taken from one collector with respect to ground, i.e. single-ended); single input, balanced output; and single input, unbalanced output. When two stages are cascaded, the differential output of the first stage is fed to the differential input of the second stage. The pair is linear only for small differential inputs: the large-signal transfer characteristic shows increasing distortion (rising THD in simulation) once the differential drive is raised much beyond about 10 mV, so the larger the signal, the more distorted the output becomes (material along these lines appears in the Penn State CSE 577 Mixed-Signal Chip Design Lab notes of Insoo Kim and Kyusun Choi).\n\nAt the op-amp level, a simple subtractor or difference amplifier can be constructed with four resistors and one op-amp, as shown in Figure 1. Its analysis uses superposition (deactivate V1 and connect it to ground, analyse, then repeat for V2) together with the virtual-ground concept: for an ideal op-amp the currents entering both input terminals are zero and the differential input voltage is zero. With an ideal op-amp and perfectly matched resistors the CMRR of this circuit is infinite. Such circuits are used wherever a small differential signal must be recovered in the presence of a large common-mode voltage, and the same core appears in voltage-controlled amplifiers (VCAs) and instrumentation front ends. (The original page also carried two historical asides: an animation of a car differential, a gear train with three functions, among them acting as the final reduction gear and letting the wheels rotate at different speeds, and a mention of Vannevar Bush's mechanical differential analyzer, long praised for its educational value.)" ]
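To make the common-mode-rejection arithmetic concrete, here is a small Python sketch. It is an added illustration, and the gain values A_d and A_cm are invented, not taken from the page above:

```
# Minimal numeric illustration of differential vs. common-mode gain and CMRR.
# The gain values below are assumptions for illustration only.
import math

A_d = 2000.0    # differential gain (assumed)
A_cm = 0.05     # common-mode gain (assumed; zero for an ideal difference amplifier)

def diff_amp_output(v1: float, v2: float) -> float:
    """Output = gain times the input difference, plus a small response
    to the common-mode (average) level of the two inputs."""
    v_diff = v1 - v2
    v_cm = (v1 + v2) / 2.0
    return A_d * v_diff + A_cm * v_cm

# A 1 mV wanted signal riding on 1 V of hum common to both inputs:
print(diff_amp_output(1.0005, 0.9995))   # 2.05 V, dominated by A_d * 1 mV = 2 V

# CMRR expressed in decibels:
print(20 * math.log10(A_d / A_cm))       # about 92 dB for these assumed gains
```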
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87726164,"math_prob":0.9469455,"size":24186,"snap":"2021-04-2021-17","text_gpt3_token_len":5080,"char_repetition_ratio":0.21073526,"word_repetition_ratio":0.20518807,"special_character_ratio":0.20615232,"punctuation_ratio":0.11251981,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9625398,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-11T04:08:05Z\",\"WARC-Record-ID\":\"<urn:uuid:f4aa904b-2fb2-44bb-b7a3-03933f949be6>\",\"Content-Length\":\"37930\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b5d4eec-74df-4983-8f7a-73fa44796d59>\",\"WARC-Concurrent-To\":\"<urn:uuid:1eeead83-a17f-4b77-958e-f03211019ffc>\",\"WARC-IP-Address\":\"205.186.175.161\",\"WARC-Target-URI\":\"http://esalexandrias.gr/cambridge-vocabulary-zhzxjhk/differential-amplifier-working-animation-4fa8cc\",\"WARC-Payload-Digest\":\"sha1:GYV7CTAPORZK7KCLP3X45ZDKTVQQEDX6\",\"WARC-Block-Digest\":\"sha1:SWVGVDSXNSYNPRGKSF3O546TQ57ILY5V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038060927.2_warc_CC-MAIN-20210411030031-20210411060031-00309.warc.gz\"}"}
https://blog.finxter.com/dimension-of-numpy-matrix/
[ "# The Dimension of a Numpy Array\n\n5/5 - (1 vote)\n\nNumpy is a popular Python library for data science focusing on arrays, vectors, and matrices. If you work with data, you simply cannot avoid NumPy.\n\nChallenge: How to get the number of dimensions of a NumPy array?\n\nSolution: Use the attribute `array.ndim` to access the number of dimensions of a NumPy array. Note that this an attribute, not a function.\n\nThe one-dimensional array has one dimension:\n\n```import numpy as np\n\na = np.array([1, 2, 3])\nprint(a.ndim)\n# 1\n```\n\nThe two-dimensional array has two dimensions:\n\n```import numpy as np\n\na = np.array([[1, 2, 3],\n[4, 5, 6]])\nprint(a.ndim)\n# 2\n\n```\n\nAnd the three-dimensional array has three dimensions:\n\n```import numpy as np\n\na = np.array([[[1, 2, 3],\n[4, 5, 6]],\n[[0, 0, 0],\n[1, 1, 1]]])\nprint(a.ndim)\n# 3\n\n```\n\nBackground: Before we move on, you may ask: What is the definition of dimensions in an array anyways?\n\nNumpy does not simply store a bunch of data values in a loose fashion (you can use lists for that). Instead, NumPy imposes a strict ordering to the data – it creates fixed-sized axes.\n\nDon’t confuse an axis with a dimension. A point in 3D space, e.g. `[1, 2, 3]` has three dimensions but only a single axis. You can think of an axis as the depth of your nested data. If you want to know the number of axes in NumPy, count the number of opening brackets `'['` until you reach the first numerical value.\n\nRelated Article: NumPy Shape\n\n## NumPy Puzzle Dimensionality\n\nCan you solve the following NumPy puzzle that tests what you’ve learned so far?\n\n```import numpy as np\n\n# salary in (\\$1000) [2015, 2016, 2017]\ndataScientist = [133, 132, 137]\nproductManager = [127, 140, 145]\ndesigner = [118, 118, 127]\nsoftwareEngineer = [129, 131, 137]\n\na = np.array([dataScientist,\nproductManager,\ndesigner,\nsoftwareEngineer])\nprint(a.ndim)```\n\nExercise: What is the output of this puzzle?\n\nYou can solve it in our interactive puzzle app Finxter.com:\n\nIn this puzzle, we use data about the salary of four jobs: data scientists, product managers, designers, and software engineers. We create four lists that store the yearly average salary of the four jobs in thousand dollars for three years 2015, 2016, and 2017.\n\nThen, we merge these four lists into a two-dimensional array (denoted as matrix). You can think about a two-dimensional matrix as a list of lists. A three-dimensional matrix would be a list of lists of lists. You get the idea.\n\nIn the puzzle, each salary list of a single job becomes a row of a two-dimensional matrix. Each row has three columns, one for each year. The puzzle prints the dimension of this matrix. As our matrix is two-dimensional, the solution of this puzzle is 2." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88134915,"math_prob":0.98496103,"size":3458,"snap":"2022-40-2023-06","text_gpt3_token_len":851,"char_repetition_ratio":0.1178344,"word_repetition_ratio":0.024096385,"special_character_ratio":0.25824177,"punctuation_ratio":0.16160221,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99849457,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T10:24:17Z\",\"WARC-Record-ID\":\"<urn:uuid:c75b8944-ff48-48a3-8462-ddd2c13acdac>\",\"Content-Length\":\"95088\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ee5bbc36-5a81-4d52-9a14-e5c993159d54>\",\"WARC-Concurrent-To\":\"<urn:uuid:8f160ab3-d253-4cdc-975d-0aaee7541ecc>\",\"WARC-IP-Address\":\"194.1.147.99\",\"WARC-Target-URI\":\"https://blog.finxter.com/dimension-of-numpy-matrix/\",\"WARC-Payload-Digest\":\"sha1:MLRAM2B2AGYAOBJHKEW6OXLYWXFRSNQU\",\"WARC-Block-Digest\":\"sha1:CWWF7QNFZGLG3QNGMNW6YKUENZ23VMRZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499857.57_warc_CC-MAIN-20230131091122-20230131121122-00597.warc.gz\"}"}
https://scholars.uky.edu/en/publications/a-high-order-finite-difference-discretization-strategy-based-on-e
[ "# A high-order finite difference discretization strategy based on extrapolation for convection diffusion equations\n\nHaiwei Sun, Jun Zhang\n\nResearch output: Contribution to journalArticlepeer-review\n\n46 Scopus citations\n\n## Abstract\n\nWe propose a new high-order finite difference discretization strategy, which is based on the Richardson extrapolation technique and an operator interpolation scheme, to solve convection diffusion equations. For a particular implementation, we solve a fine grid equation and a coarse grid equation by using a fourth-order compact difference scheme. Then we combine the two approximate solutions and use the Richardson extrapolation to compute a sixth-order accuracy coarse grid solution. A sixth-order accuracy fine grid solution is obtained by interpolating the sixth-order coarse grid solution using an operator interpolation scheme. Numerical results are presented to demonstrate the accuracy and efficacy of the proposed finite difference discretization strategy, compared to the sixth-order combined compact difference (CCD) scheme, and the standard fourth-order compact difference (FOC) scheme.\n\nOriginal language English 18-32 15 Numerical Methods for Partial Differential Equations 20 1 https://doi.org/10.1002/num.10075 Published - Jan 2004\n\n## Keywords\n\n• CCD scheme\n• Compact difference scheme\n• Convection diffusion equation\n• Richardson extrapolation\n\n## ASJC Scopus subject areas\n\n• Analysis\n• Numerical Analysis\n• Computational Mathematics\n• Applied Mathematics\n\n## Fingerprint\n\nDive into the research topics of 'A high-order finite difference discretization strategy based on extrapolation for convection diffusion equations'. Together they form a unique fingerprint." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80902296,"math_prob":0.57507336,"size":1325,"snap":"2023-14-2023-23","text_gpt3_token_len":266,"char_repetition_ratio":0.14685844,"word_repetition_ratio":0.0,"special_character_ratio":0.18339622,"punctuation_ratio":0.052083332,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98555833,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-26T18:22:01Z\",\"WARC-Record-ID\":\"<urn:uuid:3e643b01-1d0f-4c52-b76b-1e6c5cc38baa>\",\"Content-Length\":\"51536\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c686b225-6929-4e20-a90e-860e077f3dd2>\",\"WARC-Concurrent-To\":\"<urn:uuid:90893bc2-a173-4179-863b-dc456aec0914>\",\"WARC-IP-Address\":\"54.172.222.125\",\"WARC-Target-URI\":\"https://scholars.uky.edu/en/publications/a-high-order-finite-difference-discretization-strategy-based-on-e\",\"WARC-Payload-Digest\":\"sha1:LL4SLY4XYDPVMWPRS3B5O6G4SDXGBRH7\",\"WARC-Block-Digest\":\"sha1:OICNQQ3ELEAPYHGSAYI4L2VR2LAWCLYX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296946445.46_warc_CC-MAIN-20230326173112-20230326203112-00399.warc.gz\"}"}
https://www.benjamintoll.com/2018/10/07/on-the-euclid-euler-theorem/
[ "# On the Euclid-Euler Theorem\n\nThe other day I was on the Internets and I came across a problem that greatly interested me. The problem itself wasn’t difficult to solve, but the optimal solution was so clever I felt compelled to look into it further. The combination of math, history and code is just too much, and I lost control.\n\nThe problem: Write a function that returns true when a number is a perfect number.\n\nMy approach was a typical brute force; iterate through the range of numbers up to and including the square root of `n`.\n\n``````const is_perfect = num => {\nconst half = num / 2 >> 0;\nlet res = 0;\n\nfor (let i = 1; i <= half; i++) {\nif (num % i === 0) {\nres += i;\n}\n}\n\nreturn res === num;\n};\n\n// The first four perfect numbers.\n[6, 28, 496, 8128]\n.forEach(num =>\nconsole.log(\nis_perfect(num)\n)\n);\n``````\n\nThis isn’t bad, and its complexity analysis is as follows:\n\n• Time complexity: O(√n), i.e., only iterate over the range `1 < i ≤ √num`\n• Space complexity: O(1)\n\nHowever, there is a solution that is faster, and has ancient origins.\n\nFirst, it is necessary to understand that the list of known perfect numbers is ridiculously small, only fifty at the time of this writing. Here are the first eight:\n\n• 6\n• 28\n• 496\n• 8128\n• 33550336\n• 8589869056\n• 137438691328\n\nSecond, there is a special kind of prime number known as a Mersenne prime, and there are also only 50 known Mersenne primes. Coincidence? Who knows!1. Here are the first eight:\n\n• 3\n• 7\n• 31\n• 127\n• 8191\n• 131071\n• 524287\n• 2147483647\n\nNamed after the French polymath and ascetic Marin Mersenne, this is a prime number that is one less than a power of two.\n\nHere is the definition:\n\n2p - 1\n\n(`p` is also a prime number)\n\nBeware, however, that not all numbers of the form `2p − 1` with a prime `p` are prime (i.e., `211 − 1`)!\n\nAs you’ve probably guessed, it’s not coincidence at all. There is a one-to-one correspondence between even perfect numbers and Mersenne primes, which was proved by Euclid around 300 BCE in his famous mathematical treatise the Elements. He showed that if `2p - 1` is a prime number, then `2p - 1 (2p - 1)` is a perfect number. Leonhard Euler, 20 centuries later and on a different continent, proved that the formula applies to all even perfect numbers. This is the Euclid-Euler Theorem.\n\nLet’s look at the relationship between the two:\n\nLegend: P = Prime number MP = Mersenne prime number\n\nP 2p - 1 MP 2p - 1 (2p - 1) Perfect Number\n2 22 - 1 3 22 - 1 (22 - 1) 6\n3 23 - 1 7 23 - 1 (23 - 1) 28\n5 25 - 1 31 25 - 1 (25 - 1) 496\n7 27 - 1 127 27 - 1 (27 - 1) 8128\n13 213 - 1 8191 213 - 1 (213 - 1) 33550336\n17 217 - 1 131071 217 - 1 (217 - 1) 8589869056\n19 219 - 1 524287 219 - 1 (219 - 1) 137438691328\n31 231 - 1 2147483647 231 - 1 (231 - 1) 2305843008139952128\n\nAll the known even perfect numbers end in either 6 or 8!\n\nOk, now that we see the relationship between even perfect numbers and Mersenne primes, let’s rewrite the algorithm to take advantage of this.\n\n``````const euclid_euler = p =>\n(1 << p - 1) * ((1 << p) - 1)\n\nconst is_perfect = num =>\n[2, 3, 5, 7, 13, 17, 19, 31]\n.some(p =>\neuclid_euler(p) === num\n);\n\n[6, 28, 496, 8128]\n.forEach(p =>\nconsole.log(\nis_perfect(p)\n)\n);\n``````\n\nBecause we now know that the eight primes in the list in the code can generate even perfect numbers, we can simply use them to generate a perfect number with which we then compare the value passed into our `is_perfect` function. 
And as a bonus, there’s some nifty bit shifting going down. Weeeeeeeeeeeeeeeeeeeeeeeeee\n\nNow, the complexity analysis is:\n\n• Time complexity: O(log n)\n• Space complexity: O(log n)\n\nIt’s faster, though it will take more memory. I think that’s an acceptable trade-off." ]
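Because JavaScript's `<<` operates on 32-bit integers (so `1 << 31` overflows) and its numbers lose exactness above 2^53, here is a companion sketch in Python, whose integers have arbitrary precision. It is an added illustration, not part of the original post; it checks the Euclid-Euler correspondence by brute-force summing proper divisors:

```
# Verify the Euclid-Euler correspondence with exact integer arithmetic.
def is_perfect(n: int) -> bool:
    """True if n equals the sum of its proper divisors (O(sqrt n) check)."""
    if n < 2:
        return False
    total = 1                      # 1 divides everything
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d + (n // d if d != n // d else 0)
        d += 1
    return total == n

def euclid_euler(p: int) -> int:
    """2^(p-1) * (2^p - 1); a perfect number whenever 2^p - 1 is prime."""
    return (1 << (p - 1)) * ((1 << p) - 1)

for p in [2, 3, 5, 7, 13, 17, 19]:
    n = euclid_euler(p)
    print(p, n, is_perfect(n))     # every line ends in True

# p = 31 is exact here too, although the divisor check would be slow:
print(euclid_euler(31))            # 2305843008139952128
```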
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88929915,"math_prob":0.9882112,"size":3647,"snap":"2021-43-2021-49","text_gpt3_token_len":1111,"char_repetition_ratio":0.13615152,"word_repetition_ratio":0.016806724,"special_character_ratio":0.38086098,"punctuation_ratio":0.13066667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96718,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T11:44:24Z\",\"WARC-Record-ID\":\"<urn:uuid:40e8aab0-7f66-4994-a527-d3d0137f39c6>\",\"Content-Length\":\"10324\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d389c355-dc57-4955-967a-4e69711aee6a>\",\"WARC-Concurrent-To\":\"<urn:uuid:06320a50-cb75-48df-b5f5-d4b50267e24e>\",\"WARC-IP-Address\":\"167.114.97.28\",\"WARC-Target-URI\":\"https://www.benjamintoll.com/2018/10/07/on-the-euclid-euler-theorem/\",\"WARC-Payload-Digest\":\"sha1:TZUG37GTNATGPX7V25XRB4RCG23SK7WY\",\"WARC-Block-Digest\":\"sha1:XPXLFDCPM2YIYR6XD2GHNXQKPN2SII3Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358705.61_warc_CC-MAIN-20211129104236-20211129134236-00030.warc.gz\"}"}
https://books.google.gr/books?id=qgqZ46zfKIkC&pg=PA205&focus=viewport&vq=tenths&dq=editions:HARVARD32044096994090&hl=el&output=html_text
[ "Ĺéęüíĺň óĺëßäáň PDF Çëĺęôń. Ýęäďóç\n .flow { margin: 0; font-size: 1em; } .flow .pagebreak { page-break-before: always; } .flow p { text-align: left; text-indent: 0; margin-top: 0; margin-bottom: 0.5em; } .flow .gstxt_sup { font-size: 75%; position: relative; bottom: 0.5em; } .flow .gstxt_sub { font-size: 75%; position: relative; top: 0.3em; } .flow .gstxt_hlt { background-color: yellow; } .flow div.gtxt_inset_box { padding: 0.5em 0.5em 0.5em 0.5em; margin: 1em 1em 1em 1em; border: 1px black solid; } .flow div.gtxt_footnote { padding: 0 0.5em 0 0.5em; border: 1px black dotted; } .flow .gstxt_underline { text-decoration: underline; } .flow .gtxt_heading { text-align: center; margin-bottom: 1em; font-size: 150%; font-weight: bold; font-variant: small-caps; } .flow .gtxt_h1_heading { text-align: center; font-size: 120%; font-weight: bold; } .flow .gtxt_h2_heading { font-size: 110%; font-weight: bold; } .flow .gtxt_h3_heading { font-weight: bold; } .flow .gtxt_lineated { margin-left: 2em; margin-top: 1em; margin-bottom: 1em; white-space: pre-wrap; } .flow .gtxt_lineated_code { margin-left: 2em; margin-top: 1em; margin-bottom: 1em; white-space: pre-wrap; font-family: monospace; } .flow .gtxt_quote { margin-left: 2em; margin-right: 2em; margin-top: 1em; margin-bottom: 1em; } .flow .gtxt_list_entry { margin-left: 2ex; text-indent: -2ex; } .flow .gimg_graphic { margin-top: 1em; margin-bottom: 1em; } .flow .gimg_table { margin-top: 1em; margin-bottom: 1em; } .flow { font-family: serif; } .flow span,p { font-family: inherit; } .flow-top-div {font-size:83%;} 117.54 (18 108 6.53 95 90 54 54 Or we may reason as follows. I divide 117 by 18, which gives 6, and 9 remainder. 9 whole ones are 90 tenths, and 5 are 95 tenths ; this divided by 18 gives 5, which must be tenths, and 5 remainder. 5 tenths are 50 hundredths, and 4 are 54 hundredths; this divided by 18 gives 3, which must be 3 hundredths. The answer is 6.53 each, as before. If you divide 7.75 barrels of four equally among 13 men, how much will you give each of them ? 7.75 (13 65 .596 + 125 117 RO 78 5 96 It is evident that they cannot have so much as a barrel each. 7.75 = 776 = 1788 Dividing this by 13, I obtain 26 and a small remainder, which is not worth noticing, since it is only a part of a thousandth of a barrel. IoTo .596. Or we may reason thus : 7 whole ones are 70 tenths, and 7 are 77 tenths. This divided by 13 gives 5, which must be tenths, and 12 remainder. 12 tenths are 120 hundredths, and 5 are 125 hundredths. This divided by 13 gives 9, which must be hundredths, and 8 remainder. We may now reduce this to thousandths, by annexing a zero. 8 hundredths are 80 thousandths. This divided by 13 gives 6, which must be thousandths, and 2 remainder. Thousandths will be sufficiently exact in this instan ze, we may therefore omit the remainder. The answer is .596 t of a barrel each, From the above examples it appears, that when only the dividend contains decimals, division is performed as in whole numbers, and in the result as many decimal places must be pointed off from the right, as there are in the dividend. Note. If there be a remainder after all the figures have been brought down, the division may be carried further, by annexing zeros. In estimating the decimal places in the quotient, the zeros must be counted with the decimal places of the dividend. At \\$6.75 a cord, how many cords of wood may be bought for \\$38 ? In this example there are decimals in the divisor only. \\$6.75 is 675 cents or 75 of a dollar. 
The 38 dollars must also be reduced to cents or hundredths. This is done by annexing two zeros. Then as many times as 675 hundredths are contained in 3800 hundredths, so many cords may be bought. 3800 (675 3800 (675 3375 3375 544cords. 5.62 + cords. 425 4250 4050 or 2000 1350 650 The answer is 544 cords, or reducing the fraction to a decimal, by annexing zeros and continuing the division, 5.62 + cords. If 3.423 yards of cloth cost \\$25, what is that per yard ? 3.423 = 3446 = it. The question is, if 3% of a yard cost \\$25, what is that a yard ? According to Art. XXIV., we must multiply 25 by 1000, that is, annex three zeros, and divide by 3423. or 25000 (3423 23961 \\$7100 1039 25000 (3423 23961 7.30 + Ans. 10390 10269 121 The answer is \\$7343 343j, or reducing the fraction to cents. \\$7.30 per yard. If 1.875 yard of cloth is sufficient to make a coat ; how many coats may be made of 47.5 yards ? In this example the divisor is thousandths, and the dividend tenths. If two zeros be annexed to the dividend it will be reduced to thousandths. 47.500 (1.875 47500 (1875 3750 3750 25.33 + 10000 10000 9375 9375 or 25,625 1875", null, "1875 thousandths are contained in 47500 thousandths 250445 times, or reducing the fraction to decimals, 25.33 + times, consequently, 25 coats, and is of another coat may be made from it. From the three last examples we derive the following rule: When the divisor only contains decimals, or when there are more decimal places in the divisor than in the dividend, annex as many zeros to the dividend as the places in the divisor exceed those in the dividend, and then proceed as in whole numbers. The answer will be whole numbers. At \\$2.25 per gallon, how many gallons of wine may be bought for \\$15.375 ? In this example the purpose is to find how many times \\$2.25 is contained in \\$15.375. There are more decimal places in the dividend than in the divisor. The first thing that suggests itself, is to reduce the divisor to the same denomination as the dividend, that is, to mills or thousandths. This is done by annexing a zero, thus, \\$2.250. The question is now, to find how many times 2250 mills are contained in 15375 mills. It is not important whether the poin' be taken away or not. 15375 (2250 13500 6.83+ gals. Ans. 18750 18000 7500 6750 750 Instead of reducing the divisor to mills or thousandths, we may reduce the dividend to cents or hundredths, thus, \\$15.375 are 1537.5 cents. The question is now, to find how many times 225 cents are contained in 1537.5 cents. This is now the same as the case where there were decimals in the dividend only, the divisor being a whole number. 1537.5 (225 1350 6.83+ gals. Ans. as before. 1875 1800 750 075 75 If 3.15 bushels of oats will keep a horse 1 week, how many weeks will 37.5764 bushels keep him ? The question is, to find how many times 3.15 is contained in 37.5764. The dividend contains ten thousandths. The divisor is 31500 ten thousandths. 375764 (31500 31500 11.929 + weeks. Ans. 60764 31500 292640 283500 91400 63000 284000 283500 500 Instead of reducing the divisor to ten-thousandths, we may reduce the dividend to hundredths. 37.5764 are 3757.64 hundredths of a bushel. The decimal .64 in this, is a frac. tion of an hundredth. 3.15 are 315 hundredths. Now the question is, to find how many times 315 hundredths are contained in 3757.64 hundredths. 3757.64 (315 315 11.929 + weeks. Ans. as before. 
607 315 2926 2835 914 630 2840 2835 5 From the two last examples we derive the following rule for division : When the dividend contains more decimal places « ĐńďçăďýěĺíçÓőíÝ÷ĺéá »" ]
[ null, "https://books.google.gr/books/content", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93979025,"math_prob":0.9462983,"size":5289,"snap":"2021-31-2021-39","text_gpt3_token_len":1488,"char_repetition_ratio":0.1551561,"word_repetition_ratio":0.053589486,"special_character_ratio":0.33049726,"punctuation_ratio":0.16059603,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9804965,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-30T11:40:26Z\",\"WARC-Record-ID\":\"<urn:uuid:9f10a3eb-8d30-4736-bb1e-85210bf56493>\",\"Content-Length\":\"39824\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b105c0f5-5cd4-4cec-994a-beddf39099cf>\",\"WARC-Concurrent-To\":\"<urn:uuid:ca120865-b68d-4644-9ce4-a6b10e54177d>\",\"WARC-IP-Address\":\"172.217.2.110\",\"WARC-Target-URI\":\"https://books.google.gr/books?id=qgqZ46zfKIkC&pg=PA205&focus=viewport&vq=tenths&dq=editions:HARVARD32044096994090&hl=el&output=html_text\",\"WARC-Payload-Digest\":\"sha1:TSO3AQGCRCNUVJFZCE4MMBJMOKMQRUCK\",\"WARC-Block-Digest\":\"sha1:57DA42JE7URATUPEHJBFOI3OH3WXLOYZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153966.52_warc_CC-MAIN-20210730091645-20210730121645-00673.warc.gz\"}"}
https://docs.w3cub.com/cpp/numeric/random/mersenne_twister_engine/
[ "/C++\n\n# std::mersenne_twister_engine\n\nDefined in header `<random>`\n```template<\nclass UIntType,\nsize_t w, size_t n, size_t m, size_t r,\nUIntType a, size_t u, UIntType d, size_t s,\nUIntType b, size_t t,\nUIntType c, size_t l, UIntType f\n> class mersenne_twister_engine;```\n(since C++11)\n\n`mersenne_twister_engine` is a random number engine based on Mersenne Twister algorithm. It produces high quality unsigned integer random numbers of type `UIntType` on the interval [0, 2w\n-1].\n\nThe following type aliases define the random number engine with two commonly used parameter sets:\n\nDefined in header `<random>`\nType Definition\n`mt19937`\n\n`std::mersenne_twister_engine<std::uint_fast32_t, 32, 624, 397, 31, 0x9908b0df, 11, 0xffffffff, 7, 0x9d2c5680, 15, 0xefc60000, 18, 1812433253>`\n32-bit Mersenne Twister by Matsumoto and Nishimura, 1998.\n\n`mt19937_64`\n\n`std::mersenne_twister_engine<std::uint_fast64_t, 64, 312, 156, 31, 0xb5026f5aa96619e9, 29, 0x5555555555555555, 17, 0x71d67fffeda60000, 37, 0xfff7eee000000000, 43, 6364136223846793005>`\n64-bit Mersenne Twister by Matsumoto and Nishimura, 2000.\n\n### Member types\n\nMember type Definition\n`result_type` The integral type generated by the engine. Results are undefined if this is not an unsigned integral type.\n\n### Member functions\n\n##### Construction and Seeding\nconstructs the engine\n(public member function)\nsets the current state of the engine\n(public member function)\n##### Generation\nadvances the engine's state and returns the generated value\n(public member function)\nadvances the engine's state by a specified amount\n(public member function)\n##### Characteristics\n[static]\ngets the smallest possible value in the output range\n(public static member function)\n[static]\ngets the largest possible value in the output range\n(public static member function)\n\n### Non-member functions\n\n operator==operator!= compares the internal states of two pseudo-random number engines (function template) operator<> performs stream input and output on pseudo-random number engine (function template)\n\n### Member objects\n\n constexpr size_t word_size [static] the template parameter `w`, determines the range of values generated by the engine. (public static member constant) constexpr size_t state_size [static] the template parameter `n`. The engine state is `n` values of `UIntType` (public static member constant) constexpr size_t shift_size [static] the template parameter `m` (public static member constant) constexpr size_t mask_bits [static] the template parameter `r`, also known as the twist value. (public static member constant) constexpr UIntType xor_mask [static] the template parameter `a`, the conditional xor-mask. 
(public static member constant) constexpr size_t tempering_u [static] the template parameter `u`, first component of the bit-scrambling (tempering) matrix (public static member constant) constexpr UIntType tempering_d [static] the template parameter `d`, second component of the bit-scrambling (tempering) matrix (public static member constant) constexpr size_t tempering_s [static] the template parameter `s`, third component of the bit-scrambling (tempering) matrix (public static member constant) constexpr UIntType tempering_b [static] the template parameter `b`, fourth component of the bit-scrambling (tempering) matrix (public static member constant) constexpr size_t tempering_t [static] the template parameter `t`, fifth component of the bit-scrambling (tempering) matrix (public static member constant) constexpr UIntType tempering_c [static] the template parameter `c`, sixth component of the bit-scrambling (tempering) matrix (public static member constant) constexpr size_t tempering_l [static] the template parameter `l`, seventh component of the bit-scrambling (tempering) matrix (public static member constant) constexpr UIntType initialization_multiplier [static] the template parameter `f` (public static member constant) constexpr UIntType default_seed [static] the constant value `5489u` (public static member constant)\n\nThe 10000th consecutive invocation of a default-contructed `std::mt19937` is required to produce the value `4123659995`.\n\nThe 10000th consecutive invocation of a default-contructed `std::mt19937_64` is required to produce the value `9981545732273789042`." ]
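The parameter list above is complete enough to re-implement the engine. The following Python sketch is an illustration added here (it is not part of the reference page); it plugs in the `mt19937` parameters and reproduces the documented requirement that the 10000th output of a default-seeded engine is 4123659995:

```
# MT19937 built directly from the documented template parameters of std::mt19937:
# (w, n, m, r, a, u, d, s, b, t, c, l, f).
W, N, M, R = 32, 624, 397, 31
A, U, D, S, B, T, C, L, F = (0x9908b0df, 11, 0xffffffff, 7,
                             0x9d2c5680, 15, 0xefc60000, 18, 1812433253)
MASK32 = (1 << W) - 1
UPPER = 1 << R            # most significant w - r bit(s)
LOWER = UPPER - 1         # least significant r bits

class MT19937:
    def __init__(self, seed=5489):               # default_seed = 5489
        self.state = [seed & MASK32]
        for i in range(1, N):                     # seeding recurrence uses f
            prev = self.state[-1]
            self.state.append((F * (prev ^ (prev >> (W - 2))) + i) & MASK32)
        self.index = N

    def _twist(self):
        for i in range(N):
            y = (self.state[i] & UPPER) | (self.state[(i + 1) % N] & LOWER)
            value = self.state[(i + M) % N] ^ (y >> 1)
            if y & 1:
                value ^= A                        # conditional xor-mask
            self.state[i] = value
        self.index = 0

    def __call__(self):
        if self.index >= N:
            self._twist()
        y = self.state[self.index]
        self.index += 1
        y ^= (y >> U) & D                         # tempering
        y ^= (y << S) & B
        y ^= (y << T) & C
        y ^= y >> L
        return y & MASK32

gen = MT19937()
for _ in range(9999):
    gen()
print(gen())   # 4123659995, matching the documented requirement
```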
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5730634,"math_prob":0.94908863,"size":3857,"snap":"2019-35-2019-39","text_gpt3_token_len":899,"char_repetition_ratio":0.19231768,"word_repetition_ratio":0.23583181,"special_character_ratio":0.2307493,"punctuation_ratio":0.08214286,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9742358,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-23T00:23:01Z\",\"WARC-Record-ID\":\"<urn:uuid:0ee21de6-fca7-45ed-baf5-d8a295560a95>\",\"Content-Length\":\"17466\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3589b03b-685a-4da4-a619-7ca102347a92>\",\"WARC-Concurrent-To\":\"<urn:uuid:bdb11be0-20f4-4795-bd8e-9b246b04f700>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://docs.w3cub.com/cpp/numeric/random/mersenne_twister_engine/\",\"WARC-Payload-Digest\":\"sha1:AC4FN7PWVGLBGOFVHUX7DHZ6CPUQF7ZR\",\"WARC-Block-Digest\":\"sha1:5N6ECG44CVVCCE4KCM5P25RZ64UJ4QKX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514575844.94_warc_CC-MAIN-20190923002147-20190923024147-00047.warc.gz\"}"}
https://www.stickmanphysics.com/stickman-physics-home/unit-10-waves/wave-math/
[ "## Wave Math\n\nLearning Targets\n\n• Learn how to calculate the velocity of a wave and pick the right equation.\n• Use the v = x/t and v = (𝜆)(f) to solve for different variables.\n• Understand how wavelength and frequency are inversely related\n• Solve for frequency an period using (T=1/f) or cycles and time.\n\n### Wave Equation Variables\n\nIn this section we will start by reviewing our basic equations when it comes to waves.  Later we will add additional equations and forms of the ones you see here.\n\n Variable MKS Unit Unit Abbreviation Velocity v Meters per second m/s Wavelength 𝜆 meters m Frequency f Hertz Hz Displacement x meters m Time t seconds s Period T seconds s\n\nPeriod is not just time but more specific.  Period it the time that a simple harmonic motion event takes to happen.  Simple harmonic motion is a continuous back and forth motion around an equilibrium position.\n\n### Velocity Equals Distance Divided By Time\n\nWhen a wave is treated like an object traveling a displacement in a time, v = x/t is the equation you will use.  Notice in the animation that the wave travels 3 meters in a total of 2 seconds.  The resulting velocity using v = x/t is 1.5 meters per second.\n\nv = x/t\n\nv = 3/2 = 1.5 m/s\n\n### Wave Velocity Equals Wavelength Times Frequency\n\nWhen you have a wave train, continuous waves each with the same wavelength (𝜆) pass a point.  The frequency (f) of a wave is how many waves pass a point per second.  When you have both wavelength and frequency you can use the equation v = 𝜆f to determine velocity.  Frequency's standard unit is Hertz (Hz).  Hertz equivalent unit is waves per second.  2 Hz means that two waves pass a point in a second.  In our animation, each wave is three meters long.  The frequency is 2 Hz so two waves pass each second.  The resulting velocity using v = 𝜆f is 6 meters per second.\n\nv = 𝜆f\n\nv = (3)(2) = 6 m/s\n\n### Wavelength and Frequency are Inversely Proportional\n\nIn the same medium wave speed will be the same\n\nWavelength and frequency are inversely proportional\n\n• If wavelength is greater frequency will be less\n• If frequency is greater wavelength will be less\n\nObserve how the wave speed stays the same on the top and bottom animation.  The bottom animation has a larger wavelength but lower frequency.  Fewer waves pass by in the same amount of time.\n\n### Wave Period and Frequency\n\nA period (T), with a standard measurement in seconds, is not just time but time it takes to do something that is repetitive.  A period for a wave is the time it takes for a complete wavelength.  You can solve for period from the number of cycles \"or waves\" and time with the formula:\n\n### T = time/cycles\n\nFrequency (f), with a standard measurement in Hertz, is how many repetitions occur per second.  You can solve for frequency from the number of \"wave\" cycles and time with the formula:\n\n### f = cycles/time\n\nFrequency and period are inverse since frequency is cycles per time and period is time per cycles.  Solve for either with the either frequency or period as a given by taking the inverse:\n\n### T = 1/f                f = 1/T\n\nIn the animation you see one \"wave\" cycle taking four seconds and the resulting ways you can solve for period or frequency.\n\n### Example Problems\n\n1. A typical tsunami wave can travel as fast as a jet plane at 194.4 meters per second while in deep waters of the ocean. 
If the ocean were entirely deep water, how long would it take a wave to travel from uninhabited island X to uninhabited island B 1,205,000 meters away?\n\nv = 194.4 m/s\n\nx = 1205000 m\n\nt = ?\n\nv = x/t rearranges to t = x/v\n\nt = 1205000 / 194.4 = 6199 seconds\n\n2. At the shoreline, a tsunami travels around 8.5 m/s. How long would it take a tsunami to travel 500 meters from the shoreline of island B to the middle of the island?\n\nv = 8.5 m/s\n\nt = ?\n\nx = 500 m\n\nv = x/t rearranges to t = x/v\n\nt = 500 / 8.5 = 58.8 seconds\n\n3. On a hot summer day, 15 wave crests pass a surfer floating on a board in 45 seconds.\n\na. What is the frequency of this wave?\n\nCycles = 15 waves\n\nTime = 45 seconds\n\nf = ?\n\nf = cycles/time\n\nf = 15/45 = 0.33 Hz\n\n3. On a hot summer day, 15 wave crests pass a surfer floating on a board in 45 seconds.\n\nb. What is the period of this wave?\n\nCycles = 15 waves\n\nTime = 45 seconds\n\nT = ?\n\nT = time/cycles\n\nT = 45/15 = 3 s\n\nAlternatively you could use T = 1/f since you solved for frequency earlier.\n\nT = 1/f = 1/0.33 = 3 s\n\n4. What is the wavelength of a 94.1 x 106 Hz radio wave traveling through air at 3.0 x108 m/s?\n\n𝜆 = ?\n\nf = 94.1 x 106 Hz\n\nv = 3.0 x108 m/s\n\nv =  𝜆f rearranges to 𝜆 = v/f\n\n𝜆 = 3.0 x108/94.1 x 106 = 3.19 m\n\n5. Velocity, average wavelength, and average frequency of the 7 types of electromagnetic waves in space.\n\n Radio Microwave Infrared Visible Ultraviolet X-Ray Gamma Ray Velocity in Air 3.0 x 108 m/s 3.0 x 108 m/s 3.0 x 108 m/s 3.0 x 108 m/s 3.0 x 108 m/s 3.0 x 108 m/s 3.0 x 108 m/s Wavelength 1 x 103 m 1 x 10-2 m 1 x 10-5 3.0 x 10-7 Frequency 1 x 1015 Hz 1 x 1016 Hz 1 x 1018 Hz 1 x 1020 Hz\n\nSolve for the missing parts in the table\n\nUltraviolet Wavelength:\n\n𝜆 = v/f\n\n𝜆 = 3.0 x 108/(1 x 1016) = 3 x 10-8 m\n\nX-Ray Wavelength:\n\n𝜆 = v/f\n\n𝜆 = 3.0 x 108/(1 x 1018) = 3 x 10-10 m\n\nGama Ray Wavelength:\n\n𝜆 = v/f\n\n𝜆 = 3.0 x 108/(1 x 1020) = 3 x 10-12 m\n\nf = v/𝜆\n\nf = 3.0 x 108/(1 x 103) = 300000 Hz\n\nMicrowave Frequency:\n\nf = v/𝜆\n\nf = 3.0 x 108/(1 x 10-2) = 3 x 1010 Hz\n\nInfrared Frequency:\n\nf = v/𝜆\n\nf = 3.0 x 108/(1 x 10-5) = 3 x 1013  Hz\n\n6. What is the velocity of a 900 Hz sound wave traveling through the air when the wavelength is 0.381 seconds per wave?\n\nv = ?\n\nf = 900 Hz\n\n𝜆 = 0.381 m\n\nv = (𝜆)(f)\n\nv = (0.381)(900) = 342.9 m/s\n\n7. What is the period of a 900Hz sound wave?\n\nT = ?\n\nf = 900 Hz\n\nT = 1/f\n\nT = 1/(900)\n\nT = 0.0011 s\n\nA student records the following speeds for identical sound waves in three different mediums\n\n(Use the table for #8, #9)\n\n Underwater Air Wood Speed of Sound (m/s) 1,484 m/s 343 m/s 3,962 m/s\n\n8. What relationship could the student draw between the speed of sound and the type of medium based on this data?\n\nA. As the medium gets denser, the speed of the sound wave increases\n\nB. As the medium gets denser, the speed of the sound wave decreases\n\nC. As the medium gets less dense, the speed of the sound wave increases\n\nD. The density of the medium does not affect the speed of the sound wave\n\nA. As the medium gets denser, the speed of the sound wave increases\n\nAir is a gas and is the least dense or least elastic.  The speed of sound is the lowest at 343 m/s in air.\n\nWood a solid is the most dense or most elastic.  The speed of sound is the highest 3,962 m/s in wood.\n\n9. 
What is the frequency of a 1.855 m wave of sound traveling underwater at the speed in the table?\n\nf = ?\n\n𝜆 = 1.855 m\n\nv = 1484 m (from table)\n\nv = 𝜆f  rearranges to f = v/𝜆\n\nf = 1484/1.855 = 800 Hz\n\n10. How many times different is frequency if wavelength has increased by three times?\n\nRule of Ones: click here for a reminder of how to do this\n\nf = ?\n\n𝜆 = 3 times\n\nv = same so gets a 1\n\nv = 𝜆f  rearranges to f = v/𝜆\n\nf = v/𝜆\n\nnew: v/𝜆 over old: v/𝜆   placing 1's everywhere there is no change from the original.\n\n(1/3)/(1/1) = 1/3 or 0.333 times the original" ]
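As a quick cross-check of a few of the worked answers above, here is a short Python sketch (an addition for illustration, not part of the original page):

```
# Numeric check of the wave relations v = x/t, v = lam * f, T = 1/f.
def travel_time(distance_m: float, speed_m_s: float) -> float:
    return distance_m / speed_m_s             # from v = x/t

def wavelength(speed_m_s: float, frequency_hz: float) -> float:
    return speed_m_s / frequency_hz           # from v = lam * f

def period(frequency_hz: float) -> float:
    return 1.0 / frequency_hz                 # T = 1/f

print(travel_time(1_205_000, 194.4))   # ~6198.6 s  (problem 1)
print(wavelength(3.0e8, 94.1e6))       # ~3.19 m    (problem 4)
print(0.381 * 900)                     # 342.9 m/s  (problem 6, v = lam * f)
print(period(900))                     # ~0.00111 s (problem 7)
```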
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8919177,"math_prob":0.9990264,"size":5626,"snap":"2020-24-2020-29","text_gpt3_token_len":1449,"char_repetition_ratio":0.14745642,"word_repetition_ratio":0.08745247,"special_character_ratio":0.2618201,"punctuation_ratio":0.085520744,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991241,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-03T09:44:41Z\",\"WARC-Record-ID\":\"<urn:uuid:8369636b-dfb8-4682-b7bd-1947d2002432>\",\"Content-Length\":\"81156\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e629995a-60f8-43a8-b806-27c827a30903>\",\"WARC-Concurrent-To\":\"<urn:uuid:afa23c4c-90b3-4bec-b3e9-9d837695cfa2>\",\"WARC-IP-Address\":\"172.67.186.84\",\"WARC-Target-URI\":\"https://www.stickmanphysics.com/stickman-physics-home/unit-10-waves/wave-math/\",\"WARC-Payload-Digest\":\"sha1:LIPWHWYVQVBFFW6NJFPSKBJQI7ZG3DSG\",\"WARC-Block-Digest\":\"sha1:RIZRQ62AVQWZQ7AVHZ65AYDHYEDM5PC4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347432521.57_warc_CC-MAIN-20200603081823-20200603111823-00226.warc.gz\"}"}
https://mathematica.stackexchange.com/questions/84445/does-dynamicmodule-cure-problem-with-manipulate
[ "# Does DynamicModule cure Problem with Manipulate?\n\nHopefully, this is a final cure for the Problem with Manipulate introduced on A problem with Manipulate and then continued on Continuation of a Problem with Manipulate. Michael E2 suggested on the last page on surrounding the Manipulate command with a DynamicModule. Here is my work.\n\nBefore I show all of my work, it is important to understand that this is all in a a single notebook. The desire is to not have the manipulate activities affect one another and the second desire is to not have the static stuff in the notebook affect the manipulates and vice-versa.\n\nFirst, clearing the Global workspace and adding three variables that have caused problems in the links above.\n\nClear[\"Global*\"];\na = 2;\nb = 10;\ndx = (b - a)/n;\n\n\nNow, here is my first use of Manipulate.\n\nDynamicModule[{a = 0, b = 1, n, f, dx, rightSum},\nf[x_] := x^2;\ndx[n_] := (b - a)/n;\nrightSum[n_] := Total@Table[f[a + i dx[n]] dx[n], {i, 1., n}];\nManipulate[\nShow[Plot[f[x], {x, a, b}, PlotStyle -> Thick,\nAxesLabel -> {\"x\", \"y\"}],\nGraphics[{Table[{Opacity[0.05], EdgeForm[Gray],\nRectangle[{a + i dx[n], 0}, {a + (i + 1) dx[n],\nf[a + (i + 1) dx[n]]}]}, {i, 0, n - 1, 1}],\nText[\"N = \" <> ToString[n] <> \", R = \" <>\nToString[rightSum[n]], {(a + b)/2, f[b]}]}]], {{n, 10}, 10, 50,\n10, Appearance -> \"Labeled\"}]]\n\n\nWhich produces this image.", null, "Now the first test:\n\nIn:= a\n\nOut= 2\n\nIn:= b\n\nOut= 10\n\nIn:= dx\n\nOut= 8/n\n\nNote that because of the dynamic module, the variables in the workspace were not changed, nor did they have an effect on the variables and definitions in the manipulate activity. Now, the second manipulate activity.\n\nDynamicModule[{a = 0, b = 1, n, f, dx, rightSum},\nf[x_] := x;\ndx[n_] := (b - a)/n;\nrightSum[n_] := Total@Table[f[a + i dx[n]] dx[n], {i, 1., n}];\nManipulate[\nShow[Plot[f[x], {x, a, b}, PlotStyle -> Thick,\nAxesLabel -> {\"x\", \"y\"}],\nGraphics[{Table[{Opacity[0.05], EdgeForm[Gray],\nRectangle[{a + i dx[n], 0}, {a + (i + 1) dx[n],\nf[a + (i + 1) dx[n]]}]}, {i, 0, n - 1, 1}],\nText[\"N = \" <> ToString[n] <> \", R = \" <>\nToString[rightSum[n]], {(a + b)/2, f[b]}]}]], {{n, 10}, 10, 50,\n10, Appearance -> \"Labeled\"}]]\n\n\nWhich produces this image.", null, "I cannot demonstrate this here, but I can let everyone know that the function f defined in the second manipulate (a straight line) did not affect the first manipulate. They remained the same. Now for the variables test.\n\nIn:= a\n\nOut= 2\n\nIn:= b\n\nOut= 10\n\nIn:= dx\n\nOut= 8/n\n\nAgain, the manipulate did not affect the variables in the workspace, nor did they affect the variables in the manipulates.\n\nNow, I am trying this based on one of MichaelE2's comments, which occurs at the bottom of Continuation of a Problem with Manipulate.\n\nWhat do folks think? 
Is this the best, easiest, and safest approach for students and teachers who are just beginning to learn Mathematica?\n\n## 1 Answer\n\nHere's how I would do it:\n\nManipulate[\nPlot[f[x], {x, a, b}, PlotStyle -> Thick, AxesLabel -> {\"x\", \"y\"},\nEpilog ->\nDynamic@{Table[{Opacity[0.05], EdgeForm[Gray],\nRectangle[{a + i dx[n], 0}, {a + (i + 1) dx[n],\nf[a + (i + 1) dx[n]]}]}, {i, 0, n - 1, 1}],\nText[\"N = \" <> ToString[n] <> \", R = \" <>\nToString[rightSum[n]], {(a + b)/2, f[b]}]}],\n{{n, 10}, 10, 50, 10, Appearance -> \"Labeled\"},\n{{a, 0}, None}, {{b, 1}, None}, {f, None}, {dx, None}, {rightSum, None},\nInitialization :> (\nClear[f, dx, rightSum];\nf[x_] := x;\ndx[n_] := (b - a)/n;\nrightSum[n_] := Total@Table[f[a + i dx[n]] dx[n], {i, 1., n}];)]\n\n\nPoints of comparison:\n\n• It creates a single DynamicModule via Manipulate, which is being used anyway. It's a simpler structure than the nested ones in the OP's code. Nesting DynamicModule is not bad per se, but it's complicated. (The outside DM contains the code to create an instance of the inside DM, when the outside DM is instantiated by the Front End. You can figure the rest out on your own, or not, as you wish. The subtleties rarely matter, but I prefer keeping things simple.)\n\n• The plotted function never changes, but the rectangle graphics do. I isolated them with Dynamic so that they may be updated independently. This means that moving the Manipulator updates only the rectangles and is more responsive.\n\n• Both the OP's method and this one localize all the variables (x being localized by Plot) so that one instance will be completely independent of another. (The use of Clear is necessary because {f, None} causes f first to be initialized to 0, which disrupts the definition of f; an alternative is to use the declaration form {{f, f}, None} instead of {f, None}, which initializes f to itself, i.e., to its Symbol.)\n\nFurther discussion of localization and Manipulate may be found here:\n\n• This is going to seriously wonderful lesson! Thanks for the great effort. – David Aug 7 '15 at 4:50\n• What code would I enter to time your new version vs. my older version? And I would be probably timing the difference after moving the n slide? – David Aug 7 '15 at 15:31\n• In the body of the Manipulate, I would use SessionTime[] and calculate the difference each update; you'd have have to save the time in a variable, use TrackedSymbols to manage the updating, and display the difference somewhere. But since Manipulate` is mainly about human perception of responsiveness, just try it both ways. It will be noticeable if the plot takes a tenth of a second or more to compute; if it takes more than a few tenths, then it will seem a definite improvement. Graphics generally involves the kernel, the front end, and the GPU -- rather complicated to analyze precisely. – Michael E2 Aug 7 '15 at 15:51" ]
[ null, "https://i.stack.imgur.com/nI9rm.png", null, "https://i.stack.imgur.com/wtGEK.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7840906,"math_prob":0.97114843,"size":2828,"snap":"2021-21-2021-25","text_gpt3_token_len":862,"char_repetition_ratio":0.118626066,"word_repetition_ratio":0.3677686,"special_character_ratio":0.35643566,"punctuation_ratio":0.20187794,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99532175,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-15T04:58:04Z\",\"WARC-Record-ID\":\"<urn:uuid:b4d0f57e-35b6-4a9b-a6fe-6551f470df04>\",\"Content-Length\":\"173682\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:28982b30-23fd-45eb-a833-36fc8c1f7497>\",\"WARC-Concurrent-To\":\"<urn:uuid:351a7456-4f62-42af-a62e-50dfc6e71c7b>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/84445/does-dynamicmodule-cure-problem-with-manipulate\",\"WARC-Payload-Digest\":\"sha1:PNBAKCJBX6G4TQC2YORPPK3TN77DIISD\",\"WARC-Block-Digest\":\"sha1:IKRTQWMEPWHUXZESCQJN77MKR6M3GM47\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487616657.20_warc_CC-MAIN-20210615022806-20210615052806-00606.warc.gz\"}"}
https://physics.stackexchange.com/questions/80668/what-is-the-relation-between-n-2-super-yang-mills-and-its-twist
[ "What is the relation between N=2 super Yang-Mills and its twist\n\nMy question is what is the relation between N=2 super Yang-Mills and its twisted version topological field theory? After twisting N=2 super Yang-Mills, i.e. diagonally embedding $SU(2)'_R$ into $SU(2)_R \\times SU(2)_I$, we get a topological field theory. My question is since N=2 SYM and TQFT are different i.e. one is physical and the other is topological. Why can we use TQFT to calculate partition of N=2 SYM? What are the same for these two different theories?\n\nUpdate: From the second paper of Trimok, the authors claim that SYM under twist are just redefination. How to understand it?\n\n• Not a expert, but they are certainly not the same. Following the original paper ($2.14 \\to 2.18$ ) or this one ($2.3$,$2.4$,$2.30$), the first thing to do is to decompose supersymmetric charges and fields under the representations of the new twisted symmetry group $SU(2)_L \\times SU(2)'_R \\times U(1)_R$, and to write some action. The $N=2$ twisted action is then shown to be equivalent to a topological Yang-Mills action (+ extra-terms) – Trimok Oct 14 '13 at 10:03\n• The main difference is that the observables in the topological field theory are only a subset of the observables in the physical theory. The subset consists of $Q$-closed operators, where $Q$ is the scalar supersymmetry charge present in the twisted theory. – suresh Feb 2 '14 at 1:13" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9320003,"math_prob":0.9927373,"size":589,"snap":"2019-43-2019-47","text_gpt3_token_len":155,"char_repetition_ratio":0.11282051,"word_repetition_ratio":0.0,"special_character_ratio":0.2359932,"punctuation_ratio":0.11904762,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9954477,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T02:42:46Z\",\"WARC-Record-ID\":\"<urn:uuid:d98cc448-d081-45bb-a120-cdee20309ac2>\",\"Content-Length\":\"132086\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d87a1624-c16e-40b4-a788-8cc444f51c1a>\",\"WARC-Concurrent-To\":\"<urn:uuid:7458b4f1-e221-4f1c-a28a-173ec73d25a8>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/80668/what-is-the-relation-between-n-2-super-yang-mills-and-its-twist\",\"WARC-Payload-Digest\":\"sha1:QXPNM7OU73RJ546QMBB4V2WUO5JJ4ESK\",\"WARC-Block-Digest\":\"sha1:TXVHRGBOG4IIKG6VZ5JAIPVEAZTEBHWB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986672548.33_warc_CC-MAIN-20191017022259-20191017045759-00557.warc.gz\"}"}
https://forum.vectorworks.net/index.php?/topic/57766-extract-worksheet-column-values/&tab=comments
[ "Jump to content\nDeveloper Wiki and Function Reference Links Read more... ×\n\n## Recommended Posts\n\nHi,\n\nI'm creating a script that convert RGB to CMYK, the Calc it's OK. But, I need to know, how can I extract the information (Values in numbers) contained in worksheet cell (R, G, B Column) to put in VAR (r,g,b)  and after the calc process, paste the result of Cstr, Mstr, Ystr and Kstr in other Worksheet Cell Like (C, M, Y, K Column).\n\nThe worksheet name is: CMYK,\n\nPROCEDURE RGBToCMYK;\n{ ------------------------------------------------------ }\n\nCONST { ONLY TO TEST}\nr = 127;\ng = 255;\nb = 212;\n{ ------------------------------------------------------ }\n\nVAR\nc, m, y, k : REAL;\n\n{r, g, b : LONGINT}\nCstr, Mstr, Ystr, Kstr : STRING;\n\nBEGIN\nc := 1 - ( r / 255 );\nm := 1 - ( g / 255 );\ny := 1 - ( b / 255 );\nk := 1;\n\nIF ( c < k ) THEN k := c;\nIF ( m < k ) THEN k := m;\nIF ( y < k ) THEN k := y;\nc := ( c - k ) / ( 1 - k );\nm := ( m - k ) / ( 1 - k );\ny := ( y - k ) / ( 1 - k );\n\nCstr := NUM2STR(4, c);\nMstr := NUM2STR(4, m);\nYstr := NUM2STR(4, y);\nKstr := NUM2STR(4, k);\n\nMESSAGE(CONCAT(Cstr, ' ', Mstr, ' ', Ystr, ' ', Kstr));\nEND;\nRUN(RGBToCMYK);", null, "", null, "#### Share this post\n\n##### Link to post\n\nYou are not going to use Cut/Copy and Paste. You are going to have to read the cell, do the calculations and then write the data to the correct cell.\n\nCheck out the GetWSCellValue in the Function Reference for getting values from cell.\n\nUse SetWSCellFormula to store a value into a cell.  The name is a little misleading, but a number is considered a valid \"formula\" in a cell.\n\n•", null, "1\n\n#### Share this post\n\n##### Link to post\n\nHi Pat, How Are you?\n\nThanks, I appreciate your help.\n\nBefore I reading your message, I did it that way below:\n\nPROCEDURE RGBToCMYK;\n{ ------------------------------------------------------ }\n\n{CONST {TESTE}\n{r = 127;\ng = 255;\nb = 212;}\n{ ------------------------------------------------------ }\n\nVAR\nc, m, y, k : REAL;\nvred, vgreen, vblue : LONGINT;\nCstr, Mstr, Ystr, Kstr : STRING;\nhplan : HANDLE;\nnumRows, numColumns : INTEGER;\nvlin : INTEGER;\n\nBEGIN\n\nhplan := GetObject('_Plan_Teste);\nIF (hplan <> NIL) THEN GetWSRowColumnCount(hplan, numRows, numColumns);\n\nvlin := 2;\n\nREPEAT\n\nvred := Str2Num(GetCellStr(hplan, vlin, 4));\nvgreen := Str2Num(GetCellStr(hplan, vlin, 5));\nvblue := Str2Num(GetCellStr(hplan, vlin, 6));\n\nc := 1 - ( vred / 255 );\nm := 1 - ( vgreen / 255 );\ny := 1 - ( vblue / 255 );\nk := 1;\n\nIF ( c < k ) THEN k := c;\nIF ( m < k ) THEN k := m;\nIF ( y < k ) THEN k := y;\nc := ( c - k ) / ( 1 - k );\nm := ( m - k ) / ( 1 - k );\ny := ( y - k ) / ( 1 - k );\n\nCstr := NUM2STR(4, c);\nMstr := NUM2STR(4, m);\nYstr := NUM2STR(4, y);\nKstr := NUM2STR(4, k);\n\nLoadCell(vlin, 7, Cstr);\nLoadCell(vlin, 8, Mstr);\nLoadCell(vlin, 9, Ystr);\nLoadCell(vlin, 10, Kstr);\n\nvlin := vlin+1;\n\nmessage('Linha:', vlin);\n\nUNTIL(vlin>numRows);\n\nEND;\n\nRUN(RGBToCMYK);\n\nAny comments in this code?\n\n## Create an account or sign in to comment\n\nYou need to be a member in order to leave a comment\n\n## Create an account\n\nSign up for a new account in our community. It's easy!\n\nRegister a new account\n\n## Sign in\n\nAlready have an account? Sign in here.\n\nSign In Now\n\n7150 Riverwood Drive, Columbia, Maryland 21046, USA   |   Contact Us:   410-290-5114\n\n© 2018 Vectorworks, Inc. All Rights Reserved. Vectorworks, Inc. is part of the Nemetschek Group.\n\n×\n\n• KBASE" ]
[ null, "https://forum.vectorworks.net/uploads/monthly_2018_08/985162492_CapturadeTela2018-08-13as15_42_40.thumb.png.0b458e551c4c8e89ee8e2d3d747b9373.png", null, "https://forum.vectorworks.net/uploads/monthly_2018_08/1170085831_CapturadeTela2018-08-13as15_41_35.thumb.png.473416cd31ce1e16763d7db1599b2bbf.png", null, "https://forum.vectorworks.net/uploads/reactions/thumbs-up-sign_1f44d.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.527969,"math_prob":0.91857386,"size":289,"snap":"2019-35-2019-39","text_gpt3_token_len":87,"char_repetition_ratio":0.12631579,"word_repetition_ratio":0.10526316,"special_character_ratio":0.28373703,"punctuation_ratio":0.22222222,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9953513,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,5,null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-20T12:54:41Z\",\"WARC-Record-ID\":\"<urn:uuid:3c5028bb-d68b-4f60-bfc5-14acbb80135a>\",\"Content-Length\":\"80277\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a05b06e-0791-4a21-b1fc-b8b946d46ba6>\",\"WARC-Concurrent-To\":\"<urn:uuid:b470997c-e9f5-407c-b99c-5a9028e96bc3>\",\"WARC-IP-Address\":\"54.210.203.144\",\"WARC-Target-URI\":\"https://forum.vectorworks.net/index.php?/topic/57766-extract-worksheet-column-values/&tab=comments\",\"WARC-Payload-Digest\":\"sha1:7N4XYQ7V4EU2NBXLMRNDOH3R7AOHH7XH\",\"WARC-Block-Digest\":\"sha1:WIAOWXRDCBS4TMANRLYCCEN2YEKPJQUX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027315329.55_warc_CC-MAIN-20190820113425-20190820135425-00299.warc.gz\"}"}
https://community.rstudio.com/t/multivariable-logistic-model-risk-differences-and-multicollinearity/67949
[ "", null, "# Multivariable logistic model - risk differences and multicollinearity\n\nI have made a multivariable logistic model in R using the glm-function.\nThe dependent variable is, of course, binary.\nI have 10 independent variables which are dummy, categorical and numerical variables.\n\nI have 2 questions. One regarding the conversion of the results to risk differences and one regarding multicollinearity.\n\n1) The conversion of the results to risk differences\nWith the summary(glm) I get estimates in log(odds) for each variable and I can calculate odds-ratios (OR). But I am interested in reporting risks and ultimately risk differences.\n\nI would like to do that by calculating an average baseline risk and then \"manipulating\" one variable at a time (e.g. smoking 0 in one calculation and 1 in another calculation) to find the risk difference if a patient smokes.\nI calculate the average baseline risk by using the mean observed value of the dummy and categorical values and multiply them with their estimate and by using the median observed valued of the numerical values and multiply them with their estimate.\n\nThis is all great (I think??) and I can do it manually. But is there a faster way than doing it one at a time since I have 10 variables and it would then be a long piece of code?\n\nreference: https://www.bmj.com/content/348/bmj.f7450\n\n2) Multicollinearity\nI have calculated variance inflation factor (VIF) for my independent variables using the vif function from the faraway package.\nShould I plot my independent variables into a linear regression (lm) and do vif(lm) or use my glm model and do vif(glm)?\nAnd can I use VIF at all when I have those 3 different types of variables? I do get an output, so I guess it is okay then?\nA reference would be great!\n\nWow. I hope it makes sense! Unfortunately I am not allowed to share the data with you guys. But I hope it makes a little sense." ]
[ null, "https://community.rstudio.com/uploads/default/original/3X/5/d/5dc960154a129282ba4283771da2fab6fde146fb.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9110827,"math_prob":0.90389174,"size":1829,"snap":"2020-34-2020-40","text_gpt3_token_len":409,"char_repetition_ratio":0.12547944,"word_repetition_ratio":0.032258064,"special_character_ratio":0.21487151,"punctuation_ratio":0.09392265,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9900045,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-22T20:57:06Z\",\"WARC-Record-ID\":\"<urn:uuid:584b996d-96a9-424e-9817-6d0047ece70c>\",\"Content-Length\":\"16664\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f413a1ab-ded6-40c0-8b2c-730c8d93d168>\",\"WARC-Concurrent-To\":\"<urn:uuid:2cd6c49c-bc0d-4e1a-b8e8-aff502c88923>\",\"WARC-IP-Address\":\"167.99.20.217\",\"WARC-Target-URI\":\"https://community.rstudio.com/t/multivariable-logistic-model-risk-differences-and-multicollinearity/67949\",\"WARC-Payload-Digest\":\"sha1:WHZMFLDRNYJO52YLZS4FBEIKLA4KKZBC\",\"WARC-Block-Digest\":\"sha1:ZJBJE62EXRSAUKGXRUENWW2OWVYCCFSH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400206763.24_warc_CC-MAIN-20200922192512-20200922222512-00282.warc.gz\"}"}
http://lib.mexmat.ru/books/6215
[ "Электронная библиотека Попечительского советамеханико-математического факультета Московского государственного университета\n Главная    Ex Libris    Книги    Журналы    Статьи    Серии    Каталог    Wanted    Загрузка    ХудЛит    Справка    Поиск по индексам    Поиск    Форум", null, "Авторизация", null, "Поиск по указателям", null, "", null, "", null, "", null, "", null, "Haug H., Jauho A.-P. — Quantum kinetics in transport and optics of semiconductors", null, "Обсудите книгу на научном форуме", null, "Нашли опечатку?Выделите ее мышкой и нажмите Ctrl+Enter Название: Quantum kinetics in transport and optics of semiconductors Авторы: Haug H., Jauho A.-P. Аннотация: This monograph deals with the quantum kinetics for transport in low-dimensional microstructures and for ultrashort laser pulse spectroscopy. The nonequilibrium Green function theory is described and used for the derivation of the quantum kinetic equations. Numerical methods for the solution of the retarded quantum kinetic equations are discussed and results are presented for quantum high-field transport and for mesoscopic transport phenomena. Quantum beats, polarization decay and non-Markovian behaviour are treated for femtosecond spectroscopy on a microscopic basis. Язык:", null, "Рубрика: Физика/Физика твёрдого тела/Приложения/ Статус предметного указателя: Готов указатель с номерами страниц ed2k: ed2k stats Год издания: 1996 Количество страниц: 315 Добавлена в каталог: 12.09.2005 Операции: Положить на полку | Скопировать ссылку для форума | Скопировать ID", null, "Предметный указатель", null, "-structure      30", null, "-point      201 Absorption spectrum      229 Accumulation region      161 Admittance, linear-response      195 Aharonov — Bohm effect      158 Analytic continuation      56 65—68 91 Analytic continuation, weak localization self-energy      105 Ansatz, Generalized Kadanoff — Baym      88—91 145 262 Ansatz, Generalized Kadanoff — Baym, for two bands      211 Ansatz, Kadanoff — Baym      143 212 Anticommutator rule      36 85 Balance equation approach      120 Barker — Ferry equation      146 Binomial coefficients      22 Bloch equations, optical      202 205 Bloch vector      205 Bloch vector, time-development      248 Boltzmann equation, applied to", null, "-structure      30 Boltzmann equation, connection to quantum kinetics      217 Boltzmann equation, derivation of      3—5 Boltzmann equation, eigenfunction expansion of      8—11 Boltzmann equation, elastic impurities      21 78 114 Boltzmann equation, integral form      149 Boltzmann equation, linearization of      8—11 Boltzmann equation, Monte Carlo solution of      11 Boltzmann equation, numerical integration of      11 Broadening, inhomogeneous      206 231 Catch, related to two-step process      73 chemical potential      167 Coherent back-scattering      104 Coherent potential approximation (CPA)      135 Collision broadening      148—151 Collision frequencies      13 Collision term, as an integral operator      9 Collision term, electron-phonon systems      145 Collision term, gauge-invariant      83 Collision term, Gaussian white noise model      112 Collision term, invariants of      13 Collision term, linearization of      96 Collision term, memory effects      212 Collision term, quantum      73 Collision term, quantum, non — Markovian nature      212 Collision term, resonant-level model      143 Commutator rule      37 Complex-time contour      91 Conductivity electrical, Boltzmann result, electrical, nonlinear      154 Conductivity, 
electrical, Boltzmann result      104 Conductivity, electrical, Boltzmann result, electrical, Drude      108 Conductivity, electrical, Boltzmann result, electrical, linear d.c      102 Conductivity, electrical, Boltzmann result, electrical, linear for elastic impurities      95 98—104 Conservation law, Boltzmann equation      4 Conservation law, energy      217 274 Conservation law, momentum      122 Continuity equation, static      30 Continuity equation, time-dependent      183 Contour, deformation of      66 Correlation function", null, ", approximate for Coulomb island      172 Correlation function", null, ", Gaussian white noise model      111 Correlation function", null, ", relation to observables      87 Correlation function, density-density      294 297 Correlation function, equation-of-motion      72 Correlation function, retarded current-current      98 Correlation function, time-ordered current-current      99 Coulomb island      170 Coulomb potential, advanced      263 266 Coulomb potential, retarded      263 266 Coulomb potential, screened      12 263 Coulomb potential, screened in RPA      266 Coulomb potential, screened, static      12 Coulomb potential, screened, time-dependent      297 Coulomb potential, two-dimensional Fourier transform      12 Coulomb scattering, in mean-field approximation      227 Coupling, adiabatic      65 Current conservation      99 Current standard      179 Current, density      87 Current, interacting model      167 Current, resonant-level model      166 Current, time-averaged      183—184 Current, time-averaged, for resonant-level model      186 Current, time-dependent      182 Current, time-dependent, linear-response      193—195 Current-voltage characteristic      157 Current-voltage characteristic, experimental      168 Current-voltage characteristic, resonant tunneling device      167 Damping constants      205 Damping, Markovian      213 Damping, non — Markovian      213 Density of states, field-dependent, three dimensions      125 Density of states, field-dependent, two dimensions      125 Density of states, in terms of spectral function      41 Density of states, resonant-level model      135 Density of states, time-dependent      129 Density-matrix methods      120 Density-matrix, thermal equilibrium      59 depletion region      161 Detailed balance      14 Detuning      205 Diagrams, crossed      54 105 Diagrams, disconnected      46 65 Diagrams, Feynman      46 64 Diagrams, ladder      99 Diagrams, maximally crossed      105 Diagrams, rain-bow      54 Diamagnetic term      101 Dielectric breakdown      119 Dielectric function, plasmon pole approximation      297 Dielectric function, time-dependent      297 Diffusion, constant      106 Diffusion, Gaussian white noise model      115 Dipole matrix element      201 Disorder averaging      109 Disorder averaging in external fields      133 Distribution function, Bose      8 Distribution function, drifted Maxwellian      32 Distribution function, Fermi      7 Distribution function, local equilibrium      30 Distribution function,, generalized      73 Drift-velocity, quantum      155 Driving term, gauge-invariant      82 Driving term, gauge-invariant, with magnetic field      82 Driving term, generalized      73 Driving term, re-normalization of      73 Driving term, scalar potential gauge      81 Driving term, vector potential gauge      82 Dyson equation for contour-ordered Green function      65 Dyson equation for inter-band Green function      203 213 Dyson equation, complex time      74 
91 Dyson equation, eigenfunction representation of      153 Dyson equation, elastic impurities      100 Dyson equation, electron-phonon systems      50 Dyson equation, Gaussian white noise model      110 Dyson equation, integral form      267 Dyson equation, nonequilibrium      85 91 Dyson equation, resonant-level model      166 Dyson equation, resonant-level model, concentration of impurities      133 Dyson equation, resonant-level model, time-dependent      184 Dyson equation, retarded Coulomb potential      264 270 Eigenfunctions for linearized collision term      9 16 Eigenfunctions, norm      9 Eigenfunctions, scalar product      9 Eigenvalues, density of      14 Eigenvalues, spectrum of      14 Einstein summation convention      79 Electron-hole plasma      261 Electron-phonon interaction      161 Energy re-normalization      43 Energy relaxation      115 envelope function      201 Equation-of-motion for Bloch vector      205 Equation-of-motion for nonequilibrium Green functions      71 Equation-of-motion for reduced interband density matrix      277 Equation-of-motion technique      65 163 Equation-of-motion technique for reduced density matrix      199 Equation-of-motion technique, Coulomb island      170 Equation-of-motion technique, elastic impurity problem      52 Equation-of-motion technique, resonant-level model      47 131 Excitonic effects      225 Fano model      47 Fermi’s Golden Rule      4 20 Fermi’s golden rule, applied to polaron scattering      214 Feynman diagrams for elastic impurity problem      53 Feynman diagrams, electron-phonon system      49 Feynman diagrams, two-particle Green function      51 Feynman path integral method      120 Field operator      37 Field operator for quantum-well structure      201 Field operator in terms of Bloch waves      200 Fluctuating energy-levels      195—196 Fluctuation-dissipation theorem      96 171 Fluctuation-dissipation theorem, derivation of      41—44 Fock space      35 Fock space, completeness relation of      36 Fokker — Planck equation      11 Four-wave mixing      229 Four-wave mixing, calculation of      255—258 Four-wave mixing, experiments      258—260 Four-wave mixing, time-resolved      204 232 235 238 253 Fr", null, "hlich coupling      257 Fractional quantum Hall effect      49 Franz — Keldysh effect      125 129 Fredholm iteration      223 Free induction decay      206 Functional differentiation      65 Gauge invariance      79—85 92 Gauge invariance, transformation of functions      79 Gauge transformation      80 Gauge, scalar potential      81 Gauge, vector potential      81 Gaussian white noise model, GWN,      109 Gell — Mann and Low theorem      45 Gradient expansion      75—77 79 Gradient expansion, derivation of      76—77 Gradient operator      76 Green function, advanced, definition of      40 63 Green function, advanced, elastic impurities      54 Green function, advanced, for two bands      211 Green function, advanced, non-interacting contacts      182 Green function, antitime-ordered, definition of      63 Green function, causal      62 99 Green function, causal, definition of      38 63 Green function, contour-ordered      59—68 162 Green function, contour-ordered, definition of      62 Green function, contour-ordered, perturbation expansion for      64 Green function, correlation function, definition of      63 Green function, equilibrium theory of      35—56 Green function, finite temperature, definition of      39 Green function, free-particle      46—47 Green function, free-particle, 
differential equation for      39 Green function, gauge-invariant      120 123 Green function, greater, definition of      40 63 Green function, higher-order      65 Green function, in scalar potential gauge      81 Green function, in vector potential gauge      81 Green function, inter-band, definition of      202 Green function, lesser, definition of      40 63 Green function, lesser, non-interacting contacts      182 Green function, perturbation expansion of      44 46 Green function, phonon, equilibrium      68 145 Green function, retarded, analytic structure      123 Green function, retarded, approximate for Coulomb island      171 Green function, retarded, definition of      40 63 Green function, retarded, disorder averaged for RLM      133 Green function, retarded, elastic impurities      54 Green function, retarded, field-dependent, scalar potential gauge      120—123 Green function, retarded, field-dependent, vector potential gauge      110 Green function, retarded, for two bands      211 Green function, retarded, gauge-invariant      85 Green function, retarded, Gaussian white noise model      109—111 Green function, retarded, non-interacting contacts      182 Green function, retarded, time-dependent      125 Green function, retarded, time-dependent resonant-level model      185 Green function, time-ordered, definition of      38 63 Green function, two-particle      50 95 99 Green function, two-particle, causal impurity-averaged      100 Green function, two-particle, factorization of      99 Green function, two-particle, integral equation for      101 H-function      6 H-Theorem      5—8 Hamilton — Jacobi equation      288 Hartree — Fock approximation      173 227 High temperature superconductors      49 Hilbert space      9 35 Impurity averaging      51—55 99 Impurity averaging, nonequilibrium      133 Impurity averaging, prescription for      51 133 Initial correlations      66 Interaction, carrier - classical light field      200 201 Interaction, electron - classical light field, in rotating-wave approximation      202 Interaction, electron-phonon, in second quantization      37 Interaction, Fr", null, "hlich      210 Interaction, one-body, in second quantization      37 Interaction, two-particle, in second quantization      37 Interband polarization      204 Intra-collisional field-effect      115 131 Intra-collisional field-effect,      148 149 151 Intraband polarization function      263 inversion      205 Irreversibility, Boltzmann equation      73 Irreversibility, quantum kinetic equation      73 Jellium model      261 Joule heating      142 Kadanoff — Baym equation      72 81 91 111 Kadanoff — Baym equation for inter-band Green function      209 Kadanoff — Baym equation, derivation of      71—73 Keldysh equation      89 91 111 166 171 Keldysh equation for resonant-level model      185 Keldysh equation, derivation of      73—74 Kinetic energy, in terms of a Green function      39 Kinetic equations, numerical solution, stochastic      20 Kinetics, free-carrier      202 Kinetics, linear polarization      218 Kinetics, linearized Coulomb      12—20 Kinetics, Markovian      235 Kinetics, optical inter-band      204 Kinetics, quantum      235 Kinetics, quantum, coupled electron-phonon      241 Kinetics, quantum, exciton      227 Kinetics, quantum, interband      199 Kondo phenomenon      49 162 172 Kondo temperature      174 Kubo formula      95 Landau damping      270 295 Landauer formula      159 165 Landauer formula for time-averaged current      184 Langreth theorem      
66 Level-shift function      167 Level-width function      163 167 Level-width function, energy-dependence      164 Level-width function, field-dependence      133 Level-width function, generalized      182 Level-width, elastic      167 Level-width, inelastic      167 Level-width, total      167 Limit, Boltzmann      75—78 Limit, completed collisions of      149 217 232 273 Limit, Markov      223 Limit, wide-band (WBL)      183 185 Limit, zero-field limit      122" ]
[ null, "http://lib.mexmat.ru/z.gif", null, "http://lib.mexmat.ru/z.gif", null, "http://lib.mexmat.ru/z.gif", null, "http://lib.mexmat.ru/z.gif", null, "http://lib.mexmat.ru/z.gif", null, "http://lib.mexmat.ru/img/main/8.jpg", null, "http://lib.mexmat.ru/z.gif", null, "http://lib.mexmat.ru/covers/default.gif", null, "http://dxdy.ru/80x15.png", null, "http://lib.mexmat.ru/img/ico/en.png", null, "http://lib.mexmat.ru/z.gif", null, "http://lib.mexmat.ru/math_tex/d65bd1d3de542f0aafa2edb5d5bdd6c882.gif", null, "http://lib.mexmat.ru/math_tex/b2af456716f3117a91da7afe7075804182.gif", null, "http://lib.mexmat.ru/math_tex/d65bd1d3de542f0aafa2edb5d5bdd6c882.gif", null, "http://lib.mexmat.ru/math_tex/92068436ade1816e99d3c0a813a59e4c82.gif", null, "http://lib.mexmat.ru/math_tex/92068436ade1816e99d3c0a813a59e4c82.gif", null, "http://lib.mexmat.ru/math_tex/92068436ade1816e99d3c0a813a59e4c82.gif", null, "http://lib.mexmat.ru/math_tex/c6ee8694b0c71a67de2c9cdbc91e36bd82.gif", null, "http://lib.mexmat.ru/math_tex/c6ee8694b0c71a67de2c9cdbc91e36bd82.gif", null, "http://lib.mexmat.ru/z.gif", null, "http://lib.mexmat.ru/z.gif", null, "http://lib.mexmat.ru/z.gif", null, "http://lib.mexmat.ru/z.gif", null, "http://d5.cf.bb.a0.top.mail.ru/counter", null, "http://mc.yandex.ru/watch/1659949", null, "http://lib.mexmat.ru/img/ico/libmexmat_80x15.png", null, "http://lib.mexmat.ru/img/ico/valid_html401.png", null, "http://lib.mexmat.ru/img/ico/valid_css.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6469804,"math_prob":0.9317109,"size":11706,"snap":"2022-27-2022-33","text_gpt3_token_len":3167,"char_repetition_ratio":0.18791659,"word_repetition_ratio":0.026710099,"special_character_ratio":0.2580728,"punctuation_ratio":0.14248839,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97664195,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,8,null,null,null,8,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T14:14:06Z\",\"WARC-Record-ID\":\"<urn:uuid:f3590226-5cc7-4954-adaa-3535cfede8e6>\",\"Content-Length\":\"106589\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5636e9b7-cca5-4c70-9472-b479f50ed37f>\",\"WARC-Concurrent-To\":\"<urn:uuid:4976546c-f120-4659-b6e0-3b26730a3c92>\",\"WARC-IP-Address\":\"85.89.126.67\",\"WARC-Target-URI\":\"http://lib.mexmat.ru/books/6215\",\"WARC-Payload-Digest\":\"sha1:RNV24WE5SDZNEGAGVP3FGGXW42XJCHYV\",\"WARC-Block-Digest\":\"sha1:2HDPPKP5X42X3X7GRXRO2HAHQXG7B4VF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103269583.13_warc_CC-MAIN-20220626131545-20220626161545-00605.warc.gz\"}"}
https://1st-in-babies.com/how-long-is-129-minutes-in-hours-new/
[ "How Long Is 129 Minutes In Hours? New\n\n# How Long Is 129 Minutes In Hours? New\n\nLet’s discuss the question: how long is 129 minutes in hours. We summarize all relevant answers in section Q&A of website 1st-in-babies.com in category: Blog MMO. See more related questions in the comments below.\n\n## How many hours is 1 hour 45 minutes?\n\nTo convert time to just hours:\n\n45 minutes is 45 minutes * (1 hour / 60 minutes) = 45/60 hours = 0.75 hours. 45 seconds is 45 seconds * (1 hour / 3600 seconds) = 45/3600 hours = 0.0125 hours. Adding them all together we have 2 hours + 0.75 hours + 0.0125 hours = 2.7625 hours.\n\n## What is 120 minutes expressed in hours?\n\nFor example, 120 minutes equals 2 hours because 120/60=2.\n\n### Time Conversion (Hours | Minutes | Seconds) Math – Tutway\n\nTime Conversion (Hours | Minutes | Seconds) Math – Tutway\nTime Conversion (Hours | Minutes | Seconds) Math – Tutway\n\n## How long is 1 minute exactly?\n\nThe minute is a unit of time usually equal to 160 (the first sexagesimal fraction) of an hour, or 60 seconds.\n\n## How many hours is 15 minutes?\n\nTherefore, 15 minutes = 15/60 hour = ¼ hour.\n\n## What decimal of 10 hours is a minute?\n\nMinutes to Decimal Hours Calculator\nMinutes Decimal Hours\n9 0.150\n10 0.167\n11 0.183\n12 0.200\n\n## What is 1 hour and 20 minutes as a decimal?\n\nCommon Time to Hours, Minutes, and Seconds Decimal Values\nTime Hours Minutes\n01:00:00 1 hr 60 min\n01:10:00 1.167 hrs 70 min\n01:20:00 1.333 hrs 80 min\n01:30:00 1.5 hrs 90 min\n\n## How many hours are in one hour?\n\nAn hour (symbol: h; also abbreviated hr) is a unit of time conventionally reckoned as 1⁄24 of a day and scientifically reckoned as 3,599–3,601 seconds, depending on conditions. There are 60 minutes in an hour, and 24 hours in a day.\n\n## How long is a second?\n\nSince 1967, the second has been defined as exactly “the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom” (at a temperature of 0 K and at mean sea level).\n\n## Why does a minute have 60 seconds?\n\nTHE DIVISION of the hour into 60 minutes and of the minute into 60 seconds comes from the Babylonians who used a sexagesimal (counting in 60s) system for mathematics and astronomy. They derived their number system from the Sumerians who were using it as early as 3500 BC.\n\n### Who decides how long a second is? – John Kitching\n\nWho decides how long a second is? – John Kitching\nWho decides how long a second is? – John Kitching\n\n## How do you say 45 minutes in English?\n\nAt minute 45, we say it’s “quarter to” the next hour. For example, at 5:45, we say it’s “quarter to six” (or 15 minutes before 6:00). At minute 30, we say it’s “half past”. So at 9:30, we would say it’s “half past nine” (or half an hour after 9:00).\n\n## What time is 5p in military time?\n\nMilitary Time / 24 Hour Time Conversion Chart\nRegular Time Military Time\n3:00 p.m. 1500 or 1500 hours\n4:00 p.m. 1600 or 1600 hours\n5:00 p.m. 1700 or 1700 hours\n6:00 p.m. 
1800 or 1800 hours\n\n## What is .08 of an hour?\n\nDecimal Hours-to-Minutes Conversion Chart\nMinutes Tenths of an Hour Hundredths of an Hour\n48 .8 .80\n49 .8 .82\n50 .8 .84\n51 .8 .85\n\n## What time will it be 3/4 hour?\n\nThree fourths of an hour is 45 minutes.\n\n## What is the decimal for 45 minutes?\n\nMinute Conversion Chart\nMinutes Decimal Conversion\n45 0.75\n46 0.77\n47 0.78\n48 0.80\n\n## How do you write 45 minutes as a decimal?\n\nAnswer: 45 minutes in decimal is 0.75.\n\n## What is 30 minutes in decimals?\n\nDecimal Hours\n\nUsing our 7:30 example above, we intuitively know that 30 minutes is ‘half an hour. ‘ In decimal format one-half is expressed as ‘. 5’. So in decimal format this is expressed as 7.5 hours (7 and a half hours).\n\n## How long is a day?\n\nDay Length\n\nOn Earth, a solar day is around 24 hours. However, Earth’s orbit is elliptical, meaning it’s not a perfect circle. That means some solar days on Earth are a few minutes longer than 24 hours and some are a few minutes shorter.\n\n### How to Calculate Hours Worked in Excel\n\nHow to Calculate Hours Worked in Excel\nHow to Calculate Hours Worked in Excel\n\n## How many seconds are in a year?\n\none year would equal 365 times 24 times 60 times 60 seconds…or 31,536,000 seconds!\n\n## How did we get 24 hours in a day?\n\nOur 24-hour day comes from the ancient Egyptians who divided day-time into 10 hours they measured with devices such as shadow clocks, and added a twilight hour at the beginning and another one at the end of the day-time, says Lomb. “Night-time was divided in 12 hours, based on the observations of stars.\n\nRelated searches\n\n• how many hours in 130 minutes\n• how long is 1 000 minutes in hours\n• how long is 120 minutes in hours\n• how long is minutes\n• how long is 90 minutes in hours\n• how long is 30 minutes in hours\n• how long is 129 minutes in hours and minutes\n• how long is 129 hours\n• how long is 130 minutes in hours\n• how long is 129 days\n• 2 15 hours in minutes\n• how long is 119 minutes\n• how long is 128 minutes in hours\n\n## Information related to the topic how long is 129 minutes in hours\n\nHere are the search results of the thread how long is 129 minutes in hours from Bing. You can read more if you want.\n\nYou have just come across an article on the topic how long is 129 minutes in hours. If you found this article useful, please share it. Thank you very much." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9058865,"math_prob":0.89113235,"size":4960,"snap":"2022-27-2022-33","text_gpt3_token_len":1469,"char_repetition_ratio":0.19229217,"word_repetition_ratio":0.07348243,"special_character_ratio":0.3310484,"punctuation_ratio":0.12589286,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97989225,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-13T18:41:26Z\",\"WARC-Record-ID\":\"<urn:uuid:7d2bf9ab-0303-46e9-a61f-d25045f389d7>\",\"Content-Length\":\"100702\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a4e3e7f3-8e93-4a4f-ae4b-0bed8ca712c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:057fa58e-17ed-4ed7-b12b-4f498266fe5e>\",\"WARC-IP-Address\":\"104.21.65.32\",\"WARC-Target-URI\":\"https://1st-in-babies.com/how-long-is-129-minutes-in-hours-new/\",\"WARC-Payload-Digest\":\"sha1:TPMCWR6KRPAHB53GU47VHBX22JJJPIKT\",\"WARC-Block-Digest\":\"sha1:NN5BPO66CRD6XKMHARZ7FCDYJ3S4W5F4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571982.99_warc_CC-MAIN-20220813172349-20220813202349-00120.warc.gz\"}"}
http://www.biomath.nyu.edu/rag/tutorial_rna_matrix
[ "### Program Description\n\nTo understand and use this program, there are a few simple concepts, each with a brief explanation, listed below that might be useful.\n\n### ct file:\n\nThe ct file contains data about the base pairs in a RNA secondary structure. The following is the structure for TRNA12 that was generated by mFold and the ct file that is associated with that structure:\n\n### RNA Secondary Structure", null, "### ct File", null, "In order for the program to run properly, the ct file that you download from the web (Zuker's mFold) or that you generate must appear exactly as the file above. The first line contains the total number of nucleotides in the structure, the energy associated with the fold, and the name of the file. The significance of the columns is as follows:\n\nColumn 1: List of the nucleotides from 1 to N (N = total number of nucleotides).\nColumn 2: List of the type of nucleotide (A, G, U, or C).\nColumn 3: List of the nucleotides increasing from zero to N - 1.\nColumn 4: List of the nucleotides from 2 to N and continuing the column with zeros to fill any empty spaces.\nColumn 5: List of the nucleotides that are paired to those listed in increasing order. Any zeros in the fifth column indicate that the particular nucleotide is unpaired.\nColumn 6: A repeat of column 1.\n\nClick on the following ct file if you would like to view the sample file displayed above as it actually appears in file form: TRNA12\n\n### Laplacian Matrix:\n\nThe Laplacian matrix (L) is a mathematical representation of the connectivity between the vertices in a RNA graph or topology. It's represented by diagonal (D) and adjacency (A) components. The diagonal matrix shows the number of connections each vertex makes with the other vertices along the diagonal of the matrix. The adjacency matrix specifies to which vertices each vertex is connected. In a graphical representation of a RNA structure, any labeling is fine. For example, the tRNA (NDB: TRNA12) structure shown above has the following tree graph structure with the vertices randomly labeled:", null, "The corresponding D and A values are as follows:\n\n### A\n\n 4 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1\n\n 0 1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0\n\nEach column and row in the above matrices correspond to the graph's vertices. By looking at the diagonal of the diagonal matrix, you can see that vertex 1 is connected to 4 other vertices, vertex 2 is connected to 1 other vertex, and so on. The corresponding adjacency matrix specifies these connections explicitly.\nThe Laplacian matrix is defined from D and A as follows:\n\n### L = D - A\n\nFor the example above, we have as follows:\n\n### L\n\n 4 -1 -1 -1 -1 -1 1 0 0 0 -1 0 1 0 0 -1 0 0 1 0 -1 0 0 0 1\n\nThe Laplacian matrix is a square matrix. Each column and row in the above matrix represents the vertices in the tree graph.\nA value of -1 in the matrix element i,j indicates that vertices i and j are connected. For example, by looking across at row 1, it is apparent that vertex 1 is connected to vertex 2, 3, 4, and 5. By symmetry, the same information is provided by looking down column 1.\nZeros indicate no connectivity between corresponding vertices. For example, vertex 2 is not connected to vertex 4.\nThe diagonals of the Laplacian matrix are always positive integers. They represent the number of connections that the particular vertex makes. 
For example, vertex 1 is connected to 4 other vertices.\n\n### Laplacian Eigenvalues:\n\nThe Laplacian matrix is used to calculate its corresponding eigenvalues. The total number of vertices in a RNA secondary structure equals the total number of eigenvalues. The eigenvalue that helps to describe the RNA topology is the second eigenvalue. The second eigenvalue describes the compactness of a graph. The range of possible values for the second eigenvalue begin at zero and increase from there. The more compact a graph is, the higher its corresponding second eigenvalue." ]
[ null, "http://www.biomath.nyu.edu/rag/tutorial_images/trna12_pic.jpg", null, "http://www.biomath.nyu.edu/rag/tutorial_images/trna12_ct.jpg", null, "http://www.biomath.nyu.edu/rag/tutorial_images/topo5_graph.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.912382,"math_prob":0.95745695,"size":3822,"snap":"2023-40-2023-50","text_gpt3_token_len":858,"char_repetition_ratio":0.13724463,"word_repetition_ratio":0.025679758,"special_character_ratio":0.21219257,"punctuation_ratio":0.106206894,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9889908,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,8,null,8,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T19:25:02Z\",\"WARC-Record-ID\":\"<urn:uuid:f8934a21-a8f8-45f8-9f8a-b5a68e55e45f>\",\"Content-Length\":\"25958\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c22448b1-a205-4701-9cf0-3fb7401fc980>\",\"WARC-Concurrent-To\":\"<urn:uuid:cb5697a9-2ec4-47e8-a4c7-ac816022a499>\",\"WARC-IP-Address\":\"128.122.250.111\",\"WARC-Target-URI\":\"http://www.biomath.nyu.edu/rag/tutorial_rna_matrix\",\"WARC-Payload-Digest\":\"sha1:JQK2BN4J4JPHFBJKQXJXCM5GOA7ZA6YG\",\"WARC-Block-Digest\":\"sha1:XFLUVRNU7IAZ74FQWZXYVB4LIXKCBZYN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100304.52_warc_CC-MAIN-20231201183432-20231201213432-00094.warc.gz\"}"}
https://answers.everydaycalculation.com/compare-fractions/45-60-and-84-20
[ "Solutions by everydaycalculation.com\n\n## Compare 45/60 and 84/20\n\n1st number: 45/60, 2nd number: 4 4/20\n\n45/60 is smaller than 84/20\n\n#### Steps for comparing fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 60 and 20 is 60\n2. For the 1st fraction, since 60 × 1 = 60,\n45/60 = 45 × 1/60 × 1 = 45/60\n3. Likewise, for the 2nd fraction, since 20 × 3 = 60,\n84/20 = 84 × 3/20 × 3 = 252/60\n4. Since the denominators are now the same, the fraction with the bigger numerator is the greater fraction\n5. 45/60 < 252/60 or 45/60 < 84/20\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85033846,"math_prob":0.9924552,"size":715,"snap":"2020-10-2020-16","text_gpt3_token_len":257,"char_repetition_ratio":0.1673699,"word_repetition_ratio":0.0,"special_character_ratio":0.43356642,"punctuation_ratio":0.08387097,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99337304,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-22T20:04:49Z\",\"WARC-Record-ID\":\"<urn:uuid:c6ed4193-12b8-4423-baed-c039f71f1d3b>\",\"Content-Length\":\"6999\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e8695f65-1003-4291-a2d7-81881699b1ad>\",\"WARC-Concurrent-To\":\"<urn:uuid:cb0a4408-0434-4c9c-af73-e7f5667a7b0c>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/compare-fractions/45-60-and-84-20\",\"WARC-Payload-Digest\":\"sha1:FQRJRFR46AA3GDOW3K2ZIMQUWVXXOIL4\",\"WARC-Block-Digest\":\"sha1:VXVJ4QNRII6SOKLAPRNS3W24OXYJ4STQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145713.39_warc_CC-MAIN-20200222180557-20200222210557-00223.warc.gz\"}"}
https://www.audiolabs-erlangen.de/resources/MIR/FMP/C1/C1S3_FrequencyPitch.html
[ "", null, "# Frequency and Pitch\n\nFollowing Section 1.3.2 of [Müller, FMP, Springer 2015], we cover in this notebook the relation between frequency and pitch.\n\n## Sinusoids¶\n\nA sound wave can be visually represented by a waveform. If the points of high and low air pressure repeat in an alternating and regular fashion, the resulting waveform is called periodic. In this case, the period of the wave is defined as the time required to complete a cycle. The frequency, measured in Hertz (Hz), is the reciprocal of the period. The simplest type of periodic waveform is a sinusoid, which is completely specified by its frequency, its amplitude (the peak deviation of the sinusoid from its mean), and its phase (determining where in its cycle the sinusoid is at time zero). The following figure shows a sinusoid with frequency $4~\\mathrm{Hz}$.", null, "## Audible Frequency Range¶\n\nThe higher the frequency of a sinusoidal wave, the higher it sounds. The audible frequency range for humans is between about $20~\\mathrm{Hz}$ and $20000~\\mathrm{Hz}$ ($20~\\mathrm{kHz}$). Other species have different hearing ranges. For example, the top end of a dog's hearing range is about $45~\\mathrm{kHz}$, a cat's is $64~\\mathrm{kHz}$, while bats can even detect frequencies beyond $100~\\mathrm{kHz}$. This is why one can use a dog whistle, which emits ultrasonic sound beyond the human hearing capability, to train and to command animals without disturbing nearby people.\n\nIn the following experiment, we generate a chirp signal which frequency increases by a factor of two (one octave) every second. Starting with $80~\\mathrm{Hz}$, the frequency raises to $20480~\\mathrm{Hz}$ over a total duration of $8$ seconds.\n\nIn :\nimport IPython.display as ipd\nimport numpy as np\nimport sys\n\nsys.path.append('..')\nimport libfmp.c1\n\nFs = 44100\ndur = 1\nfreq_start = 80 * 2**np.arange(8)\nfor f in freq_start:\nif f==freq_start:\nchirp, t = libfmp.c1.generate_chirp_exp_octave(freq_start=f, dur=dur, Fs=Fs, amp=1)\nelse:\nchirp_oct, t = libfmp.c1.generate_chirp_exp_octave(freq_start=f, dur=dur, Fs=Fs, amp=1)\nchirp = np.concatenate((chirp, chirp_oct))\n\nipd.display(ipd.Audio(chirp, rate=Fs))" ]
[ null, "https://www.audiolabs-erlangen.de/resources/MIR/FMP/data/C1_nav.png", null, "https://www.audiolabs-erlangen.de/resources/MIR/FMP/data/C1/FMP_C1_F19.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86145544,"math_prob":0.99394697,"size":2062,"snap":"2023-40-2023-50","text_gpt3_token_len":546,"char_repetition_ratio":0.12099125,"word_repetition_ratio":0.013422819,"special_character_ratio":0.2584869,"punctuation_ratio":0.1495098,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99561757,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T00:52:33Z\",\"WARC-Record-ID\":\"<urn:uuid:4367909b-afc5-476b-87f9-d65eea2f55dc>\",\"Content-Length\":\"1048871\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:40857568-4230-45d2-ba9d-c950fec4bbdc>\",\"WARC-Concurrent-To\":\"<urn:uuid:43f32957-7cbc-4aff-a3f1-a85552466f69>\",\"WARC-IP-Address\":\"131.188.16.208\",\"WARC-Target-URI\":\"https://www.audiolabs-erlangen.de/resources/MIR/FMP/C1/C1S3_FrequencyPitch.html\",\"WARC-Payload-Digest\":\"sha1:C3ZIQ5CEDQAT7FSWSZBSNWNVKV2SX2OM\",\"WARC-Block-Digest\":\"sha1:7ERZFKYWTDAJOTOHK7OTB6AHKWAQNL2B\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510942.97_warc_CC-MAIN-20231002001302-20231002031302-00576.warc.gz\"}"}
https://buildingmathematicians.wordpress.com/2016/07/07/is-that-even-a-problem/
[ "# Is That Even A Problem???\n\nAsk others what problem solving means with regard to mathematics.  Many will explain that a problem is when we put a real-world context to the mathematics being learned in class… others might explain a process of how we solve a problem (look at what you know and determine what you want to know, or some other set of strategies or a creative acronym that we have likely seen in school).  Sadly, much of what most others would point to as a problem is not really even a problem at all.\n\nI think we all need to consider the real notion of what it means to problem solve…\n\nGeorge Polya shared this:", null, "…Thus, to have a problem means: to search consciously for some action appropriate to attain a clearly conceived, but not immediately attainable, aim. To solve a problem means to find such action. … Some degree of difficulty belongs to the very notion of a problem: where there is no difficulty, there is no problem.\n\nWhat Polya is suggesting here is that if we show students how to do something, and then ask the students to practice that same thing in a context IT ISN’T A PROBLEM!!!  A problem in mathematics is like the DOING MATHEMATICS Tasks listed below.  Take a look at all 4 sections for a minute:", null, "My thoughts are simple… If a student can relatively quickly determine a course of action about how to get an answer, it isn’t a problem!   Even if the calculations are difficult or take a while.  The vast majority of what we call problems are actually just contextual practice of things we already knew.  Doing word problems IS NOT the same as problem solving!\n\nOn the other hand, if a student has to use REASONING skills, they are thinking, actively trying to figure something out, then and only then are they problem solving!!!\n\nMarian Small has written a short article on her thoughts about problem solving:  Marian Small – Problem Solving\n\nWhat are her main messages here?  Does this or Polya’s quote change your definition of a problem?\n\nI’ve already written about What does Day 1 Look Like where I shared the importance of starting with problems.  So, why should we start with problem solving?  
If we don’t start there, we aren’t likely ever doing any problem solving at all!\n\nI also think that many hearing this might assume that this means we just hand students problems that they wouldn’t be successful with…  Ask everyone to attempt something that they wouldn’t know how to do.\n\nLet’s look at an example:\n\nTake a look at these two grade 8 expectations from Ontario curriculum:\n\n• determine the Pythagorean relationship, through investigation using a variety of tools (e.g., dynamic geometry software; paper and scissors; geoboard) and strategies;\n• solve problems involving right triangles geometrically\n\nMany teachers look at these expectations, think to themselves, Pythagorean Theorem… I can teach that… followed by explicit teaching on a Smartboard (showing a video, modeling how the pythagorean theorem works, followed by some examples for the class to work on together…), then finally some problems that students have to answer in a textbook like this:", null, "This progression neither shows how we know students learn, nor does it even get students to be able to do what is asked!\n\nLet’s take a step back and start to notice what the curriculum is saying more clearly:\n\n• determine the Pythagorean relationship, through investigation using a variety of tools (e.g., dynamic geometry software; paper and scissors; geoboard) and strategies;\n• solve problems involving right triangles geometrically, using the Pythagorean relationship;\n\nWhen we pull apart the verbs and the tools/strategies from the content, we start to notice what the curriculum is telling us our students should actually be doing that day!  Remember… these expectations are what OUR STUDENTS should be doing… NOT US!!!\n\nAbove I have colored the verbs blue (These are the actions our students should be doing that day) and tools/strategies orange (specifically HOW our students should be accomplishing the verb).\n\nNow let’s take a quick look at how this might actually play out in the classroom if we are starting with problems like our curriculum states:\n\nLet’s start with the first expectation.  Our curriculum often includes the statement “determine through investigation,” yet it is overlooked far too often!  Students need to determine this themselves!  We need to assess students’ ability to “determine the Pythagorean relationship through investigation.”\n\nThis doesn’t mean we tell students what the theorem is, nor does it mean that we expect everyone to reinvent the theorem… so what does it mean???\n\nWell, it could mean lots of things.  Here is one possible suggestion…", null, "Show the figure on the left.  Ask students the area of the blue square in the middle.  Possibly give a geoboard or paper and scissors for this task.  How might students come up with the area?  How many different ways might students accomplish this?\n\nShare different approaches as a group.  (By the way, some calculate the whole shape and subtract the 4 corner triangles.  Others calculate the 4 black rectangles and divide by 2 then add 1 for the middle… others rearrange the shapes to make it make sense).\n\nNow explore how the two pictures are similar / different.  What do you notice between the two pictures?\n\nIf the curriculum tells us to “determine through investigation” that is exactly the experience our students need to conceptualize the concept.  It is also what we need to assess.  This can’t be put on a test easily though, it needs to be observed!\n\nThat second expectation is quite interesting to me too.  
At the beginning of this post we started talking about what is and isn't a problem.  If our students have now understood what the Pythagorean Theorem is, how can we now make things problematic?\n\nShowing a bunch of diagrams with missing hypotenuses or legs isn't really problematic!\n\nHowever, something like Dan Meyer's Taco Cart would be!  If you haven't seen the lesson, take a look:", null, "Oh… and by the way… problem solving isn't always about answering a question… really, at its heart, problem solving is about making sense of things that we didn't understand before… reasoning through things… noticing things we didn't notice before… making conjectures and testing them out…  Problem solving is the process of LEARNING and DOING MATHEMATICS!\n\nSo I want to leave you with a problem for you to think about: what does this have to do with the Pythagorean Theorem?", null, "## 5 thoughts on “Is That Even A Problem???”\n\n1.", null, "mikeollerton says:\n\n#MTBoS I only have one strategy to support the differentiated states students are inevitably in. This is to offer starting-point tasks which are accessible and 'easily' extendable. Once I have posed a problem, such as: using the numbers 1, 2, 3, 4, the + and the = sign, make two 2-digit numbers and find all the different possible totals. Over this and ensuing lessons I will have many conversations with individual, pairs or small groups of students. Because they will be in mixed-attainment groups (no 'tracking' or 'setting by ability' in my classrooms) I shall have many in-the-moment, unplanned-for though not unexpected interactions; it is through these that I support students, all of whom will, collectively, be in a differentiated state.\n\nIf anyone wishes to see the subsequent extension ideas for my “1,2,3,4,+,=“ problem just email me: [email protected]\n\n1.", null, "Mark Chubb says:\n\nI one hundred percent agree: there's no tracking or setting by ability. Offering different in-the-moment feedback or extensions is so much healthier than pre-assuming how much everyone could possibly learn." ]
[ null, "https://buildingmathematicians.files.wordpress.com/2016/07/polya.jpg", null, "https://buildingmathematicians.files.wordpress.com/2016/07/math_task_analysis_guide-level-of-cognitive-demand1.png", null, "https://buildingmathematicians.files.wordpress.com/2016/07/pythagoreanladder1.gif", null, "https://buildingmathematicians.files.wordpress.com/2016/07/maxresdefault-1.jpg", null, "https://buildingmathematicians.files.wordpress.com/2016/07/dan-meyers-taco-cart-three-act-math-task.png", null, "https://pbs.twimg.com/media/CROU7hrUYAAufsn.jpg:large", null, "https://2.gravatar.com/avatar/e379ce308cc9acf778234f7bb6dd39f5", null, "https://0.gravatar.com/avatar/cb3329b4b5bf36cf2dafb59e0b0e04f1", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9447548,"math_prob":0.8770306,"size":7525,"snap":"2019-13-2019-22","text_gpt3_token_len":1574,"char_repetition_ratio":0.122058235,"word_repetition_ratio":0.041533545,"special_character_ratio":0.2013289,"punctuation_ratio":0.09979065,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9721357,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,5,null,5,null,5,null,5,null,5,null,5,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-25T05:19:42Z\",\"WARC-Record-ID\":\"<urn:uuid:d44bedd4-d934-4133-b271-d4d2845d6e0d>\",\"Content-Length\":\"89721\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d8d1094f-7911-48f9-9f56-4cd4cd35436c>\",\"WARC-Concurrent-To\":\"<urn:uuid:c355c060-b304-4e8e-af0b-26ab039721e2>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://buildingmathematicians.wordpress.com/2016/07/07/is-that-even-a-problem/\",\"WARC-Payload-Digest\":\"sha1:7XMS4HS25OXMNZVV42U3GRMV4B6XZ6XF\",\"WARC-Block-Digest\":\"sha1:6BYFLKVNPOEQTKLBPV4A53JKSFAWACFR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257889.72_warc_CC-MAIN-20190525044705-20190525070705-00035.warc.gz\"}"}
https://export.arxiv.org/abs/2001.05921
[ "cs.DM\n\n# Title: Generalized Fitch Graphs III: Symmetrized Fitch maps and Sets of Symmetric Binary Relations that are explained by Unrooted Edge-labeled Trees\n\nAbstract: Binary relations derived from labeled rooted trees play an import role in mathematical biology as formal models of evolutionary relationships. The (symmetrized) Fitch relation formalizes xenology as the pairs of genes separated by at least one horizontal transfer event. As a natural generalization, we consider symmetrized Fitch maps, that is, symmetric maps $\\varepsilon$ that assign a subset of colors to each pair of vertices in $X$ and that can be explained by a tree $T$ with edges that are labeled with subsets of colors in the sense that the color $m$ appears in $\\varepsilon(x,y)$ if and only if $m$ appears in a label along the unique path between $x$ and $y$ in $T$. We first give an alternative characterization of the monochromatic case and then give a characterization of symmetrized Fitch maps in terms of compatibility of a certain set of quartets. We show that recognition of symmetrized Fitch maps is NP-complete. In the restricted case where $|\\varepsilon(x,y)|\\leq 1$ the problem becomes polynomial, since such maps coincide with class of monochromatic Fitch maps whose graph-representations form precisely the class of complete multi-partite graphs.\n Subjects: Discrete Mathematics (cs.DM); Computational Complexity (cs.CC); Data Structures and Algorithms (cs.DS); Combinatorics (math.CO) MSC classes: 68R01, 05C05, 92D15 Cite as: arXiv:2001.05921 [cs.DM] (or arXiv:2001.05921v2 [cs.DM] for this version)\n\n## Submission history\n\nFrom: Marc Hellmuth [view email]\n[v1] Thu, 16 Jan 2020 16:14:36 GMT (51kb,D)\n[v2] Wed, 20 Jan 2021 08:51:04 GMT (87kb,D)\n\nLink back to: arXiv, form interface, contact." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8690247,"math_prob":0.95836437,"size":1756,"snap":"2021-21-2021-25","text_gpt3_token_len":437,"char_repetition_ratio":0.11472603,"word_repetition_ratio":0.0,"special_character_ratio":0.23177676,"punctuation_ratio":0.111455105,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9756105,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-08T00:00:04Z\",\"WARC-Record-ID\":\"<urn:uuid:fc498415-18c4-456e-a885-40d3388164eb>\",\"Content-Length\":\"18329\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1c45eeff-814a-4106-be77-6b0fafc9037d>\",\"WARC-Concurrent-To\":\"<urn:uuid:56a6d40b-782e-406d-908e-b2aa2a54ce45>\",\"WARC-IP-Address\":\"128.84.21.203\",\"WARC-Target-URI\":\"https://export.arxiv.org/abs/2001.05921\",\"WARC-Payload-Digest\":\"sha1:4CJP2TUBEEZLMAXMVGDFFRL5AXSTYZI6\",\"WARC-Block-Digest\":\"sha1:FUZTLEBND55BGHBMBPKAKN37UWJPIVLK\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988828.76_warc_CC-MAIN-20210507211141-20210508001141-00112.warc.gz\"}"}
https://mpatacchiola.github.io/blog/2019/11/18/bayes-stairs.html
[ "", null, "Some of you have probably recognized the above image. That’s “Relativity” one of the most famous lithograph of the artist Maurits Cornelis Escher. In the Relativity world there is the intersection of multiple orthogonal sources of gravity. There are various stairways, and each stairway can be used to move between two different gravity sources. Another interesting Escher’s lithograph is “Ascending and Descending”, where two lines of anonymous men appear over an impossible staircase, one line ascending while the other descends, in a sort of ritual. Escher was inspired by the work of the psychiatrist Lionel Penrose, the father of the physicist Roger Penrose, who was working on the model of an impossible staircase, today known as Penrose stairs. The illusion was published by Lionel and Roger in 1958 as a scientific article “Impossible objects: A special type of visual illusion”. Escher discovered Penroses’ work in 1959 and, fascinated by the illusion, released the lithograph one year later. Now, here comes the mind twist. Roger Penrose (the son of Lionel) was introduced to Escher’s work in 1954, during a conference, and impressed by the artist drawings decided to realise one by his own. After multiple attempts Roger came out with the Penrose triangle. Rogers showed the triangle to his father Lionel who produced some variants, including the Penrose stairs. So the work of Escher could not be possible without the contribution of the Penroses, and viceversa. I find this loop between Escher and the Penroses fascinating, especially because connected with a loopy staircase.\n\n## The Bayes stairs\n\nTaking inspiration from the Penrose stairs I coined the term Bayes stairs to describe the different levels of inference one can manage in a Bayesian hierarchical model. Like the Penrose stairs the Bayes stairs bend backward in a recursive twist (more on this in the last step). The Bayes stairs have five steps:\n\n1. Maximum Likelihood (ML)\n2. Maximum a Posteriori (MAP)\n3. Maximum Likelihood type II (ML-II)\n4. Maximum a Posteriori type II (MAP-II)\n5. Fully Bayesian treatment\n\nI will show you how climbing the stairs allows us to get closer and closer to a fully Bayesian treatment, just to find ourselves at the very first step once we reach the top of the staircase. Having in mind the Bayes stairs is a useful trick to remember all the options available in Bayesian analysis. Moreover, the stairs can be used during an empirical approach as guidelines. Roughly speaking, the staircase is built such that “more integrals one performs, the more Bayesian one becomes” (Murphy, 2012). Reaching higher steps requires a major effort but also gives a larger payoff.\n\nThis post is mainly based on Chapters 5.5 and 5.6 of the book “Machine learning: a probabilistic perspective” by Murphy, and Chapters 5.5 and 5.6 of the book “Deep Learning” by Goodfellow et al. (yes, there is a loop even in the books chapters). Additional resources are reported at the end of the post.", null, "Prerequisites for a good understanding of the post are basic concepts of probability theory and statistics. 
For instance, I assume you are familiar with random variables, marginal and conditional distributions, Bayes’s rule, likelihood, and Gaussian distributions.\n\n## Step 1: Maximum Likelihood (ML)\n\nLet’s consider a simple probabilistic graphical model such as $$\\theta \\rightarrow x$$, where $$\\theta$$ are parameters representing our model (it can be a scalar or a vector) and $$\\mathcal{D} = \\{x_n\\}_{n=1}^{N}$$ is a dataset of $$N$$ samples drawn independently from an unknown data-generating distribution $$p(\\mathcal{D})$$. Our goal is to approximate $$p(\\mathcal{D})$$ through $$q(\\mathcal{D} \\vert \\theta)$$, meaning that we aim at minimizing the Kullback-Leibler divergence (KL) between the two distributions. Abusing the notation for the sake of clarity we write:\n\n$D_{\\mathrm{KL}}(p(\\mathcal{D}) \\| q(\\mathcal{D} \\vert \\theta))=\\int_{-\\infty}^{\\infty} p(\\mathcal{D}) \\log \\left(\\frac{p(\\mathcal{D})}{q(\\mathcal{D} \\vert \\theta)}\\right) d \\mathcal{D}.$\n\nWe can notice that $$p(\\mathcal{D})$$ is not function of the model parameters $$\\theta$$, meaning that we only need to consider $$q(\\mathcal{D} \\vert \\theta)$$ to minimize $$D_{\\mathrm{KL}}$$, and since $$q(\\mathcal{D} \\vert \\theta)$$ appears in the denominator it means we have to maximize it.\n\nWhat we came up with, following this reasoning, is a procedure known as Maximum Likelihood (ML) estimation. The objective of ML is to find\n\n$\\hat{\\theta}_{\\text{ML}} = \\text{argmax}_{\\theta} \\ q(\\mathcal{D} \\vert \\theta).$\n\nThis is generally done by taking the derivative of the log likelihood $$\\log q(\\mathcal{D} \\vert \\theta)$$ with respect to $$\\theta$$ and then maximizing via gradient ascent. Note that in ML we are doing a point estimate of the parameters $$\\theta$$.\n\nThe ML estimator can be easily adapted to deal with datasets $$\\mathcal{D} = \\{(x_n,y_n)\\}_{n=1}^{N}$$ of input-output pairs, and in fact this is the standard assumption in supervised learning. In this case we assume a model $$\\theta \\rightarrow x \\rightarrow y$$ where $$x$$ predicts $$y$$.\n\nExample: let’s suppose that our dataset $$\\mathcal{D} = \\{(x_n,y_n)\\}_{n=1}^{N}$$ is composed by real valued scalars for both input $$x$$ and output $$y$$. We are in the supervised regression case. We model the output $$y$$ as a Gaussian random variable meaning that $$y \\sim \\mathcal{N}(\\mu, \\sigma^2)$$, this is a reasonable assumption most of the times since it takes into account uncertainty in the data. Our model can be represented as a function $$\\mathcal{F}_{\\theta}(x) \\rightarrow \\hat{y}$$ mapping the inputs to some approximated output $$\\hat{y}$$. For instance, if we want to fit a line on the data (linear regression) then $$\\mathcal{F}_{\\theta}(x) = mx + b$$ with parameters $$\\theta = [m,b]$$ representing the slope and bias of a straight line. In this particular case the Gaussin on $$y$$ has mean given by $$\\mu = \\mathcal{F}_{\\theta}(x) = \\hat{y}$$, leading to the following objective\n\n$\\hat{\\theta}_{\\text{ML}} = \\text{argmax}_{\\theta} \\ \\prod_n p(y_n \\vert x_n, \\theta) = \\text{argmax}_{\\theta} \\ \\prod_n \\frac{1}{\\sqrt{2 \\sigma^2 \\pi}} \\ \\exp \\Bigg(-\\frac{(y_n - \\mathcal{F}_{\\theta}(x_n) )^2}{2 \\sigma^{2}} \\Bigg),$\n\nwhere I replaced the likelihood $$p(y \\vert x, \\theta)$$ with the Gaussian distribution $$\\mathcal{N}(y \\vert \\mu=\\mathcal{F}_{\\theta}(x), \\sigma^2)$$. In ML we are maximizing this expression, meaning that the normalization constant can be removed. 
Moreover, taking the logarithm (that cancels out with the exponential and turns products into sums) and considering the variance to be constant and equal to $$\\sigma^2=1$$, we end up with\n\n$\\hat{\\theta}_{\\text{ML}} = \\text{argmax}_{\\theta} \\ -\\frac{1}{N} \\sum_n (y_n - \\mathcal{F}_{\\theta}(x_n))^2.$\n\nThis is equivalent to the negative Mean Squared Error (MSE) between the data and the model prediction. Maximizing this quantity correspond to minimizing the MSE. Note that, if instead of a linear regressor we were using a neural network to model $$\\mathcal{F}_{\\theta}(x)$$ the results would not change, since backpropagation over the MSE loss has exactly the same meaning.\n\nProblems with ML: for long time ML has been the workhorse of machine learning. For instance, we are implicitly using ML every time we are doing supervised training of a neural network. However, ML is prone to overfitting. A huge model, with millions of parameters, will fit the data almost perfectly, resulting in a very high log likelihood. This means that ML tends to favor complex models against simple ones. A way to attenuate this issue is to introduce a regularization term, this is what we are going to do in the next step…\n\n## Step 2: Maximum a Posteriori (MAP)\n\nIf our goal is to get the point estimate of an unknown real valued quantity, what we can do is to compute the mean, median or mode of the posterior. Those statistics can be good descriptors of the unknown value. Among those quantities, the posterior mode is the most popular choice because finding the mode reduces to finding the maximum, a common optimization problem. This particular choice is called Maximum a Posteriori (MAP).\n\nIn order to move forward to the second step, we need two prerequisites. (i) We have to define a suitable prior distribution parameterized by $$\\theta$$ (we are still considering the case $$\\theta \\rightarrow \\mathcal{D}$$). A smart choice is a conjugate prior from the exponential family, that will help us with the second prerequisite. (ii) We need an analytical expression for the posterior distribution $$p(\\boldsymbol{\\theta} \\vert \\mathcal{D})$$ (since we will need to estimate its derivative). When the two prerequisites are satisfied, MAP consists in maximizing the posterior with respect to the parameters, where the posterior is obtained via Bayes’ rule:\n\n$p(\\boldsymbol{\\theta} | \\mathcal{D}) = \\frac{p(\\mathcal{D} | \\boldsymbol{\\theta}) p(\\boldsymbol{\\theta})}{p(\\mathcal{D})}.$\n\nSince we are interested in finding the mode of the posterior, and the mode does not change before and after normalization, we can get rid of the denominator and just consider the numerator:\n\n$p(\\boldsymbol{\\theta} | \\mathcal{D}) \\propto p(\\mathcal{D} | \\boldsymbol{\\theta}) p(\\boldsymbol{\\theta}).$\n\nThat’s the final expression we were looking for. Our goal now is to find $$\\hat{\\theta}$$ the value of $$\\theta$$ that maximize the objective, here defined as:\n\n$\\hat{\\theta}_{\\text{MAP}} = \\text{argmax}_{\\theta} \\ p(\\mathcal{D} | \\boldsymbol{\\theta}) p(\\boldsymbol{\\theta}).$\n\nIf you compare the MAP objective and the objective used in ML (see step 1), you will notice that here we are doing the same thing but we are weighting the likelihood by the prior over parameters $$p(\\boldsymbol{\\theta})$$. The effect of adding the prior is to shift the probability mass towards regions of the parameter space that are preferred a priori. 
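To make the difference between the two objectives concrete, here is a minimal NumPy sketch that fits the slope and bias of a line by gradient ascent, first on the ML objective and then on the MAP objective with a zero-mean Gaussian prior (which, as derived below, amounts to an $$l_2$$ penalty). This snippet is not from the original post: the data, the learning rate and the prior precision $$\lambda$$ are invented purely for illustration.

```python
# Minimal illustrative sketch (not from the original post): ML vs MAP point
# estimates for linear regression, obtained by gradient ascent on the
# objectives above. Data, learning rate and prior precision are made up.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 20)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.3, size=x.shape)  # noisy straight line

def fit(lam, lr=0.1, steps=2000):
    m, b = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        err = y - (m * x + b)                            # residuals
        grad_m = (2.0 / n) * np.sum(err * x) - lam * m   # d(objective)/dm
        grad_b = (2.0 / n) * np.sum(err) - lam * b       # d(objective)/db
        m, b = m + lr * grad_m, b + lr * grad_b          # gradient ascent step
    return m, b

print("ML  estimate (lambda = 0):", fit(lam=0.0))
print("MAP estimate (lambda = 1):", fit(lam=1.0))  # Gaussian prior shrinks m, b toward 0
```

With $$\lambda=0$$ the two estimates coincide; increasing $$\lambda$$ pulls the parameters toward zero, which is exactly the regularization effect discussed next.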
Note that if we trivially assume a uniform prior over $$p(\boldsymbol{\theta})$$ we fall back to step 1, since the constant term would cancel out and only the likelihood would really matter. Another advantage of using the prior is its regularization effect, which was missing in step 1. The prior is a constraint over the parameters, and in gradient-based learning it forces the weights to be updated in specific directions.

Example: let’s continue the example given in the previous section, and let’s suppose that we want to impose a prior distribution over the parameters $$\theta$$ of our generic model $$\mathcal{F}_{\theta}$$. It would be silly to use a uniform prior, because this is equivalent to ML (previous step). Our likelihood is a Gaussian distribution, therefore we can be smart and use another Gaussian as prior. This will ensure conjugacy if $$\mathcal{F}_{\theta}$$ is in an appropriate form. A good choice is a Gaussian with zero mean and variance $$\tau^2$$

$p(\theta) = \prod_i \mathcal{N}(\theta_i \vert 0, \tau^2) = \prod_i \frac{1}{\sqrt{2 \tau^2 \pi}} \ \exp \Bigg(-\frac{(\theta_i - 0 )^2}{2 \tau^{2}} \Bigg),$

where we have assumed that $$\theta$$ is a vector. If we now apply the same considerations as in the previous step (removing the normalization constant, taking the logarithm) we end up with

$\log p(\theta) \propto -\frac{\lambda}{2} ||\theta||_{2}^{2},$

where $$\lambda$$ is just the precision (reciprocal of the variance), and we have taken the norm of the vector $$\theta$$ (since the logarithm turns the product into a sum over its components). Now recall that MAP consists in finding the mode of the posterior, where the posterior is given by the likelihood times the prior

$\hat{\theta}_{\text{MAP}} = \text{argmax}_{\theta} \log p(y \vert x, \theta) p(\theta) = \text{argmax}_{\theta} \ -\frac{1}{N} \sum_n \underbrace{(y_n - \mathcal{F}_{\theta}(x_n))^2}_{\text{data-fit}} - \underbrace{\frac{\lambda}{2} ||\theta||_{2}^{2}}_{\text{penalty}}.$

The above expression can be decomposed into a data-fit (likelihood) and a penalty (prior) term. The penalty in this case is just an $$l_2$$ regularizer, also known as Tikhonov regularization or ridge regression. In general, when a model is overfitting there are many large positive and negative values in $$\theta$$. By using a Gaussian prior with zero mean, the regularization term encourages those parameters to stay small (close to zero), resulting in a smoother approximation. Note that if we assign a constant value to $$\lambda$$ we can tune the strength of the regularization term, making the underlying Gaussian distribution more or less peaked around the mean.

Changing the Gaussian prior into a Laplace prior (or double-exponential prior), we are instead imposing an $$l_1$$ penalty, also known as Lasso regularization. This prior is sharply peaked at the origin and it strongly moves the parameters toward zero.

Problems with MAP: we said that MAP corresponds to a point estimate of the posterior mode. It turns out that the mode is usually quite untypical of the distribution (unlike the mean or median, which take the volume of the distribution into account), and this is why MAP can still give a rather poor approximation of the parameters. Another problem with MAP is that the prior can sometimes be an arbitrary choice left to the designer, and this may influence the posterior in the low-data regime.
As we said, using a uniform prior is pointless since we revert to an ML estimate.\n\n## Step 3: Maximum Likelihood type II (ML-II)\n\nStep 3 is known as an ML-II procedure since it corresponds to maximum likelihood but at a higher level. To be more precise, we are now considering a probabilistic graphical model with two levels $$\\eta \\rightarrow \\theta \\rightarrow \\mathcal{D}$$, where $$\\eta$$ are latent variables representing a hyperprior over the prior. This may sound confusing but if you think about that, it makes sense to model the parameter of our prior as another probability distribution instead of relying on a point estimate. This is an example of a hierarchical (or multi-level) Bayesian model. In order to access the third step, it is necessary to compute the posterior on multiple levels of latent variables.\n\nIn literature ML-II is also known as Empirical Bayes, since the hyperprior distribution is estimated from the data, meaning that the parameters at the highest level of the hierarchy are set to their most likely values, instead of being integrated out. Note that this assumption violates the principle that the prior should be chosen in advance, independently of the data, as needed in a rigorous Bayesian treatment. This is the price to pay at step 3, for having a computationally cheap approximation.\n\nThe main difference with respect to the previous steps is that here the objective is to focus on the set of parameters $$\\eta$$ used to model the hyperprior. The strategy we use to achieve this objective is to analytically marginalize out $$\\boldsymbol{\\theta}$$, leaving us with the simpler problem of just computing $$p(\\eta \\vert \\mathcal{D})$$:\n\n$\\hat{\\boldsymbol{\\eta}}_{\\text{ML-II}}=\\operatorname{argmax}_{\\boldsymbol{\\eta}} \\int p(\\mathcal{D} | \\boldsymbol{\\theta}) p(\\boldsymbol{\\theta} | \\boldsymbol{\\eta}) d \\boldsymbol{\\theta}.$\n\nThe above expression is also known as marginal distribution. Marginalizing out $$\\theta$$ is not always possible, for this reason we have to be clever and use conjugate priors when appropriate. Once this has been done we are left with finding $$\\hat{\\eta}$$, at step 3 this is done using ML. As in step 1 we simply take the derivative with respect to $$\\eta$$ and then we maximize via gradient ascent.\n\nProblems with ML-II: type II can be considered an improvement over type I. For instance when both the prior and the likelihood are Gaussian distributions, the empirical Bayes estimators (e.g. the James-Stein estimator) dominate the simpler maximum likelihood estimator in terms of quadratic loss and at the same time provide an artifice to avoid the drawback of a fully Bayesian treatment. As pointed out by Carlin and Louis, the empirical Bayes “offers a way to dangle one’s legs in the Bayesian water without having to jump completely into the pool” (Carlin and Louis, 2000). However, even if type II can be considered an improvement, we are still into an ill-formed Bayesian setting. In this regard, I agree with Dempster in saying that an empirical Bayesian is someone who “breaks the Bayesian egg but then declines to enjoy the Bayesian omelette” (Dempster, 1983).\n\n## Step 4: Maximum a Posteriori type II (MAP-II)\n\nLevel 4 is known as an MAP-II procedure because it corresponds to applying MAP at the higher hyperprior level. Similarly to MAP (see step 2) we need (i) to define a proper hyperprior distribution over $$\\eta$$, and (ii) to have an analytical form for the posterior over $$\\eta$$. 
Additionally, we also need to analytically marginalize out $$\boldsymbol{\theta}$$:

$\hat{\boldsymbol{\eta}}_{\text{MAP-II}}=\operatorname{argmax}_{\boldsymbol{\eta}} \int p(\mathcal{D} | \boldsymbol{\theta}) p(\boldsymbol{\theta} | \boldsymbol{\eta}) p(\boldsymbol{\eta}) d \boldsymbol{\theta}.$

What exactly are the parameters $$\eta$$? Well, it depends on the type of distribution associated with the prior. If our prior is a Gaussian distribution we need to set a hyperprior for the mean and variance of such a Gaussian. Moreover, we also need the posterior over the hyperprior in order to apply MAP-II. To get an analytical posterior it is necessary to use a conjugate hyperprior.

Where does the strength of type II inference come from? Hierarchies exist in many datasets and modelling them appropriately adds statistical power. The strength comes from borrowing statistical strength from the experience of others. This has been clearly pointed out for the James–Stein estimator, where given case 1, it is possible to learn from the experience of the other $$N-1$$ cases (see Efron, 2012, Chapter 1). The exact meaning of this passage will become evident in the example at the end of the post.

## Step 5: Fully Bayesian

We are at the top, on the last step. Here it is possible to perform inference at any level, and to estimate all the posterior distributions encountered so far. If you are thinking that this is too good to be true, then you are right. A fully Bayesian treatment for non-trivial hierarchical models is only possible through sampling methods, such as Markov Chain Monte Carlo (MCMC). This is computationally expensive and it requires some experience in tuning the parameters of the sampler (e.g. warmup period).

When to go for a fully Bayesian treatment? Hard to say, it depends on the problem at hand and the data available. We need a fully Bayesian treatment whenever we are not happy with a point estimate; in this case a fully Bayesian treatment would unlock the posterior.

Over the years, some methods have been proposed that represent a compromise between a point estimate and a fully Bayesian treatment, such as the Laplace approximation, expectation propagation, and variational approximation. As the names suggest, all these methods perform an approximation of the posterior distribution. Whether this approximation is good or not depends on the shape of the posterior and the distribution used to approximate it. For instance, if the posterior is multi-modal and we are using a Gaussian to approximate it, then our approximation risks being rather poor.

Note that step 5 can be considered as step 0, like in the Penrose stairs and Escher's Ascending and Descending. In fact, we can decide to go directly for a fully Bayesian treatment from the very beginning, without having to climb the staircase at all. However, very often only an empirical approach will show which is the right method to use, and this requires climbing the Bayes stairs more than once (hopefully not in an eternal loop).

## Example: robotic arms

The startup Reactive-Bots is producing and selling robotic arms. Their latest product is a complex 6-dof cordless arm which can be used both in industry and academia for manufacturing and research.
The distinctive characteristic of the new model is the use of an internal battery which allows deploying the arm in situations where a power socket is not available.\n\nYou have been recently hired in the R&D department and you have been asked to estimate the average power consumption of the arm. Your estimate is particularly important because it will be used to define a software failsafe trigger that protects the battery against an overload. From now on we will be concerned with the problem of finding the mean power consumption, assuming that the variance is given.\n\nStep 1: from a preliminary analysis you notice that there are large power fluctuations due to external and internal factors (e.g. workload, room temperature, etc) therefore to get a good estimate you decide to record the power for a period of several hours in laboratory conditions, and then estimate the mean over $$N$$ different arms.\n\nLet’s formalize the problem defining a dataset $$\\mathcal{D} = \\{x_n\\}_{n=1}^{N}$$ with $$x_n \\in \\mathbb{R}$$ and no labels (unsupervised). The shape of the underlying data generating distribution is unknown, but common sense suggest it may have a bell-like shape, we go for a Gaussian likelihood. The use of a Gaussian distribution is particularly well suited for the problem at hand, because if you get the mean and the standard deviation right, it will be possible to easily detect abnormal spikes and trigger the failsafe. More formally, let’s define $$\\mathcal{F}_{\\theta}$$ as a Gaussian distribution with parameters $$\\theta = [ \\mu, \\sigma^2 ]$$ representing the mean and variance.\n\nNow, we want to use $$\\mathcal{F}_{\\theta}$$ to approximate the data generating distribution. At the first step of the Bayes stairs this can be done through ML estimation. The parameter we need to estimate is the mean $$\\mu$$ (as said above we are not concerned about the variance), this has a closed form expression that can be easily obtained taking the logarithm of the Gaussian and then the derivative\n\n$\\hat{\\mu}_{\\text{ML}}=\\frac{1}{N} \\sum_{n=1}^{N} x_{n}.$\n\nIt turns out that ML estimation of a Gaussian just consists in the estimation of the empirical mean over the $$N$$ data points.\n\nStep 2: there is another information we should take into account in our estimation. The battery has an optimal functional range, respecting this range maximizes the operational life span. Following this line of thoughts you consult the datasheet of the battery and notice that the manufacturer has reported the optimal average and standard deviation that guarantees maximal life span, this can be used as a prior.\n\nTo ensure conjugacy the prior over the mean is defined as a Gaussian with parameters $$\\theta_0 = [ \\mu_0, \\sigma_{0}^{2} ]$$ representing the mean and variance taken from the datasheet. The posterior is given by the product between the prior and the likelihood\n\n$p(\\mu) p(x \\vert \\mu)=\\frac{1}{\\sqrt{2 \\pi} \\sigma_{0}} \\exp \\left(-\\frac{(\\mu-\\mu_{0})^{2}}{2\\sigma_{0}^2} \\right) \\prod_{n=1}^{N} \\frac{1}{\\sqrt{2 \\pi} \\sigma} \\exp \\left(-\\frac{(x_{n}-\\mu)^{2}}{2\\sigma^2}\\right).$\n\nUsing a few tricks (e.g. 
completing the square) we can get the following form for posterior mean\n\n$\\hat{\\mu}_{\\text{MAP}}=\\frac{\\sigma_{0}^{2} N}{\\sigma_{0}^{2} N+\\sigma^{2}}\\left(\\frac{1}{N} \\sum_{n=1}^{N} x_{n}\\right)+\\frac{\\sigma^{2}}{\\sigma_{0}^{2} N+\\sigma^{2}} \\mu_{0}.$\n\nThe last term shows that MAP estimation of the mean in the Gaussian model is just a linear interpolation between the sample mean (the term in parentheses) and the prior mean $$\\mu_{0}$$, both of them weighted by their variances.\n\nStep 3: Reactive-Bots has finished the design and test of the arms and went on with production and selling. For the second version of the arm there are important updates planned, those are based on the feedback of the customers. In particular it turned out that a single failsafe threshold is not so useful in practice and that it would be ideal to adapt it to the use case. For instance, some customers are using the arms for the fine grained pick and place of small objects, whereas others are using the arms for heavyweight manufacturing. It seems reasonable to decrease the failsafe threshold for the former (to identify anomalies in the low-power regime) and increase it for the latter (to avoid sudden drops in the high-power regime).\n\nEven though you have acquired data from $$N$$ different arms in controlled lab conditions, there is still a wide range of real-world scenarios you have not considered. The arm performance in those settings is not clear to you. It is necessary to investigate the problem acquiring the telemetry from each customer, then find a new estimate of the average power in each application and start working on the upgraded version of the arm controller.\n\nTo formalize the problem we suppose that the dataset $$\\mathcal{D}$$ is divided in chunks such that $$\\mathcal{D} = \\{\\mathcal{D}_m\\}_{m=1}^{M}$$ and that each chunk represents a customer, with the data samples $$\\mathcal{D}_m = \\{x_n\\}_{n=1}^{N_m}$$ being the power measurements acquired via telemetry. Note that each customer bought a different number of arms, indicated as $$N_m$$. As before we assume that the data generating distribution can be modelled with a Gaussian $$\\mathcal{N}(x \\vert \\mu_m, \\sigma^{2}) \\ \\forall x \\in \\mathcal{D}_m$$. In other words, the parametric function $$\\mathcal{F}_{\\theta}$$ is here a Gaussian, with $$\\theta = [ \\mu_m, \\sigma^2 ]$$.\n\nOne way to go would be to use a simple ML or MAP approach (1st and 2nd step of the stairs) to estimate the average power for each customer. However, there are some customers that just bought a few arms, and others that bought thousands of them. While it is easy to find the average power for the data-rich groups, it is significantly more difficult for data-poor groups. It would be great if we could somehow take into account the samples of data-rich groups when estimating the posterior distribution of data-poor groups. It turns out we can do that if we add another level of inference in our probabilistic model.\n\nWe assume that the parameters describing the Gaussian associated to each dataset have been drawn from a common hyper distribution. In our particular case we assume this distribution to be another Gaussian. Note that this is exactly the formulation we used to describe the hyperprior in step 3 of the Bayes stairs. 
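As a quick illustration of this two-level generative story, here is a small NumPy sketch that simulates a few customer groups by first drawing each group mean from the Gaussian hyperprior and then drawing the power measurements around it. The snippet is not part of the original post, and all of its numbers are invented.

```python
# A small sketch (invented numbers) of the two-level generative story just
# described: group means drawn from the hyperprior, measurements drawn
# around each group mean.
import numpy as np

rng = np.random.default_rng(1)
nu, tau = 150.0, 20.0          # hyperprior mean and std (made up)
sigma = 10.0                   # within-group measurement std (made up)
group_sizes = [3, 12, 200]     # some customers bought few arms, some many

groups = []
for n_m in group_sizes:
    mu_m = rng.normal(nu, tau)               # mu_m ~ N(nu, tau^2)
    x_m = rng.normal(mu_m, sigma, size=n_m)  # x_nm ~ N(mu_m, sigma^2)
    groups.append((mu_m, x_m))

for mu_m, x_m in groups:
    print(f"true mu_m = {mu_m:6.1f}   sample mean = {x_m.mean():6.1f}   n = {len(x_m)}")
```

Groups with only a few arms give a noisy sample mean, which is exactly where the hierarchy will help.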
Since we have a Gaussian describing each dataset and a Gaussian as hyperprior, this probabilistic model is often called Gaussian-Gaussian and it is here defined as $$\eta \rightarrow \theta_{m=1}^{M} \rightarrow x_{n=1}^{N_m}$$, with $$\eta=[\nu, \tau^2]$$ being the mean and variance of the hyperprior. Given these assumptions we can estimate the joint distribution as follows:

$p\left(\mu, \mathcal{D} \vert \hat{\eta}, \sigma^{2}\right)=\prod_{m=1}^{M} \mathcal{N}\left(\mu_{m} | \hat{\nu}, \hat{\tau}^{2}\right) \prod_{n=1}^{N_{m}} \mathcal{N}\left(x_{n m} | \mu_{m}, \sigma^{2}\right).$

We have taken the point estimate of the hyperprior parameters $$\hat{\eta}$$ since we are in the ML-II setting. Now, we can simplify the above expression considering that the $$N_m$$ Gaussian measurements in group $$m$$ are equivalent to one measurement with mean and variance given by

$\bar{x}_m = \frac{1}{N_m} \sum_{n=1}^{N_m} x_{nm}, \ \ \sigma^{2}_{m} = \frac{\sigma^2}{N_m}.$

The variance shrinks with the number of observations, since we get more and more confident about the true value. We now want to find the posterior distribution of the mean for a specific group $$m$$; this can be done as follows:

$p(\mu_m \vert \hat{\eta}, \mathcal{D}) = \mathcal{N}\left(\mu_m | \hat{B}_{m} \hat{\nu}+\left(1-\hat{B}_{m}\right) \bar{x}_{m},\left(1-\hat{B}_{m}\right) \sigma_{m}^{2}\right), \ \ \text{with} \ \ \hat{B}_{m} = \frac{\sigma_{m}^{2}}{\sigma_{m}^{2}+\hat{\tau}^{2}}.$

It is worth spending some words analyzing the above expression, in particular the shrinkage factor $$\hat{B}_{m} \in [0,1]$$. This factor controls the degree of shrinkage towards the hyperprior mean $$\hat{\nu}$$. If the sample size $$N_m$$ for group $$m$$ is large, then $$\sigma^{2}_{m}$$ will be small in comparison to $$\hat{\tau}^{2}$$, reducing $$\hat{B}_{m}$$. When $$\hat{B}_{m}$$ is small (data-rich groups), the weight on $$\hat{\nu}$$ is small and the weight on $$\bar{x}_{m}$$ is large, meaning that we put more weight on the actual measurements with respect to the hyperprior mean. When $$\hat{B}_{m}$$ is large instead (data-poor groups), we get the opposite effect, with the hyperprior mean having more weight over the posterior.

This is exactly what we wanted in our example: data-poor groups will have a large shrinkage factor, with the hyperprior counting more. However, we also said that we wanted to take advantage of data-rich groups in the posterior of data-poor groups. How is this obtained? This is automatically done when we perform ML-II on the hyperprior parameters. Taking the derivative with respect to $$\hat{\nu}$$ we get that the ML estimate corresponds to

$\hat{\nu} = \frac{1}{M} \sum_{m=1}^{M} \bar{x}_m.$

What is this expression telling us? The mean of the hyperprior is given by the average of the group means, therefore every group, including the data-rich ones, contributes to $$\hat{\nu}$$, which in turn influences the posterior of data-poor groups. Note that if we assume $$\sigma_m = \sigma$$ for all groups then we just have the James-Stein estimator.

Step 4: if we have good reasons to assume that the hyperprior mean is close to a specific value then we can define a Gaussian prior over $$\eta$$ embedding such an assumption (yes, that would be a prior over the hyperprior). This has the same regularization effect discussed for MAP type I at step 2, moving the average $$\hat{\nu}$$ toward the prior mean.
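To make the shrinkage behaviour of step 3 (and the pull toward a prior mean discussed in step 4) concrete, here is a small standalone sketch. It is not from the original post: the group sizes and sample means are invented, and $$\hat{\tau}$$ is simply fixed by hand instead of being estimated from the data.

```python
# A sketch (invented numbers) of the empirical-Bayes shrinkage described above:
# each group mean is pulled toward the ML-II estimate of the hyperprior mean,
# and the pull is stronger for groups with fewer measurements.
import numpy as np

sigma = 15.0                                    # within-group std (assumed known)
tau_hat = 5.0                                   # hyperprior std (fixed by hand here)
group_sizes = np.array([3, 12, 200])
group_means = np.array([170.0, 140.0, 152.0])   # sample means x_bar_m (made up)

nu_hat = group_means.mean()                     # ML-II estimate of the hyperprior mean
sigma2_m = sigma**2 / group_sizes               # variance of each group mean
B_m = sigma2_m / (sigma2_m + tau_hat**2)        # shrinkage factors in [0, 1]
posterior_means = B_m * nu_hat + (1.0 - B_m) * group_means

for n, xb, B, pm in zip(group_sizes, group_means, B_m, posterior_means):
    print(f"n={n:4d}  x_bar={xb:6.1f}  B={B:0.3f}  posterior mean={pm:6.1f}")
```

The data-poor group is pulled strongly toward $$\hat{\nu}$$, while the data-rich group barely moves.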
In particular, the MAP-II estimate of the hyperprior mean would end up being a linear interpolation between two terms: (i) the prior mean, and (ii) the sample mean, both of them weighted by their respective variances. Also in this case, assuming a uniform prior instead of a Gaussian one, we fall back to step 3, since we would be doing just ML-II. I will skip the detailed derivation here; it is left as an exercise for the reader.

Step 5: at the beginning of this example we made the simplifying assumption that the posterior distribution over the power consumption of the robotic arms could be modelled relatively well by a Gaussian distribution. However, let's remember that the Gaussian may be a rather poor approximation if the posterior is multimodal. Another reason why we chose a Gaussian is that there is a conjugate prior we can use to get the posterior in closed form. Moving toward other distributions we can lose this helpful property, ending up with an intractable posterior.

Is there a way to avoid this oversimplification? Well yes, that is exactly what you can do in the last step of the Bayes stairs. Methods such as MCMC are distribution agnostic, meaning that you can go for a fully Bayesian treatment without worrying too much about the shape of the posterior. However, this incurs a significant computational cost and also requires a certain experience in tuning the hyperparameters of the sampler. A detailed discussion of this step is out of scope and would require another blog post.

## Conclusions

In this post I gave you an overview of Bayesian hierarchical models using the metaphor of the Bayes stairs. Climbing the stairs corresponds to performing Bayesian inference at different levels. It is necessary to have a deep insight into the problem at hand in order to understand which step should be considered the final one. Most of the time this requires an empirical approach, moving up and down, until the optimal solution is reached.

That's all folks! If you liked the post please share it with your network and give me some feedback in the comment section below. I send you my greetings with the hope that you enjoyed climbing the Bayes stairs.

Carlin, B. P., & Louis, T. A. (2000). Empirical Bayes: Past, present and future. Journal of the American Statistical Association, 95(452), 1286-1289.

Dempster, A. P. (1983). Parametric empirical Bayes inference: theory and applications. Journal of the American Statistical Association.

Efron, B. (2012). Large-scale inference: empirical Bayes methods for estimation, testing, and prediction (Vol. 1). Cambridge University Press.

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT press.

Murphy, K. P. (2012). Machine learning: a probabilistic perspective. MIT press." ]
[ null, "https://mpatacchiola.github.io/blog/images/headline_escher_stairs.png", null, "https://mpatacchiola.github.io/blog/images/books_statistical_ml_deep_learning.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8912065,"math_prob":0.99684966,"size":32480,"snap":"2022-27-2022-33","text_gpt3_token_len":7847,"char_repetition_ratio":0.14462988,"word_repetition_ratio":0.01167942,"special_character_ratio":0.24451971,"punctuation_ratio":0.08237452,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995439,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T08:48:53Z\",\"WARC-Record-ID\":\"<urn:uuid:67708509-cc12-4153-a6ec-4357c2e7c190>\",\"Content-Length\":\"45177\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eaf62c91-d009-4f03-bac7-77c5213e2648>\",\"WARC-Concurrent-To\":\"<urn:uuid:51498fb4-d2da-405d-bccc-d7f3be202f68>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://mpatacchiola.github.io/blog/2019/11/18/bayes-stairs.html\",\"WARC-Payload-Digest\":\"sha1:FYHYNJ4IW54I35Y5JTBSNJUKPW7KCUBJ\",\"WARC-Block-Digest\":\"sha1:GMNVYADF3JF7BSGXPCWG43Z6CIUEGNCX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103329963.19_warc_CC-MAIN-20220627073417-20220627103417-00753.warc.gz\"}"}
https://mathoverflow.net/questions/353950/aleksandrovs-proof-of-the-second-order-differentiability-of-convex-functions
[ "# Aleksandrov's proof of the second order differentiability of convex functions\n\nAleksandrov [A] proved a remarkable property of convex functions.\n\nTheorem. If $$f:\\mathbb{R}^n\\to\\mathbb{R}$$ is convex, then for almost every $$x\\in\\mathbb{R}^n$$ there is $$Df(x)\\in\\mathbb{R}^n$$ and a symmetric $$(n\\times n)$$ matrix $$D^2f(x)$$ such that $$\\lim_{y\\to x} \\frac{|f(y)-f(x)-Df(x)(y-x)-\\frac{1}{2}(y-x)^TD^2f(x)(y-x)|}{|y-x|^2}=0.$$\n\nI know two proofs of this result. One is based on the theory of maximal monotone functions and one is based on the fact that the second order distributional derivatives of a convex function are Radon measures. Both proofs are mentioned in Second order differentiability of convex functions. Since these proofs use relatively modern techniques not available during Aleksandrov's time, his argument must have been very different.\n\nQuestion 1. Can you briefly explain the idea of the original proof due to Aleksandrov?\n\nMy guess would be that his proof was based on methods of differential geometry. What else could he use in those days?\n\nQuestion 2. Is there any textbook where I can find the original proof due to Aleksandrov?\n\n[A] A. D. Alexandroff, Almost everywhere existence of the second differential of a convex function and some properties of convex surfaces connected with it. (Russian) Leningrad State Univ. Annals [Uchenye Zapiski] Math. Ser. 6, (1939), 3–35.\n\n• There's a paper by Bianchi, Colesanti, and Pucci titled "On the second differentiability of convex surfaces" which gives brief synopses of different approaches to the theorem of Alexandroff (including the preceding two dimensional case due to Busemann and Feller). Maybe instead of reading Russian/German the proof can be pieced together from the descriptions there? Mar 2, 2020 at 3:21\n• @WillieWong If you turn it into an answer, I will accept it (unless someone writes a more detailed one). Mar 2, 2020 at 14:16\n\nThe paper also gives a new proof of the theorem, which is claimed to be in the same spirit as the original arguments of Busemann-Feller and Alexandroff. The authors considered the second order difference quotient of the convex function based at a point $$x$$, which they show has an a.e. limit that is itself a convex function. This limit convex function is related to the argument of Busemann-Feller in that the indicatrices constructed by Busemann-Feller are the 1-level-sets of this limit convex function." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9072949,"math_prob":0.97122836,"size":1309,"snap":"2022-05-2022-21","text_gpt3_token_len":360,"char_repetition_ratio":0.11340996,"word_repetition_ratio":0.021276595,"special_character_ratio":0.25210086,"punctuation_ratio":0.103846155,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99843985,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-29T09:05:50Z\",\"WARC-Record-ID\":\"<urn:uuid:6e7f2925-57c5-4ad3-b3ac-5b7befa39f88>\",\"Content-Length\":\"109203\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:368e2770-5a7d-4ee4-8100-b9925d77befe>\",\"WARC-Concurrent-To\":\"<urn:uuid:3af79ab0-a851-4f48-96a0-28a2e06b40bd>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/353950/aleksandrovs-proof-of-the-second-order-differentiability-of-convex-functions\",\"WARC-Payload-Digest\":\"sha1:TPZFELKVEYQOKFQSUDSJFBBMY2WKQP2Z\",\"WARC-Block-Digest\":\"sha1:LXD2CSXAE7KVSK72XHNKAAFDDPLC5WFO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652663048462.97_warc_CC-MAIN-20220529072915-20220529102915-00591.warc.gz\"}"}
http://ricksci.com/che/chea_atomic_pretest.htm
[ "Class Copy\n\n# Atomic Structure Pretest\n\n atom nucleus protons neutrons electrons electron cloud electron shell isotopes atomic mass units atomic number mass number atomic mass\n\n1. Negatively charged subatomic particle found in the electron cloud.\n2. Positively charged subatomic particle found in the nucleus.\n3. Neutral subatomic particle found in the nucleus.\n4. Central part of an atom made up of the protons and neutrons.\n5. Atoms of the same element having different numbers of neutrons.\n6. Number of protons in an atom.\n7. Average of the masses of naturally occurring isotopes of an element, weighted by abundance.\n8. The space around the nucleus of the atom through which the electrons move.\n9. Mass of an atom in atomic mass units; it is the sum of the number of protons and neutrons.\n10. Energy level of an e-, which we model as the distance at which an e- circles the nucleus.\n11. Tiniest particle of an element that has all the properties of the element.\n12. Unit used to describe the mass of atoms, molecules and subatomic particles.\n\n1. What element is in group 4A, period 2?\n2. Find silicon on the periodic table. How many protons are in silicon?\n3. What is the atomic number of silicon?\n4. What is the atomic mass of silicon?\n5. What is the symbol for silicon?\n6. How many neutrons are in the most common isotope of silicon?\n7. In what group is silicon?\n8. In what period is silicon?\n9. On the back of your answer sheet, draw a Bohr diagram for silicon." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8402263,"math_prob":0.92694074,"size":1431,"snap":"2019-13-2019-22","text_gpt3_token_len":331,"char_repetition_ratio":0.1513665,"word_repetition_ratio":0.062992126,"special_character_ratio":0.22361985,"punctuation_ratio":0.099236645,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97252434,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-19T05:49:33Z\",\"WARC-Record-ID\":\"<urn:uuid:07032df5-9e9e-4996-8114-2b8ca50cb14e>\",\"Content-Length\":\"7371\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:95293588-d8a6-4a7a-ad88-e86b7520de50>\",\"WARC-Concurrent-To\":\"<urn:uuid:b041c9aa-4520-4ede-b0a2-f5d2d8d24c1f>\",\"WARC-IP-Address\":\"198.54.115.229\",\"WARC-Target-URI\":\"http://ricksci.com/che/chea_atomic_pretest.htm\",\"WARC-Payload-Digest\":\"sha1:5BB4B7QFELWB34YVNCLBWEFIEXUMBJVQ\",\"WARC-Block-Digest\":\"sha1:AA35VZRE4YEZXLPL3AF3OBAQY7ILVG3C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912201904.55_warc_CC-MAIN-20190319052517-20190319074517-00368.warc.gz\"}"}
https://calculatorsonline.org/what-is-31-percent-off-68
[ "# 31 percent off 68\n\nHere you will see a step-by-step solution for calculating 31 percent off 68. What is the final price if the original price is 68 and the percentage is 31? The final price is 46.92, and the discount is 21.08. Check the detailed explanation of the answer given below.\n\n## Answer: 31 percent off 68 is\n\n= 46.92\n\n### How to calculate the number 31 percent off 68?\n\nWith the help of the given formulas we can get the percent-off value -\n\nFormula 1: Discount = n × P / 100, where P = discount (off) percent and n = original price\n\nHere we have, n = 68, P = 31%\nFormula 2: Result = n - Discount\n\n#### What is 31 percent off 68?\n\nGiven number n = 68, P = 31%\n\n• Put the n and P values into formula 1:\n• => 68 × 31%\n=> 68 × 31/100\n\n• Now we simplify the fraction by multiplying 68 by 31 and then dividing by 100\n• => 68 × 31/100 = 2108/100 = 21.08\n• The 31% discount on 68 = 21.08\n• Now we use formula 2 to get the final price after 31% off 68\n• = 68 - 21.08\n= 46.92\n\nTherefore, the result for 31% (percent) off 68 is 46.92 and the difference is 21.08.\n\n#### Queries related to 31% off 68\n\nWhat is 31% off 68?\n\n21.08 is what percent off \\$68?\n\nWhat is the final price of a \\$68 item when it has a 31 percent discount?" ]
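The two formulas above can also be written as a tiny function. This sketch is only an added illustration, not code from the page; the function name is arbitrary.

```python
# A small sketch of the two formulas above:
# discount = n * P / 100, final price = n - discount.
def percent_off(original_price, percent):
    discount = original_price * percent / 100.0
    return discount, original_price - discount

discount, final_price = percent_off(68, 31)
print(discount, final_price)  # 21.08 46.92
```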
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87634516,"math_prob":0.9990681,"size":1159,"snap":"2022-40-2023-06","text_gpt3_token_len":377,"char_repetition_ratio":0.16536796,"word_repetition_ratio":0.024590164,"special_character_ratio":0.405522,"punctuation_ratio":0.116981134,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999406,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-02T11:18:38Z\",\"WARC-Record-ID\":\"<urn:uuid:70a5824e-163b-4e53-874a-2205a21dd914>\",\"Content-Length\":\"16799\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99d6c0c1-fcc3-4c2c-8801-77385c8a478d>\",\"WARC-Concurrent-To\":\"<urn:uuid:a5b04e96-7535-4606-ba1d-66f4dd210d76>\",\"WARC-IP-Address\":\"172.67.209.140\",\"WARC-Target-URI\":\"https://calculatorsonline.org/what-is-31-percent-off-68\",\"WARC-Payload-Digest\":\"sha1:S4NNFUCSUYIK7PYFKE3TCG4NWGAMSDZC\",\"WARC-Block-Digest\":\"sha1:YFNRZKR5OFCC26OBGIEYCJQ5U3A6S7FE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500017.27_warc_CC-MAIN-20230202101933-20230202131933-00344.warc.gz\"}"}
https://community.dataiku.com/t5/General-Discussion/Retrieve-result-of-an-execute-sql-step-of-a-scenario-and-assign/m-p/11644/highlight/true
[ "# Retrieve result of an execute sql step of a scenario and assign to variable

Solved!", null, "Level 2
###### Retrieve result of an execute sql step of a scenario and assign to variable

I want to execute an SQL statement in a scenario step: 'select count(*) from table' and store the result in a variable.

How can I retrieve the result from the JSON of the result set?

```Exited the exec sql step {
"endedOn": 0,
"success": true,
"updatedRows": 0,
"totalRowsClipped": false,
"totalRows": 1,
"log": "",
"columns": [
{
"name": "cnt",
"type": "int",
"dssType": "INT",
"sqlType": 4
}
],
"rows": [
[
"183448174"
]
],
"hasResultset": true ```

2 Solutions", null, "Dataiker

Hi,

with a SQL step setup like this one:", null, "then you can retrieve the value of the first column on the first row with a "Define scenario variables" step like", null, "(note the ".join(',')" at the end of the expression: this is caused by the fact that getPath(...) evaluates a JSONPath expression and thus returns an array of values)

If the SQL step has a name that is not friendly with variable names (like: contains spaces), you can also use an "Execute Python code" step to do the same, with

``````from dataiku.scenario import Scenario
s = Scenario()
outputs = s.get_previous_steps_outputs()
# keep the result of the SQL step, then take the first cell of the first row
sql_output = [o['result'] for o in outputs if o["stepName"] == 'the_sql_step'][0]
s.set_scenario_variables(the_first_value = sql_output['rows'][0][0])
``````", null, "Dataiker

note that for your use case, you can probably use a "set project variables" step directly (in place of the "define variables" + "execute python code")

4 Replies", null, "Dataiker

Hi,

with a SQL step setup like this one:", null, "then you can retrieve the value of the first column on the first row with a "Define scenario variables" step like", null, "(note the ".join(',')" at the end of the expression: this is caused by the fact that getPath(...) evaluates a JSONPath expression and thus returns an array of values)

If the SQL step has a name that is not friendly with variable names (like: contains spaces), you can also use an "Execute Python code" step to do the same, with

``````from dataiku.scenario import Scenario
s = Scenario()
outputs = s.get_previous_steps_outputs()
# keep the result of the SQL step, then take the first cell of the first row
sql_output = [o['result'] for o in outputs if o["stepName"] == 'the_sql_step'][0]
s.set_scenario_variables(the_first_value = sql_output['rows'][0][0])
``````", null, "Level 2
Author

Thank you. This helped. After defining the scenario variables, I am using them in a custom Python step and, based on the value, updating the project variable.

So the value can be used in other scenarios. I hope I am on the right track.", null, "Python Step :

import dataiku
from dataiku.scenario import Scenario

# 'client' was not defined in the original post; inside DSS it can be obtained like this
client = dataiku.api_client()

s = Scenario()
var1 = s.get_all_variables()['table_count']
var2 = s.get_all_variables()['duplicate_count']

#update the project variable with the variables of the scenario
project1 = client.get_project('PROJ')
project_variables = project1.get_variables()
project_variables["standard"]["cnt"] = var1
project_variables["standard"]["dup"] = var2
project1.set_variables(project_variables)", null, "Dataiker

note that for your use case, you can probably use a "set project variables" step directly (in place of the "define variables" + "execute python code")", null, "Level 2
Author

Thank you so much 😊", null, "", null, "" ]
[ null, "https://community.dataiku.com/t5/image/serverpage/avatar-name/Avatar11/avatar-theme/candy/avatar-collection/Dataiku/avatar-display-size/message/version/2", null, "https://community.dataiku.com/t5/image/serverpage/avatar-name/Avatar24/avatar-theme/candy/avatar-collection/Dataiku/avatar-display-size/message/version/2", null, "https://community.dataiku.com/t5/image/serverpage/image-id/2209i8372D95389D4E2F7/image-size/large/is-moderation-mode/true", null, "https://community.dataiku.com/t5/image/serverpage/image-id/2210i70EEBF0571D106CF/image-size/large/is-moderation-mode/true", null, "https://community.dataiku.com/t5/image/serverpage/avatar-name/Avatar24/avatar-theme/candy/avatar-collection/Dataiku/avatar-display-size/message/version/2", null, "https://community.dataiku.com/t5/image/serverpage/avatar-name/Avatar24/avatar-theme/candy/avatar-collection/Dataiku/avatar-display-size/message/version/2", null, "https://community.dataiku.com/t5/image/serverpage/image-id/2209i8372D95389D4E2F7/image-size/large/is-moderation-mode/true", null, "https://community.dataiku.com/t5/image/serverpage/image-id/2210i70EEBF0571D106CF/image-size/large/is-moderation-mode/true", null, "https://community.dataiku.com/t5/image/serverpage/avatar-name/Avatar11/avatar-theme/candy/avatar-collection/Dataiku/avatar-display-size/message/version/2", null, "https://community.dataiku.com/t5/image/serverpage/image-id/2212i92E1FEB89FEF44B5/image-size/large/is-moderation-mode/true", null, "https://community.dataiku.com/t5/image/serverpage/avatar-name/Avatar24/avatar-theme/candy/avatar-collection/Dataiku/avatar-display-size/message/version/2", null, "https://community.dataiku.com/t5/image/serverpage/avatar-name/Avatar11/avatar-theme/candy/avatar-collection/Dataiku/avatar-display-size/message/version/2", null, "https://community.dataiku.com/skins/images/2831D38A13C0210445A1A188613DA608/dataiku/images/icon_anonymous_message.png", null, "https://community.dataiku.com/skins/images/2831D38A13C0210445A1A188613DA608/dataiku/images/icon_anonymous_message.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.66786784,"math_prob":0.85893583,"size":3081,"snap":"2023-40-2023-50","text_gpt3_token_len":737,"char_repetition_ratio":0.14624634,"word_repetition_ratio":0.6059322,"special_character_ratio":0.26841936,"punctuation_ratio":0.13824058,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9842002,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,null,null,null,null,6,null,6,null,null,null,null,null,6,null,6,null,null,null,3,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T06:30:56Z\",\"WARC-Record-ID\":\"<urn:uuid:7ff16ef8-0f35-4306-b094-ca5fb065fd84>\",\"Content-Length\":\"664575\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c17fafd-7cb6-4f4e-9da6-a46fbc1e412a>\",\"WARC-Concurrent-To\":\"<urn:uuid:7affe192-c50d-47d5-97fc-0220be893cf8>\",\"WARC-IP-Address\":\"18.160.10.18\",\"WARC-Target-URI\":\"https://community.dataiku.com/t5/General-Discussion/Retrieve-result-of-an-execute-sql-step-of-a-scenario-and-assign/m-p/11644/highlight/true\",\"WARC-Payload-Digest\":\"sha1:ILNOIIJU7TLXRZ364Y4ZNFTMI76PH7OC\",\"WARC-Block-Digest\":\"sha1:BZMACIXHEVYNMZZJOY4UTPX5NX4NXOVP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510781.66_warc_CC-MAIN-20231001041719-20231001071719-00099.warc.gz\"}"}
https://forums.wolfram.com/mathgroup/archive/2005/Oct/msg00869.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Re: ParametricPlot3D

• To: mathgroup at smc.vnet.net
• Subject: [mg61717] Re: [mg61693] ParametricPlot3D
• From: Igor Antonio <igora at wolf-ram.com>
• Date: Thu, 27 Oct 2005 05:01:44 -0400 (EDT)
• Organization: Wolfram Research, Inc.
• References: <[email protected]> <[email protected]>
• Sender: owner-wri-mathgroup at wolfram.com

```Maurits Haverkort wrote:
> Dear all
>
> I want to plot several surfaces in one plot and used for that
> ParametricPlot3D. Everything works fine if I enter the definitions for the
> functions inline. If I define however a variable (list) that holds the
> different functions it does not work anymore. Does anybody know a way out?
>
> The functions I want to plot are for example two spherse of unit radia
> centered at 000 and 100. (I need in the end to plot funcitons (R[t,f]
> centered at different orrigens)
>
> If I write it inline it works fine
> ParametricPlot3D[{{Cos[f] Sin[t], Sin[t] Sin[f], Cos[t]}, {Cos[f] Sin[t] +
> 1, Sin[t] Sin[f], Cos[t]}}, {t, 0, 3.1415}, {f, 0, 2 3.1415}];
>
> If however I first define my functions and then try to plot:
> ToPlot = {{Cos[f] Sin[t], Sin[t] Sin[f], Cos[t]}, {Cos[f] Sin[t] + 1, Sin[t]
> Sin[f], Cos[t]}};
> ParametricPlot3D[ToPlot, {t, 0, 3.1415}, {f, 0, 2 3.1415}];
> I get the error:
> ParametricPlot3D::ppfun : Argument ToPlot is not a list with three or four
> elements. More.
>
> Since the function I need to plot is a result of a calculation I rather need
> to input it as a variable. Any sugestions?
>
> Thanks,
> Maurits

Maurits,

ParametricPlot3D has the Attribute HoldAll, which means it does not evaluate
any of its arguments before using them:

Mathematica 5.2 for Microsoft Windows
-- Terminal graphics initialized --

In[1]:= Attributes@ParametricPlot3D

Out[1]= {HoldAll, Protected}

You need to do an Evaluate[] on the argument of ParametricPlot3D so that ToPlot
is "converted" to its values before being passed to ParametricPlot3D, like this:

ToPlot = {{Cos[f] Sin[t], Sin[t] Sin[f], Cos[t]}, {Cos[f] Sin[t] + 1, Sin[t]
Sin[f], Cos[t]}};

ParametricPlot3D[Evaluate@ToPlot, {t, 0, 3.1415}, {f, 0, 2 3.1415}];

--

Igor C. Antonio
Wolfram Research, Inc.
http://www.wolfram.com

To email me personally, remove the dash.

```" ]
[ null, "https://forums.wolfram.com/mathgroup/images/head_mathgroup.gif", null, "https://forums.wolfram.com/mathgroup/images/head_archive.gif", null, "https://forums.wolfram.com/mathgroup/images/numbers/2.gif", null, "https://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "https://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "https://forums.wolfram.com/mathgroup/images/numbers/5.gif", null, "https://forums.wolfram.com/mathgroup/images/search_archive.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6155352,"math_prob":0.7147158,"size":2487,"snap":"2021-43-2021-49","text_gpt3_token_len":805,"char_repetition_ratio":0.141764,"word_repetition_ratio":0.06266318,"special_character_ratio":0.33574587,"punctuation_ratio":0.21631879,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98921305,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-30T20:38:42Z\",\"WARC-Record-ID\":\"<urn:uuid:b35dd1f3-3f3d-4802-bacc-ac530a035663>\",\"Content-Length\":\"46679\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1bcb538a-2169-413d-924f-e026857a812b>\",\"WARC-Concurrent-To\":\"<urn:uuid:b9950d61-f2f2-45ff-b021-ec3bb532b0e1>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"https://forums.wolfram.com/mathgroup/archive/2005/Oct/msg00869.html\",\"WARC-Payload-Digest\":\"sha1:CG5BXIMCPX6GQIBYIITHIF7TBXJOJINA\",\"WARC-Block-Digest\":\"sha1:LEFBRBF4CEUIE2YARMYHU4XRKX3HXB44\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964359073.63_warc_CC-MAIN-20211130201935-20211130231935-00389.warc.gz\"}"}
https://tutorialspoint.com/julia/julia_flow_control.htm
[ "# Julia - Flow Control\n\n#### Julia Programming For Beginners: Learn Julia Programming\n\n73 Lectures 4 hours\n\n#### Julia Programming Language - From Zero to Expert\n\n24 Lectures 3 hours\n\n#### Hello Julia: Learn the New Julia Programming Language\n\n29 Lectures 2.5 hours\n\nAs we know that each line of a program in Julia is evaluated in turn hence it provides many of the control statements (familiar to other programming languages) to control and modify the flow of evaluation.\n\nFollowing are different ways to control the flow in Julia programming language −\n\n• Ternary and compound expressions\n\n• Boolean switching expressions\n\n• If elseif else end (conditional evaluation)\n\n• For end (iterative evaluation)\n\n• While end (iterative conditional evaluation)\n\n• Try catch error throw (exception handling)\n\n• Do blocks\n\n## Ternary expressions\n\nIt takes the form expr ? a : b. It is called ternary because it takes three arguments. The expr is a condition and if it is true then a will be evaluated otherwise b. Example for this is given below −\n\njulia> A = 100\n100\n\njulia> A < 20 ? \"Right\" : \"wrong\"\n\"wrong\"\n\njulia> A > 20 ? \"Right\" : \"wrong\"\n\"Right\"\n\n\n## Boolean Switching expressions\n\nAs the name implies, the Boolean switching expression allows us to evaluate an expression if the condition is met, i.e., the condition is true. There are two operators to combine the condition and expression −\n\n### The && operator (and)\n\nIf this operator is used in the Boolean switching expression, the second expression will be evaluated if the first condition is true. If the first condition is false, the expression will not be evaluated and only the condition will be returned.\n\nExample\n\njulia> isodd(3) && @warn(\"An odd Number!\")\n┌ Warning: An odd Number!\n└ @ Main REPL:1\n\njulia> isodd(4) && @warn(\"An odd Number!\")\nfalse\n\n\n### The || operator (or)\n\nIf this operator is used in the Boolean switching expression, the second expression will be evaluated only if the first condition is false. If the first condition is true, then there is no need to evaluate the second expression.\n\nExample\n\njulia> isodd(3) || @warn(\"An odd Number!\")\ntrue\n\njulia> isodd(4) || @warn(\"An odd Number!\")\n┌ Warning: An odd Number!\n└ @ Main REPL:1\n\n\n## If, elseif and else\n\nWe can also use if, elseif, and else for conditions execution. The only condition is that all the conditional construction should finish with end.\n\n### Example\n\njulia> fruit = \"Apple\"\n\"Apple\"\n\njulia> if fruit == \"Apple\"\nprintln(\"I like Apple\")\nelseif fruit == \"Banana\"\nprintln(\"I like Banana.\")\nprintln(\"But I prefer Apple.\")\nelse\nprintln(\"I don't know what I like\")\nend\n\nI like Apple\n\njulia> fruit = \"Banana\"\n\"Banana\"\n\njulia> if fruit == \"Apple\"\nprintln(\"I like Apple\")\nelseif fruit == \"Banana\"\nprintln(\"I like Banana.\")\nprintln(\"But I prefer Apple.\")\nelse\nprintln(\"I don't know what I like\")\nend\n\nI like Banana.\nBut I prefer Apple.\n\n\n## for loops\n\nSome of the common example of iteration are −\n\n• working through a list or\n\n• set of values or\n\n• from a start value to a finish value.\n\nWe can iterate through various types of objects like arrays, sets, dictionaries, and strings by using “for” loop (for…end construction). 
Let us understand the syntax with the following example −

julia> for i in 0:5:50
println(i)
end
0
5
10
15
20
25
30
35
40
45
50


In the above code, the variable ‘i’ takes the value of each element in the range and hence steps from 0 to 50 in steps of 5.

### Example (Iterating over an array)

If we iterate through an array, it is checked for changes each time through the loop. Care should be taken when using ‘push!’ to make an array grow in the middle of a loop.

julia> c = [1]
1-element Array{Int64,1}:
1

julia> for i in c
push!(c, i)
@show c
sleep(1)
end

c = [1,1]
c = [1,1,1]
c = [1,1,1,1]
...


Note − To exit the output, press Ctrl+c.

## Loop variables

A loop variable is a variable that steps through each item. It exists only inside the loop. It disappears as soon as the loop finishes.

### Example

julia> for i in 0:5:50
println(i)
end
0
5
10
15
20
25
30
35
40
45
50

julia> i
ERROR: UndefVarError: i not defined


### Example

Julia provides the global keyword for keeping a value assigned inside the loop available outside the loop.

julia> for i in 1:10
global hello
if i % 3 == 0
hello = i
end
end

julia> hello
9


## Variables declared inside a loop

Similar to the loop variable, variables declared inside a loop do not exist once the loop is finished.

### Example

julia> for x in 1:10
y = x^2
println("$(x) squared is $(y)")
end


### Output

1 squared is 1
2 squared is 4
3 squared is 9
4 squared is 16
5 squared is 25
6 squared is 36
7 squared is 49
8 squared is 64
9 squared is 81
10 squared is 100

julia> y
ERROR: UndefVarError: y not defined


## Continue Statement

The continue statement is used to skip the rest of the code inside the loop and start the loop again with the next value. It is mostly used when, on a particular iteration, you want to skip to the next value.

### Example

julia> for x in 1:10
if x % 4 == 0
continue
end
println(x)
end


### Output

1
2
3
5
6
7
9
10


## Comprehensions

Generating and collecting items with something like [n for n in 1:5] is called an array comprehension. It is sometimes called a list comprehension too.

### Example

julia> [X^2 for X in 1:5]
5-element Array{Int64,1}:
1
4
9
16
25


We can also specify the type of elements we want to generate −

### Example

julia> Complex[X^2 for X in 1:5]
5-element Array{Complex,1}:
1 + 0im
4 + 0im
9 + 0im
16 + 0im
25 + 0im


## Enumerated arrays

Sometimes we would like to go through an array element by element while keeping track of the index number of every element of that array. Julia has the enumerate() function for this task. This function gives us an iterable version of the array.
This function will produce the index number as well as the value at each index number.\n\n### Example\n\njulia> arr = rand(0:9, 4, 4)\n4×4 Array{Int64,2}:\n7 6 5 8\n8 6 9 4\n6 3 0 7\n2 3 2 4\n\njulia> [x for x in enumerate(arr)]\n4×4 Array{Tuple{Int64,Int64},2}:\n(1, 7) (5, 6) (9, 5) (13, 8)\n(2, 8) (6, 6) (10, 9) (14, 4)\n(3, 6) (7, 3) (11, 0) (15, 7)\n(4, 2) (8, 3) (12, 2) (16, 4)\n\n\n## Zipping arrays\n\nUsing the zip() function, you can work through two or more arrays at the same time by taking the 1st element of each array first and then the 2nd one and so on.\n\nFollowing example demonstrates the usage of zip() function −\n\n### Example\n\njulia> for x in zip(0:10, 100:110, 200:210)\nprintln(x)\nend\n(0, 100, 200)\n(1, 101, 201)\n(2, 102, 202)\n(3, 103, 203)\n(4, 104, 204)\n(5, 105, 205)\n(6, 106, 206)\n(7, 107, 207)\n(8, 108, 208)\n(9, 109, 209)\n(10, 110, 210)\n\n\nJulia also handle the issue of different size arrays as follows −\n\njulia> for x in zip(0:15, 100:110, 200:210)\nprintln(x)\nend\n(0, 100, 200)\n(1, 101, 201)\n(2, 102, 202)\n(3, 103, 203)\n(4, 104, 204)\n(5, 105, 205)\n(6, 106, 206)\n(7, 107, 207)\n(8, 108, 208)\n(9, 109, 209)\n(10, 110, 210)\n\njulia> for x in zip(0:10, 100:115, 200:210)\nprintln(x)\nend\n(0, 100, 200)\n(1, 101, 201)\n(2, 102, 202)\n(3, 103, 203)\n(4, 104, 204)\n(5, 105, 205)\n(6, 106, 206)\n(7, 107, 207)\n(8, 108, 208)\n(9, 109, 209)\n(10, 110, 210)\n\n\n## Nested loops\n\nNest a loop inside another one can be done with the help of using a comma (;) only. You do not need to duplicate the for and end keywords.\n\n### Example\n\njulia> for n in 1:5, m in 1:5\n@show (n, m)\nend\n(n, m) = (1, 1)\n(n, m) = (1, 2)\n(n, m) = (1, 3)\n(n, m) = (1, 4)\n(n, m) = (1, 5)\n(n, m) = (2, 1)\n(n, m) = (2, 2)\n(n, m) = (2, 3)\n(n, m) = (2, 4)\n(n, m) = (2, 5)\n(n, m) = (3, 1)\n(n, m) = (3, 2)\n(n, m) = (3, 3)\n(n, m) = (3, 4)\n(n, m) = (3, 5)\n(n, m) = (4, 1)\n(n, m) = (4, 2)\n(n, m) = (4, 3)\n(n, m) = (4, 4)\n(n, m) = (4, 5)\n(n, m) = (5, 1)\n(n, m) = (5, 2)\n(n, m) = (5, 3)\n(n, m) = (5, 4)\n(n, m) = (5, 5)\n\n\n## While loops\n\nWe use while loops to repeat some expressions while a condition is true. The construction is like while…end.\n\n### Example\n\njulia> n = 0\n0\n\njulia> while n < 10\nprintln(n)\nglobal n += 1\nend\n0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n\n\n## Exceptions\n\nExceptions or try…catch construction is used to write the code that checks for the errors and handles them elegantly. The catch phrase handles the problems that occur in the code. It allows the program to continue rather than grind to a halt.\n\n### Example\n\njulia> str = \"string\";\njulia> try\nstr = \"p\"\ncatch e\nprintln(\"the code caught an error: \\$e\")\nprintln(\"but we can easily continue with execution...\")\nend\nthe code caught an error: MethodError(setindex!, (\"string\", \"p\", 1), 0x0000000000006cba)\nbut we can easily continue with execution...\n\n\n## Do block\n\nDo block is another syntax form similar to list comprehensions. It starts at the end and work towards beginning.\n\n### Example\n\njulia> Prime_numbers = [1,2,3,5,7,11,13,17,19,23];\n\njulia> findall(x -> isequal(19, x), Prime_numbers)\n1-element Array{Int64,1}:\n9\n\n\nAs we can see from the above code that the first argument of the find() function. It operates on the second. But with a do block we can put the function in a do…end block construction.\n\njulia> findall(Prime_numbers) do x\nisequal(x, 19)\nend\n1-element Array{Int64,1}:\n9" ]
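As a purely illustrative recap (not part of the original tutorial), the hedged sketch below combines several of the constructs covered above — an array comprehension, enumerate(), a short-circuit && with continue, a ternary expression, and try…catch. The function name summarize_squares is hypothetical.

```julia
# Hypothetical recap function combining the flow-control constructs shown above.
function summarize_squares(values)
    squares = [v^2 for v in values]              # array comprehension
    for (i, s) in enumerate(squares)             # enumerate() yields (index, value) pairs
        iseven(s) && continue                    # boolean switching: skip even squares
        println("square #$(i) is $(s) (", s > 50 ? "large" : "small", ")")
    end
    try
        squares[length(squares) + 1]             # deliberate out-of-bounds access
    catch e
        println("caught an error and kept going: $e")
    end
    return squares
end

summarize_squares(1:5)    # prints the odd squares 1, 9, 25, then reports a caught BoundsError
```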
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.72058487,"math_prob":0.96802336,"size":9669,"snap":"2022-40-2023-06","text_gpt3_token_len":3104,"char_repetition_ratio":0.13998966,"word_repetition_ratio":0.12851627,"special_character_ratio":0.35381114,"punctuation_ratio":0.14859053,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98907447,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-28T04:01:51Z\",\"WARC-Record-ID\":\"<urn:uuid:f80c1fd6-6b76-4cad-ad14-cb4af8208340>\",\"Content-Length\":\"42785\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4511ac91-7d73-4b71-bafa-733b5571c2fd>\",\"WARC-Concurrent-To\":\"<urn:uuid:5698af32-d81c-4959-9498-20607f237f8f>\",\"WARC-IP-Address\":\"168.119.212.138\",\"WARC-Target-URI\":\"https://tutorialspoint.com/julia/julia_flow_control.htm\",\"WARC-Payload-Digest\":\"sha1:GGURJVV6E5MY4JARJUQQ4B5PPTDJ5DAH\",\"WARC-Block-Digest\":\"sha1:4BZ3KURQRERV37MMARY2OYGUY7YAZQUD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335059.43_warc_CC-MAIN-20220928020513-20220928050513-00748.warc.gz\"}"}
https://alkeshghorpade.me/post/leetcode-remove-nodes-from-linked-list
[ "", null, "", null, "# LeetCode - Remove Nodes From Linked List\n\n### Problem statement\n\nYou are given the `head` of a linked list.\n\nRemove every node which has a node with a strictly greater value anywhere to the right side of it.\n\nReturn the `head` of the modified linked list.\n\nExample 1:", null, "``````Input: head = [5, 2, 13, 3, 8]\nOutput: [13, 8]\nExplanation: The nodes that should be removed are 5, 2 and 3.\n- Node 13 is to the right of node 5.\n- Node 13 is to the right of node 2.\n- Node 8 is to the right of node 3.\n``````\n\nExample 2:\n\n``````Input: head = [1, 1, 1, 1]\nOutput: [1, 1, 1, 1]\nExplanation: Every node has value 1, so no nodes are removed.\n``````\n\nConstraints:\n\n``````- The number of the nodes in the given list is in the range [1, 10^5].\n- 1 <= Node.val <= 10^5\n``````\n\n### Explanation\n\n#### Brute Force\n\nThe easiest approach is to run two loops. The outer loop picks one node at a time. The inner loop check if there exists a node greater than the current node. If it exists, we delete the current node.\n\nThe time-complexity of the above approach is O(n^2), and the space complexity is O(1).\n\n#### Using Reverse\n\nThe time-complexity can be reduced to O(n) by reversing the linked list.\n\nThe algorithm looks as below:\n\n``````// reverse linked list method\n- set ListNode* previous = null\n\n- loop while current != null\n- set temp = current->next\ncurrent->next = previous\nprevious = current\ncurrent = temp\n- loop end\n\n- return previous\n\n// removeNodes method\n\n- set ListNode* current = reverse(head)\nint val = current->val\n\n- loop while current != null && current->next != null\n- loop while current != null && current->next != null && current->next->val < val\n- set current->next = current->next->next\n- while end\n\n- if current != null && current->next != null\n- set val = max(val, current->next->val)\ncurrent = current->next\n- if end\n- while end\n\n``````\n\nThe time-complexity of the above approach is O(n), and the space complexity is O(1).\n\nLet's check our algorithm in C++, Golang, and Javascript.\n\n#### C++ solution\n\n``````class Solution {\npublic:\nListNode* previous = NULL;\n\nwhile(current != NULL){\nListNode* temp = current->next;\ncurrent->next = previous;\nprevious = current;\ncurrent = temp;\n}\n\nreturn previous;\n}\n\n}\n\nint val = current->val;\n\nwhile(current != NULL && current->next != NULL){\nwhile(current != NULL && current->next != NULL && current->next->val < val){\ncurrent->next = current->next->next;\n}\n\nif(current != NULL && current->next != NULL){\nval = max(val, current->next->val);\ncurrent = current->next;\n}\n\n}\n\n}\n};``````\n\n#### Golang solution\n\n``````func max(a, b int) int {\nif a > b {\nreturn a\n}\n\nreturn b\n}\n\nvar previous *ListNode\n\nfor current != nil {\ntemp := current.Next\ncurrent.Next = previous\nprevious = current\ncurrent = temp\n}\n\nreturn previous\n}\n\n}\n\nval := current.Val\n\nfor current != nil && current.Next != nil {\nfor current != nil && current.Next != nil && current.Next.Val < val {\ncurrent.Next = current.Next.Next\n}\n\nif current != nil && current.Next != nil {\nval = max(val, current.Next.Val)\ncurrent = current.Next\n}\n}\n\n}``````\n\n#### Javascript solution\n\n``````var reverse = function(head) {\nlet previous = null;\n\nwhile(current) {\nlet temp = current.next;\ncurrent.next = previous;\nprevious = current;\ncurrent = temp;\n}\n\nreturn previous;\n};\n\n}\n\nlet val = current.val;\n\nwhile(current != null && current.next != null) {\nwhile(current 
!= null && current.next != null && current.next.val < val){\ncurrent.next = current.next.next;\n}\n\nif(current != null && current.next != null){\nval = Math.max(val, current.next.val);\ncurrent = current.next;\n}\n}\n\n};``````\n\nLet's dry-run our algorithm to see how the solution works.\n\n``````Input: head = [5, 2, 13, 3, 8]\n\nfalse\n\nStep 2: ListNode* current = reverse(head)\n\nreverse will return the linked list as [8, 3, 13, 2, 5]\ncurrent = 8\n\n= 8\n\nint val = current->val\n= 8\n\nStep 3: loop while current != NULL && current->next != NULL\n8 != NULL && 8->next != NULL\n8 != NULL && 3 != NULL\ntrue\n\nloop while current != NULL && current->next != NULL && current->next->val < val\n8 != NULL && 8->next != NULL && 8->next->val < 8\n8 != NULL && 3 != NULL && 3 < 8\ntrue\n\ncurrent->next = current->next->next\n8->next = 8->next->next\n8->next = 3->next\n= 13\n\nThe updated linked list is [8, 13, 2, 5]\n\nloop while current != NULL && current->next != NULL && current->next->val < val\n8 != NULL && 8->next != NULL && 8->next->val < 8\n8 != NULL && 13 != NULL && 13 < 8\nfalse\n\nif current != NULL && current->next != NULL\n8 != NULL && 13 != NULL\ntrue\n\nval = max(val, current->next->val)\n= max(8, 8->next->val)\n= max(8, 13)\n= 13\n\ncurrent = current->next\n= 8->next\n= 13\n\nStep 4: loop while current != NULL && current->next != NULL\n13 != NULL && 13->next != NULL\n13 != NULL && 2 != NULL\ntrue\n\nloop while current != NULL && current->next != NULL && current->next->val < val\n13 != NULL && 13->next != NULL && 13->next->val < 13\n13 != NULL && 2 != NULL && 2 < 8\ntrue\n\ncurrent->next = current->next->next\n13->next = 13->next->next\n13->next = 13->next\n= 2\n\nThe updated linked list is [8, 13, 5]\n\nloop while current != NULL && current->next != NULL && current->next->val < val\n13 != NULL && 13->next != NULL && 13->next->val < 13\n13 != NULL && 5 != NULL && 5 < 8\ntrue\n\ncurrent->next = current->next->next\n13->next = 13->next->next\n13->next = 5->next\n= NULL\n\nThe updated linked list is [8, 13]\n\nloop while current != NULL && current->next != NULL && current->next->val < val\n13 != NULL && 13->next != NULL\n13 != NULL && NULL != NULL\nfalse\n\nif current != NULL && current->next != NULL\n13 != NULL && 13->next != NULL\n13 != NULL && NULL != NULL\nfalse\n\nStep 5: loop while current != NULL && current->next != NULL\n13 != NULL && 13->next != NULL\n13 != NULL && NULL != NULL\nfalse" ]
[ null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27100%27%20height=%27100%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "https://alkeshghorpade.me/remove-nodes.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69404536,"math_prob":0.99344635,"size":6473,"snap":"2022-40-2023-06","text_gpt3_token_len":1851,"char_repetition_ratio":0.30592054,"word_repetition_ratio":0.3074913,"special_character_ratio":0.35872084,"punctuation_ratio":0.18864909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98238945,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-04T22:44:47Z\",\"WARC-Record-ID\":\"<urn:uuid:2e45303c-9317-4b66-a8ed-69ba38be11a0>\",\"Content-Length\":\"90159\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a0bd217b-ad86-4cd2-990e-7b6e06dd820c>\",\"WARC-Concurrent-To\":\"<urn:uuid:a2906823-1139-44db-b3f4-c08b36c14552>\",\"WARC-IP-Address\":\"76.76.21.21\",\"WARC-Target-URI\":\"https://alkeshghorpade.me/post/leetcode-remove-nodes-from-linked-list\",\"WARC-Payload-Digest\":\"sha1:EZU2CBV3UXF5MKR4YAIFGXHZUZRW2E6H\",\"WARC-Block-Digest\":\"sha1:IEYTYWXK7PCOEXWYMDAE3IXRQREUSSVV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500154.33_warc_CC-MAIN-20230204205328-20230204235328-00498.warc.gz\"}"}
https://edward-mj.com/archives/108
[ "# [NOI2005 day1 维护数列]Splay、平衡树上的线段树**\n\n【问题描述】\n\n1. 插入\n\nINSERT_posi_tot_c1_c2_…_ctot\n\n2. 删除\n\nDELETE_posi_tot\n\n3. 修改\n\nMAKE-SAME_posi_tot_c\n\ntot 个数字统一修改为 c\n\n4. 翻转\n\nREVERSE_posi_tot\n\ntot 个数字,翻转后放入原来的位置\n\n5. 求和\n\nGET-SUM_posi_tot\n\n6. 求和最\n\nMAX-SUM\n\n【输入格式】\n\n【输出格式】\n\n【输入样例】\n\n9 82 -6 3 5 1 -5 -3 6 3GET-SUM 5 4MAX-SUM INSERT 8 3 -5 7 2DELETE 12 1MAKE-SAME 3 3 2REVERSE 3 6GET-SUM 5 4MAX-SUM\n\n【输出样例】\n\n-110110\n\n【样例说明】\n\n2 -6 3 5 1 -5 -3 6 3\n\n2 -6 3 5 1 -5 -3 6 3\n\n2 -6 3 5 1 -5 -3 6 -5 7 2 3\n\n2 -6 3 5 1 -5 -3 6 -5 7 2\n\n2 -6 3 5 1 -5 -3 6 -5 7 2\n\n2 -6 2 2 2 -5 -3 6 -5 7 2\n\n2 -6 2 2 2 -5 -3 6 -5 7 2\n\n2 -6 6 -3 -5 2 2 2 -5 7 2\n\n2 -6 6 -3 -5 2 2 2 -5 7 2\n\n【评分方法】\n\n• 如果你的程序能在输出文件正确的位置上打印 GET-SUM 操作的答案,你可以得到该测试点 60%的分数;\n• 如果你的程序能在输出文件正确的位置上打印 MAX-SUM 操作的答案,你可以得到该测试点 40%的分数;\n• 以上两条的分数可以叠加,即如果你的程序正确输出所有 GET-SUM 和MAX-SUM 操作的答案,你可以得到该测试点 100%的分数。\n\n【数据规模和约定】\n\n• 你可以认为在任何时刻,数列中至少有 1 个数。\n• 输入数据一定是正确的,即指定位置的数在数列中一定存在。\n• 50%的数据中,任何时刻数列中最多含有 30 000 个数;\n• 100%的数据中,任何时刻数列中最多含有 500 000 个数。\n• 100%的数据中,任何时刻数列中任何一个数字均在[-1 000, 1 000]内。\n• 100%的数据中,M ≤20 000,插入的数字总数不超过 4 000 000 个,输入文件大小不超过 20MBytes。\n\n【题目大意】\n\n【算法分析】\n\n【其它】\n\n【CODE】\n\n#include #include using namespace std;\nconst int inf=500000000;\nint a;\n\nstruct Splaytype{\nstruct Node{\nNode *l,*r,*fa;\nint s,ms,ml,mr,sum,sn,key; // sn for samenum    mx for max_x\nbool same,rev;\n};\nNode *root,*null,sp;\nint tot;\n\nvoid update(Node *x){\nif (x==null) return;\nbool l=false,r=false;\nif (x->l!=null) {l=true; x->l->fa=x; pushdown(x->l);}\nif (x->r!=null) {r=true; x->r->fa=x; pushdown(x->r);}\nx->s=x->l->s+x->r->s+1;\nx->sum=x->l->sum+x->r->sum+x->key;\nx->ms=x->key;\nif (l) x->ms=max(x->ms,x->l->ms);\nif (r) x->ms=max(x->ms,x->r->ms);\nif (l) x->ms=max(x->ms,x->l->mr+x->key);\nif (r) x->ms=max(x->ms,x->r->ml+x->key);\nif (l && r) x->ms=max(x->ms,x->l->mr+x->r->ml+x->key);\nx->ml=x->l->sum+x->key;\nif (l) x->ml=max(x->ml,x->l->ml);\nif (r) x->ml=max(x->ml,x->l->sum+x->key+x->r->ml);\nx->mr=x->r->sum+x->key;\nif (r) x->mr=max(x->mr,x->r->mr);\nx->mr=max(x->mr,x->r->sum+x->key+x->l->mr);\n}\n\nvoid pushdown(Node *x){\nif (x==null) return;\nif (x->rev){\nx->rev=false;\nswap(x->l,x->r);\nif (x->l!=null) x->l->rev=!x->l->rev;\nif (x->r!=null) x->r->rev=!x->r->rev;\nswap(x->ml,x->mr);\n}\nif (x->same){\nx->same=false;\nif (x->l!=null) {x->l->same=true; x->l->sn=x->sn;}\nif (x->r!=null) {x->r->same=true; x->r->sn=x->sn;}\nx->key=x->sn;\nx->sum=x->s*x->key;\nx->ms=x->ml=x->mr=max(x->sum,x->key);\n}\n}\n\nvoid rotate(Node *x,char op){\nNode *y=x->fa;\nif (op==’l’){y->r=x->l; x->l=y;}\nelse{y->l=x->r; x->r=y;}\nif (y->fa->l==y) y->fa->l=x;\nelse y->fa->r=x;\nx->fa=y->fa;\nupdate(y);\nupdate(x);\n}\n\nNode* newnode(int key){\nNode *p=&sp[++tot];\np->l=null; p->r=null; p->fa=null;\np->s=1; p->ms=p->ml=p->mr=p->sum=p->key=key;\np->sn=0; p->same=false; p->rev=false;\nreturn p;\n}\n\nvoid init(){\nNode *p,*q;\ntot=0;\nnull=&sp;\nnull->l=null; null->r=null; null->fa=null;\nnull->s=null->ms=null->ml=null->mr=null->sum=null->sn=0;\nnull->same=null->rev=false; null->key=0;\nroot=newnode(-inf);\np=newnode(-inf);\nq=newnode(-inf);\nroot->l=q;\nq->l=p;\nupdate(q);\nupdate(root);\n}\n\nvoid Splay(Node *x,Node *pos){\nif (x==null || x==pos) return;\npushdown(x);\nNode *y,*z;\nwhile (x->fa!=pos){\ny=x->fa; z=y->fa;\nif (z==pos)\nif (y->l==x) rotate(x,’r’);\nelse rotate(x,’l’);\nelse if (z->l==y)\nif (y->l==x) {rotate(y,’r’); rotate(x,’r’);}\nelse {rotate(x,’l’); rotate(x,’r’);}\nelse\nif (y->r==x) {rotate(y,’l’); 
rotate(x,’l’);}\nelse {rotate(x,’r’); rotate(x,’l’);}\n}\n}\n\nNode* findpos(int pos){\nint done=0;\nNode *p=root;\nfor (;;){\npushdown(p);\nif (done+p->l->s+1==pos) return p;\nif (done+p->l->s+1                               else p=p->l;\n}\n}\n\nvoid ins(int st,int n){\nst=min(st,root->s-2);\nNode *p=findpos(st),*q=findpos(st+1),*tmp;\nSplay(p,root);\nSplay(q,p);\nfor (int i=1;i<=n;i++){\ntmp=newnode(a[i]);\nif (i        }\nfor (int i=1;i            tmp=&sp[tot-i];\nupdate(tmp);\n}\nq->l=tmp;\nupdate(q);\nupdate(p);\nupdate(root);\n}\n\nvoid del(int l,int r){\nr=min(r,root->s-2);\nNode *p=findpos(l-1),*q=findpos(r+1);\nSplay(p,root);\nSplay(q,p);\nq->l=null;\nupdate(q);\nupdate(p);\nupdate(root);\n}\n\nvoid makesame(int l,int r,int C){\nr=min(r,root->s-2);\nNode *p=findpos(l-1),*q=findpos(r+1);\nSplay(p,root);\nSplay(q,p);\nq->l->same=true;\nq->l->sn=C;\nupdate(q);\nupdate(p);\nupdate(root);\n}\n\nvoid reverse(int l,int r){\nr=min(r,root->s-2);\nNode *p=findpos(l-1),*q=findpos(r+1);\nSplay(p,root);\nSplay(q,p);\nq->l->rev=true;\nupdate(q);\nupdate(p);\nupdate(root);\n}\n\nint getsum(int l,int r){\nr=min(r,root->s-2);\nNode *p=findpos(l-1),*q=findpos(r+1);\nSplay(p,root);\nSplay(q,p);\npushdown(q->l);\nreturn q->l->sum;\n}\n}Splay;\n\nint main(){\nfreopen(\"sequence.in\",\"r\",stdin);\nfreopen(\"sequence.out\",\"w\",stdout);\nSplay.init();\nchar op; int n,m,pos;\nscanf(\"%d%d\",&n,&m);\nfor (int i=1;i<=n;i++) scanf(\"%d\",&a[i]);\nSplay.ins(1,n);\nfor (int i=1;i<=m;i++){\nscanf(\"%s\",&op);\nswitch (op){\ncase ‘I’:\nscanf(\"%d%d\",&pos,&n);\nfor (int i=1;i<=n;i++) scanf(\"%d\",&a[i]);\nSplay.ins(pos+1,n);\nbreak;\ncase ‘D’:\nscanf(\"%d%d\",&pos,&n);\nSplay.del(pos+1,pos+n);\nbreak;\ncase ‘M’:\nif (op==’K’){\nint C;\nscanf(\"%d%d%d\",&pos,&n,&C);\nSplay.makesame(pos+1,pos+n,C);\n}\nelse{\nSplay.pushdown(Splay.root);\nprintf(\"%dn\",Splay.root->ms);\n}\nbreak;\ncase ‘R’:\nscanf(\"%d%d\",&pos,&n);\nSplay.reverse(pos+1,pos+n);\nbreak;\ncase ‘G’:\nscanf(\"%d%d\",&pos,&n);\nprintf(\"%dn\",Splay.getsum(pos+1,pos+n));\nbreak;\n}\n}\n}\n\n## 加入对话\n\n1.", null, "4条评论\n\n1. cjf空间里也有这题\n\n2. 回复dikem比mutombo:cjf是谁有眼不识泰山了。。。\n\n3. 回复dikem比mutombo:额,和很多神牛没什么交集,就不认识了。World Final的都不一定知道,何况是OI的金牌。。。" ]
[ null, "https://secure.gravatar.com/avatar/", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.5588885,"math_prob":0.98701376,"size":6823,"snap":"2023-14-2023-23","text_gpt3_token_len":3834,"char_repetition_ratio":0.118932396,"word_repetition_ratio":0.119463086,"special_character_ratio":0.41755825,"punctuation_ratio":0.1835694,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9976964,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-27T00:40:22Z\",\"WARC-Record-ID\":\"<urn:uuid:10d80d5a-5030-4961-90e7-bdd2aa7b4661>\",\"Content-Length\":\"80625\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b7428539-2bff-4e13-b5c3-38b383e12a5d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a3070cd-cfc6-435c-9fe6-c48e1efc74c7>\",\"WARC-IP-Address\":\"141.193.213.10\",\"WARC-Target-URI\":\"https://edward-mj.com/archives/108\",\"WARC-Payload-Digest\":\"sha1:BNKU52WTXIXYCU5HKXFODYKIY25W3WVV\",\"WARC-Block-Digest\":\"sha1:KKHUHBFN2Z27PFLREEMQ3ZPEOZVJJBCL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296946584.94_warc_CC-MAIN-20230326235016-20230327025016-00376.warc.gz\"}"}
https://au.mathworks.com/matlabcentral/cody/problems/42644-matlab-basic-rounding-iv/solutions/1645544
[ "Cody\n\n# Problem 42644. MATLAB Basic: rounding IV\n\nSolution 1645544\n\nSubmitted on 14 Oct 2018 by Ruben Reji\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nx = -8.8; y_correct = -8; assert(isequal(round_x(x),y_correct))\n\n2   Pass\nx = -8.4; y_correct = -8; assert(isequal(round_x(x),y_correct))\n\n3   Pass\nx = 8.8; y_correct = 9; assert(isequal(round_x(x),y_correct))\n\n4   Pass\nx = 8.4; y_correct = 9; assert(isequal(round_x(x),y_correct))\n\n5   Pass\nx = 8.49; y_correct = 9; assert(isequal(round_x(x),y_correct))\n\n6   Pass\nx = 128.52; y_correct = 129; assert(isequal(round_x(x),y_correct))\n\n7   Pass\nx = pi; y_correct = 4; assert(isequal(round_x(x),y_correct))\n\n### Community Treasure Hunt\n\nFind the treasures in MATLAB Central and discover how the community can help you!\n\nStart Hunting!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.53755313,"math_prob":0.99778837,"size":822,"snap":"2020-45-2020-50","text_gpt3_token_len":275,"char_repetition_ratio":0.21393643,"word_repetition_ratio":0.042372882,"special_character_ratio":0.37469587,"punctuation_ratio":0.17575757,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99970114,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-29T10:29:32Z\",\"WARC-Record-ID\":\"<urn:uuid:f6afa831-1ad8-43d3-ab6e-d817759d835e>\",\"Content-Length\":\"82344\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:622c9b64-bfcc-4992-9baf-6ac84be4f587>\",\"WARC-Concurrent-To\":\"<urn:uuid:8025724c-e2b3-4f76-80c2-71788785032a>\",\"WARC-IP-Address\":\"184.24.72.83\",\"WARC-Target-URI\":\"https://au.mathworks.com/matlabcentral/cody/problems/42644-matlab-basic-rounding-iv/solutions/1645544\",\"WARC-Payload-Digest\":\"sha1:W56JVCM3QMVGEN3XGBNJJBICVSRDFVGK\",\"WARC-Block-Digest\":\"sha1:4JPJUBGJHKS2MTFB76UXRHERFMS7WQKU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141197593.33_warc_CC-MAIN-20201129093434-20201129123434-00567.warc.gz\"}"}
https://answers.everydaycalculation.com/percent-is/300-100
[ "Solutions by everydaycalculation.com\n\n## 300 is what percent of 100?\n\n300 of 100 is 300%\n\n#### Steps to solve \"what percent is 300 of 100?\"\n\n1. 300 of 100 can be written as:\n300/100\n2. To find percentage, we need to find an equivalent fraction with denominator 100. Multiply both numerator & denominator by 100\n\n300/100 × 100/100\n3. = (300 × 100/100) × 1/100 = 300/100\n4. Therefore, the answer is 300%\n\nIf you are using a calculator, simply enter 300÷100×100 which will give you 300 as the answer.\n\nMathStep (Works offline)", null, "Download our mobile app and learn how to work with percentages in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85455483,"math_prob":0.9962517,"size":469,"snap":"2022-27-2022-33","text_gpt3_token_len":137,"char_repetition_ratio":0.20430107,"word_repetition_ratio":0.0,"special_character_ratio":0.3880597,"punctuation_ratio":0.06315789,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99291164,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-15T21:22:56Z\",\"WARC-Record-ID\":\"<urn:uuid:447135c5-dcbb-4ba2-ba19-e7858e91460f>\",\"Content-Length\":\"6410\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aeff9eb7-2e3f-42e3-8436-b73cf5782e03>\",\"WARC-Concurrent-To\":\"<urn:uuid:bdc54a9b-578b-4893-a964-dbcf0e2ad117>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/percent-is/300-100\",\"WARC-Payload-Digest\":\"sha1:LQDUR2LOH2JM5N7GXRNNKMILMSWBQI3F\",\"WARC-Block-Digest\":\"sha1:6P7WSUQEUX3JTDIMK7TUX2PDUFLTE2NO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572212.96_warc_CC-MAIN-20220815205848-20220815235848-00471.warc.gz\"}"}
https://terraria.fandom.com/wiki/Template:Navbox-5col
[ "## FANDOM\n\n5,026 Pages\n\nExample\n{{{r1a}}} {{{r1b}}} {{{r1c}}} {{{r1d}}} {{{r1e}}}\nTemplate documentation (for the above template, sometimes hidden or invisible)\n\nThis is a navbox variant with five columns and fifteen rows. It can contain 70 items.\n\n## Use\n\n```{{navbox-5col|Title\n|h1 = Header, appears above row 1\n|r1a = Row 1 content a\n|r1b = Row 1 content b\n|r1c = Row 1 content c\n|r1d = Row 1 content d\n|r1e = Row 1 content e\n|h2 = Header, appears above row 2\n|r2a = Row 2 content a\n|r2b = Row 2 content b\n|r2c = Row 2 content c\n|r2d = Row 2 content d\n|r2e = Row 2 content e\n|r3a = Row 3 content a\n|r3b = Row 3 content b\n|r3c = Row 3 content c\n|r3d = Row 3 content d\n|r3e = Row 3 content e\n|h8 = Header, appears above row 8\n|r8a = Row 8 content a\n}}\n```\n\nResults in...\n\nTitle\nRow 1 content a Row 1 content b Row 1 content c Row 1 content d Row 1 content e\nRow 2 content a Row 2 content b Row 2 content c Row 2 content d Row 2 content e\nRow 3 content a Row 3 content b Row 3 content c Row 3 content d Row 3 content e\nRow 8 content a\n\n### Blank\n\n```{{navbox-5col|\n|h1 =\n|r1a =\n|r1b =\n|r1c =\n|r1d =\n|r1e =\n|h2 =\n|r2a =\n|r2b =\n|r2c =\n|r2d =\n|r2e =\n|h3 =\n|r3a =\n|r3b =\n|r3c =\n|r3d =\n|r3e =\n|h4 =\n|r4a =\n|r4b =\n|r4c =\n|r4d =\n|r4e =\n|h5 =\n|r5a =\n|r5b =\n|r5c =\n|r5d =\n|r5e =\n|h6 =\n|r6a =\n|r6b =\n|r6c =\n|r6d =\n|r6e =\n|h7 =\n|r7a =\n|r7b =\n|r7c =\n|r7d =\n|r7e =\n|h8 =\n|r8a =\n|r8b =\n|r8c =\n|r8d =\n|r8e =\n|h9 =\n|r9a =\n|r9b =\n|r9c =\n|r9d =\n|r9e =\n|h10 =\n|r10a =\n|r10b =\n|r10c =\n|r10d =\n|r10e =\n|h11 =\n|r11a =\n|r11b =\n|r11c =\n|r11d =\n|r11e =\n|h12 =\n|r12a =\n|r12b =\n|r12c =\n|r12d =\n|r12e =\n|h13 =\n|r13a =\n|r13b =\n|r13c =\n|r13d =\n|r13e =\n|h14 =\n|r14a =\n|r14b =\n|r14c =\n|r14d =\n|r14e =\n|h15 =\n|r15a =\n|r15b =\n|r15c =\n|r15d =\n|r15e =\n}}\n```\n\nVisit Template:Navbox-5col/doc to edit this text! (How does this work?)\nCommunity content is available under CC-BY-SA unless otherwise noted." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5270773,"math_prob":0.99570394,"size":2076,"snap":"2020-10-2020-16","text_gpt3_token_len":908,"char_repetition_ratio":0.27799228,"word_repetition_ratio":0.015317286,"special_character_ratio":0.4677264,"punctuation_ratio":0.050666668,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99915373,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-19T16:49:43Z\",\"WARC-Record-ID\":\"<urn:uuid:d651c1eb-832d-4758-b360-db4b3a0b59a3>\",\"Content-Length\":\"205481\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ce8a2371-abd3-45db-91c8-fce81abeb9a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:e6fe24b3-2f68-490a-8bb6-4e10d3e2f8a3>\",\"WARC-IP-Address\":\"151.101.192.194\",\"WARC-Target-URI\":\"https://terraria.fandom.com/wiki/Template:Navbox-5col\",\"WARC-Payload-Digest\":\"sha1:FXJQS2Q263Y6BF4VCXTQPPJF7OAOOEYJ\",\"WARC-Block-Digest\":\"sha1:OOACX3F5GGRTBEKFT7XYEGG2VPPMWVNB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875144165.4_warc_CC-MAIN-20200219153707-20200219183707-00230.warc.gz\"}"}
https://link.springer.com/article/10.12942/lrr-2005-4
[ "# Time-Delay Interferometry\n\nA Later Version of this article was published on 05 August 2014\n\n## Abstract\n\nEqual-arm interferometric detectors of gravitational radiation allow phase measurements many orders of magnitude below the intrinsic phase stability of the laser injecting light into their arms. This is because the noise in the laser light is common to both arms, experiencing exactly the same delay, and thus cancels when it is differenced at the photo detector. In this situation, much lower level secondary noises then set the overall performance. If, however, the two arms have different lengths (as will necessarily be the case with space-borne interferometers), the laser noise experiences different delays in the two arms and will hence not directly cancel at the detector. In order to solve this problem, a technique involving heterodyne interferometry with unequal arm lengths and independent phase-difference readouts has been proposed. It relies on properly time-shifting and linearly combining independent Doppler measurements, and for this reason it has been called Time-Delay Interferometry (TDI). This article provides an overview of the theory and mathematical foundations of TDI as it will be implemented by the forthcoming space-based interferometers such as the Laser Interferometer Space Antenna (LISA) mission. We have purposely left out from this first version of our “Living Review” article on TDI all the results of more practical and experimental nature, as well as all the aspects of TDI that the data analysts will need to account for when analyzing the LISA TDI data combinations. Our forthcoming “second edition” of this review paper will include these topics.\n\n## Introduction\n\nBreakthroughs in modern technology have made possible the construction of extremely large interferometers both on ground and in space for the detection and observation of gravitational waves (GWs). Several ground based detectors are being constructed or are already operational around the globe. These are the LIGO and VIRGO interferometers, which have arm lengths of 4 km and 3 km, respectively, and the GEO and TAMA interferometers with arm lengths of 600 m and 300 m, respectively. These detectors will operate in the high frequency range of GWs of ∼ 1 Hz to a few kHz. A natural limit occurs on decreasing the lower frequency cut-off of 10 Hz because it is not practical to increase the arm lengths on ground and also because of the gravity gradient noise which is difficult to eliminate below 10 Hz. However, VIRGO and future detectors such as the advanced LIGO, the proposed LCGT in Japan, and the large European detector plan to go to substantially below 10 Hz. Thus, in any case, the ground based interferometers will not be sensitive below the limiting frequency of 1 Hz. But on the other hand, in the cosmos there exist interesting astrophysical GW sources which emit GWs below this frequency such as the galactic binaries, massive and super-massive black-hole binaries, etc. If we wish to observe these sources, we need to go to lower frequencies. The solution is to build an interferometer in space, where such noises will be absent and allow the detection of GWs in the low frequency regime. LISA is a proposed mission which will use coherent laser beams exchanged between three identical spacecraft forming a giant (almost) equilateral triangle of side 5 × 106 km to observe and detect low frequency cosmic GWs. 
The ground based detectors and LISA complement each other in the observation of GWs in an essential way, analogous to the way optical, radio, X-ray, γ-ray, etc. observations do for the electromagnetic spectrum. As these detectors begin to operate, a new era of gravitational astronomy is on the horizon and a radically different view of the universe is expected to emerge.\n\nThe astrophysical sources that LISA could observe include galactic binaries, extra-galactic super-massive black-hole binaries and coalescences, and stochastic GW background from the early universe. Coalescing binaries are one of the important sources in the LISA frequency band. These include galactic and extra galactic stellar mass binaries, and massive and super-massive blackhole binaries. The frequency of the GWs emitted by such a system is twice its orbital frequency. Population synthesis studies indicate a large number of stellar mass binaries in the frequency range below 2–3 mHz [4, 17]. In the lower frequency range (≤ 1 mHz) there is a large number of such sources in each of the frequency bins. Since GW detectors are omni-directional, it is impossible to resolve an individual source. These sources effectively form a stochastic GW background referred to as binary confusion noise.\n\nMassive black-hole binaries are interesting both from the astrophysical and theoretical points of view. Coalescences of massive black holes from different galaxies after their merger during growth of the present galaxies would provide unique new information on galaxy formation. Coalescence of binaries involving intermediate mass black holes could help to understand the formation and growth of massive black holes. The super-massive black-hole binaries are strong emitters of GWs and these spectacular events can be detectable beyond red-shift of z = 1. These systems would help to determine the cosmological parameters independently. And, just as the cosmic microwave background is left over from the Big Bang, so too should there be a background of gravitational waves. Unlike electromagnetic waves, gravitational waves do not interact with matter after a few Planck times after the Big Bang, so they do not thermalize. Their spectrum today, therefore, is simply a red-shifted version of the spectrum they formed with, which would throw light on the physical conditions at the epoch of the early universe.\n\nInterferometric non-resonant detectors of gravitational radiation (with frequency content 0 < f < fu) use a coherent train of electromagnetic waves (of nominal frequency ν0fu) folded into several beams, and at one or more points where these intersect, monitor relative fluctuations of frequency or phase (homodyne detection). The observed low frequency fluctuations are due to several causes:\n\n1. 1.\n\nfrequency variations of the source of the electromagnetic signal about ν0,\n\n2. 2.\n\nrelative motions of the electromagnetic source and the mirrors (or amplifying transponders) that do the folding,\n\n3. 3.\n\ntemporal variations of the index of refraction along the beams, and\n\n4. 
4.\n\naccording to general relativity, to any time-variable gravitational fields present, such as the transverse-traceless metric curvature of a passing plane gravitational wave train.\n\nTo observe gravitational waves in this way, it is thus necessary to control, or monitor, the other sources of relative frequency fluctuations, and, in the data analysis, to use optimal algorithms based on the different characteristic interferometer responses to gravitational waves (the signal) and to the other sources (the noise) . By comparing phases of electromagnetic beams referenced to the same frequency generator and propagated along non-parallel equal-length arms, frequency fluctuations of the frequency reference can be removed, and gravitational wave signals at levels many orders of magnitude lower can be detected.\n\nIn the present single-spacecraft Doppler tracking observations, for instance, many of the noise sources can be either reduced or calibrated by implementing appropriate microwave frequency links and by using specialized electronics , so the fundamental limitation is imposed by the frequency (time-keeping) fluctuations inherent to the reference clock that controls the microwave system. Hydrogen maser clocks, currently used in Doppler tracking experiments, achieve their best performance at about 1000 s integration time, with a fractional frequency stability of a few parts in 10−16. This is the reason why these one-arm interferometers in space (which have one Doppler readout and a “3-pulse” response to gravitational waves ) are most sensitive to mHz gravitational waves. This integration time is also comparable to the microwave propagation (or “storage”) time 2L/c to spacecraft en route to the outer solar system (for example L≃ 5−8 AU for the Cassini spacecraft) .\n\nNext-generation low-frequency interferometric gravitational wave detectors in solar orbits, such as the LISA mission , have been proposed to achieve greater sensitivity to mHz gravitational waves. However, since the armlengths of these space-based interferometers can differ by a few percent, the direct recombination of the two beams at a photo detector will not effectively remove the laser frequency noise. This is because the frequency fluctuations of the laser will be delayed by different amounts within the two arms of unequal length. In order to cancel the laser frequency noise, the time-varying Doppler data must be recorded and post-processed to allow for arm-length differences . The data streams will have temporal structure, which can be described as due to many-pulse responses to δ-function excitations, depending on time-of-flight delays in the response functions of the instrumental Doppler noises and in the response to incident plane-parallel, transverse, and traceless gravitational waves.\n\nLISA will consists of three spacecraft orbiting the sun. Each spacecraft will be equipped with two lasers sending beams to the other two (∼ 0.03 AU away) while simultaneously measuring the beat frequencies between the local laser and the laser beams received from the other two spacecraft. The analysis of TDI presented in this article will assume a successful prior removal of any first-order Doppler beat notes due to relative motions , giving six residual Doppler time series as the raw data of a stationary time delay space interferometer. 
Following [27, 1, 6], we will regard LISA not as constituting one or more conventional Michelson interferometers, but rather, in a symmetrical way, a closed array of six one-arm delay lines between the test masses. In this way, during the course of the article, we will show that it is possible to synthesize new data combinations that cancel laser frequency noises, and estimate achievable sensitivities of these combinations in terms of the separate and relatively simple single arm responses both to gravitational wave and instrumental noise (cf. [27, 1, 6]).\n\nIn contrast to Earth-based interferometers, which operate in the long-wavelength limit (LWL) (arm lengths ≪ gravitational wavelength ∼ c/f0, where f0 is a characteristic frequency of the GW), LISA will not operate in the LWL over much of its frequency band. When the physical scale of a free mass optical interferometer intended to detect gravitational waves is comparable to or larger than the GW wavelength, time delays in the response of the instrument to the waves, and travel times along beams in the instrument, cannot be ignored and must be allowed for in computing the detector response used for data interpretation. It is convenient to formulate the instrumental responses in terms of observed differential frequency shifts — for short, Doppler shifts — rather than in terms of phase shifts usually used in interferometry, although of course these data, as functions of time, are interconvertible.\n\nThis first review article on TDI is organized as follows. In Section 2 we provide an overview of the physical and historical motivations of TDI. In Section 3 we summarize the one-arm Doppler transfer functions of an optical beam between two carefully shielded test masses inside each spacecraft resulting from (i) frequency fluctuations of the lasers used in transmission and reception, (ii) fluctuations due to non-inertial motions of the spacecraft, and (iii) beam-pointing fluctuations and shot noise . Among these, the dominant noise is from the frequency fluctuations of the lasers and is several orders of magnitude (perhaps 7 or 8) above the other noises. This noise must be very precisely removed from the data in order to achieve the GW sensitivity at the level set by the remaining Doppler noise sources which are at a much lower level and which constitute the noise floor after the laser frequency noise is suppressed. We show that this can be accomplished by shifting and linearly combining the twelve one-way Doppler data LISA will measure. The actual procedure can easily be understood in terms of properly defined time-delay operators that act on the one-way Doppler measurements. We develop a formalism involving the algebra of the time-delay operators which is based on the theory of rings and modules and computational commutative algebra. We show that the space of all possible interferometric combinations cancelling the laser frequency noise is a module over the polynomial ring in which the time-delay operators play the role of the indeterminates. In the literature, the module is called the module of syzygies . We show that the module can be generated from four generators, so that any data combination cancelling the laser frequency noise is simply a linear combination formed from these generators. We would like to emphasize that this is the mathematical structure underlying TDI in LISA.\n\nIn Section 4 specific interferometric combinations are then derived, and their physical interpretations are discussed. 
The expressions for the Sagnac interferometric combinations (α, β, γ, ζ) are first obtained; in particular, the symmetric Sagnac combination ζ, for which each raw data set needs to be delayed by only a single arm transit time, distinguishes itself against all the other TDI combinations by having a higher order response to gravitational radiation in the LWL when the spacecraft separations are equal. We then express the unequal-arm Michelson combinations (X, Y, Z) in terms of the α, β, γ, and ζ combinations with further transit time delays. One of these interferometric data combinations would still be available if the links between one pair of spacecraft were lost. Other TDI combinations, which rely on only four of the possible six inter-spacecraft Doppler measurements (denoted P, E, and U) are also presented. They would of course be quite useful in case of potential loss of any two inter-spacecraft Doppler measurements.\n\nTDI so formulated presumes the spacecraft-to-spacecraft light-travel-times to be constant in time, and independent from being up- or down-links. Reduction of data from moving interferometric laser arrays in solar orbit will in fact encounter non-symmetric up- and downlink light time differences that are significant, and need to be accounted for in order to exactly cancel the laser frequency fluctuations [24, 5, 25]. In Section 5 we show that, by introducing a set of non-commuting time-delay operators, there exists a quite general procedure for deriving generalized TDI combinations that account for the effects of time-dependence of the arms. Using this approach it is possible to derive “flex-free” expression for the unequal-arm Michelson combinations X1, and obtain the generalized expressions for all the TDI combinations .\n\nIn Section 6 we address the question of maximization of the LISA signal-to-noise-ratio (SNR) to any gravitational wave signal present in its data. This is done by treating the SNR as a functional over the space of all possible TDI combinations. As a simple application of the general formula we have derived, we apply our results to the case of sinusoidal signals randomly polarized and randomly distributed on the celestial sphere. We find that the standard LISA sensitivity figure derived for a single Michelson interferometer [7, 19, 21] can be improved by a factor of $$\\sqrt 2$$ in the low-part of the frequency band, and by more than $$\\sqrt 3$$ in the remaining part of the accessible band. Further, we also show that if the location of the GW source is known, then as the source appears to move in the LISA reference frame, it is possible to optimally track the source, by appropriately changing the data combinations during the course of its trajectory [19, 20]. As an example of such type of source, we consider known binaries within our own galaxy.\n\nThis first version of our “Living Review” article on TDI does not include all the results of more practical and experimental nature, as well as all the aspects of TDI that the data analysts will need to account for when analyzing the LISA TDI data combinations. Our forthcoming “second edition” of this review paper will include these topics. It is worth mentioning that, as of today, the LISA project has endorsed TDI as its baseline technique for achieving the desired sensitivity to gravitational radiation. Several experimental verifications and tests of TDI are being, and will be, performed at the NASA and ESA LISA laboratories. 
Although significant theoretical and experimental work has already been done for understanding and overcoming practical problems related to the implementation of TDI, more work on both sides of the Atlantic is still needed. Results of this undergoing effort will be included in the second edition of this living document.\n\n## Physical and Historical Motivations of TDI\n\nEqual-arm interferometer detectors of gravitational waves can observe gravitational radiation by cancelling the laser frequency fluctuations affecting the light injected into their arms. This is done by comparing phases of split beams propagated along the equal (but non-parallel) arms of the detector. The laser frequency fluctuations affecting the two beams experience the same delay within the two equal-length arms and cancel out at the photodetector where relative phases are measured. This way gravitational wave signals of dimensionless amplitude less than 10−20 can be observed when using lasers whose frequency stability can be as large as roughly a few parts in 10−13.\n\nIf the arms of the interferometer have different lengths, however, the exact cancellation of the laser frequency fluctuations, say C(t), will no longer take place at the photodetector. In fact, the larger the difference between the two arms, the larger will be the magnitude of the laser frequency fluctuations affecting the detector response. If L1 and L2 are the lengths of the two arms, it is easy to see that the amount of laser relative frequency fluctuations remaining in the response is equal to (units in which the speed of light c =1)\n\n$$\\Delta C(t) = C(t - 2{L_1}) - C(t - 2{L_2}).$$\n(1)\n\nIn the case of a space-based interferometer such as LISA, whose lasers are expected to display relative frequency fluctuations equal to about $${10^{- 13}}/\\sqrt {{\\rm{Hz}}}$$ in the mHz band, and whose arms will differ by a few percent , Equation (1) implies the following expression for the amplitude of the Fourier components of the uncancelled laser frequency fluctuations (an over-imposed tilde denotes the operation of Fourier transform):\n\n$$\\vert \\tilde{\\Delta C}(f)\\vert \\simeq \\vert \\tilde{C}(f)\\vert 4\\pi f\\vert ({L_1} - {L_2})\\vert.$$\n(2)\n\nAt f = 10−3 Hz, for instance, and assuming |L1L2| ≃ 0.5 s, the uncancelled fluctuations from the laser are equal to $$6.3 \\times {10^{- 16}}/\\sqrt {{\\rm{Hz}}}$$. Since the LISA sensitivity goal is about $${10^{- 16}}/\\sqrt {{\\rm{Hz}}}$$ in this part of the frequency band, it is clear that an alternative experimental approach for canceling the laser frequency fluctuations is needed.\n\nA first attempt to solve this problem was presented by Faller et al. [9, 11, 10], and the scheme proposed there can be understood through Figure 1. In this idealized model the two beams exiting the two arms are not made to interfere at a common photodetector. Rather, each is made to interfere with the incoming light from the laser at a photodetector, decoupling in this way the phase fluctuations experienced by the two beams in the two arms. Now two Doppler measurements are available in digital form, and the problem now becomes one of identifying an algorithm for digitally cancelling the laser frequency fluctuations from a resulting new data combination.\n\nThe algorithm they first proposed, and refined subsequently in , required processing the two Doppler measurements, say y1(t) and y2(t), in the Fourier domain. 
If we denote with h1(t), h2(t) the gravitational wave signals entering into the Doppler data y1, y2, respectively, and with n1, n2 any other remaining noise affecting y1 and y2, respectively, then the expressions for the Doppler observables y1, y2 can be written in the following form:\n\n$${y_1}(t) = C(t - 2{L_1}) - C(t) + {h_1}(t) + {n_1}(t),$$\n(3)\n$${y_2}(t) = C(t - 2{L_2}) - C(t) + {h_2}(t) + {n_2}(t).$$\n(4)\n\nFrom Equations (3, 4) it is important to note the characteristic time signature of the random process C(t) in the Doppler responses y1, y2. The time signature of the noise C(t) in y1(t), for instance, can be understood by observing that the frequency of the signal received at time t contains laser frequency fluctuations transmitted 2L1 s earlier. By subtracting from the frequency of the received signal the frequency of the signal transmitted at time t, we also subtract the frequency fluctuations C(t) with the net result shown in Equation (3).\n\nThe algorithm for cancelling the laser noise in the Fourier domain suggested in works as follows. If we take an infinitely long Fourier transform of the data y1, the resulting expression of y1 in the Fourier domain becomes (see Equation (3))\n\n$${\\tilde{y}_1}(f) = \\tilde{C}(f)[{e^{4\\pi if{L_1}}} - 1] + {\\tilde{h}_1}(f) + {\\tilde{n}_2}(f).$$\n(5)\n\nIf the arm length L1 is known exactly, we can use the $$\\tilde {{y_1}}$$ data to estimate the laser frequency fluctuations $$\\tilde C(f)$$. This can be done by dividing $${\\tilde y_1}$$ by the transfer function of the laser noise C into the observable y1 itself. By then further multiplying $${\\tilde y_1}/[{e^{4\\pi if{L_1}}} - 1]$$ by the transfer function of the laser noise into the other observable $${\\tilde y_2}$$, i.e. $$[{e^{4\\pi if{L_2}}} - 1]$$, and then subtract the resulting expression from $${\\tilde y_2}$$ one accomplishes the cancellation of the laser frequency fluctuations.\n\nThe problem with this procedure is the underlying assumption of being able to take an infinitely long Fourier transform of the data. Even if one neglects the variation in time of the LISA arms, by taking a finite length Fourier transform of, say, y1(t) over a time interval T, the resulting transfer function of the laser noise C into y1 no longer will be equal to $$[{e^{4\\pi if{L_1}}} - 1]$$. This can be seen by writing the expression of the finite length Fourier transform of y1 in the following way:\n\n$$\\tilde{y}_1^T \\equiv \\int\\nolimits_{- T}^{+ T} {{y_1}(t){e^{2\\pi ift}}dt =} \\int\\nolimits_{- \\infty}^{+ \\infty} {{y_1}(t)H(t){e^{2\\pi ift}}dt,}$$\n(6)\n\nwhere we have denoted with H(t) the function that is equal to 1 in the interval [−T, +T], and zero everywhere else. Equation (6) implies that the finite-length Fourier transform $$\\tilde y_1^T$$ of y1(t) is equal to the convolution in the Fourier domain of the infinitely long Fourier transform of y1(t), $${\\tilde y_1}$$, with the Fourier transform of H(t) (i.e. the “Sinc Function” of width 1/T). The key point here is that we can no longer use the transfer function $$[{e^{4\\pi if{L_i}}} - 1]$$, i = 1, 2, for estimating the laser noise fluctuations from one of the measured Doppler data, without retaining residual laser noise into the combination of the two Doppler data y1, y2 valid in the case of infinite integration time. The amount of residual laser noise remaining in the Fourier-based combination described above, as a function of the integration time T and type of “window function” used, was derived in the appendix of . 
There it was shown that, in order to suppress the residual laser noise below the LISA sensitivity level identified by secondary noises (such as proof-mass and optical path noises) with the use of the Fourier-based algorithm an integration time of about six months was needed.\n\nA solution to this problem was suggested in , which works entirely in the time-domain. From Equations (3, 4) we may notice that, by taking the difference of the two Doppler data y1(t), y2(t), the frequency fluctuations of the laser now enter into this new data set in the following way:\n\n$${y_1}(t) - {y_2}(t) = C(t - 2{L_1}) - C(t - 2{L_2}) + {h_1}(t) - {h_2}(t) + {n_1}(t) - {n_2}(t).$$\n(7)\n\nIf we now compare how the laser frequency fluctuations enter into Equation (7) against how they appear in Equations (3, 4), we can further make the following observation. If we time-shift the data y1(t) by the round trip light time in arm 2, y1(t − 2L2), and subtract from it the data y2(t) after it has been time-shifted by the round trip light time in arm 1, y2(t − 2L1), we obtain the following data set:\n\n$$\\begin{array}{*{20}c}{{y_1}(t - 2{L_2}) - {y_2}(t - 2{L_1}) = C(t - 2{L_1}) - C(t - 2{L_2}) + {h_1}(t - 2{L_2}) - {h_2}(t - 2{L_1})} \\\\ {\\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad + {n_1}(t - 2{L_2}) - {n_2}(t - 2{L_1}).} \\\\ \\end{array}$$\n(8)\n\nIn other words, the laser frequency fluctuations enter into y1(t) − y2(t) and y1(t − 2L2) − y2(t − 2L1) with the same time structure. This implies that, by subtracting Equation (8) from Equation (7) we can generate a new data set that does not contain the laser frequency fluctuations C(t),\n\n$$X \\equiv [{y_1}(t) - {y_2}(t)] - [{y_1}(t - 2{L_2}) - {y_2}(t - 2{L_1})].$$\n(9)\n\nThe expression above of the X combination shows that it is possible to cancel the laser frequency noise in the time domain by properly time-shifting and linearly combining Doppler measurements recorded by different Doppler readouts. This in essence is what TDI amounts to. In the following sections we will further elaborate and generalize TDI to the realistic LISA configuration.\n\n## Time-Delay Interferometry\n\nThe description of TDI for LISA is greatly simplified if we adopt the notation shown in Figure 2, where the overall geometry of the LISA detector is defined. There are three spacecraft, six optical benches, six lasers, six proof-masses, and twelve photodetectors. There are also six phase difference data going clock-wise and counter-clockwise around the LISA triangle. For the moment we will make the simplifying assumption that the array is stationary, i.e. the back and forth optical paths between pairs of spacecraft are simply equal to their relative distances [24, 5, 25, 34].\n\nSeveral notations have been used in this context. The double index notation recently employed in , where six quantities are involved, is self-evident. However, when algebraic manipulations are involved the following notation seems more convenient to use. The spacecraft are labeled 1, 2, 3 and their separating distances are denoted L1, L2, L3, with Li being opposite spacecraft i. We orient the vertices 1, 2, 3 clockwise in Figure 2. Unit vectors between spacecraft are $${{\\hat n}_i}$$, oriented as indicated in Figure 2. 
We index the phase difference data to be analyzed as follows: The beam arriving at spacecraft i has subscript i and is primed or unprimed depending on whether the beam is traveling clockwise or counter-clockwise (the sense defined here with reference to Figure 2) around the LISA triangle, respectively. Thus, as seen from the figure, s1 is the phase difference time series measured at reception at spacecraft 1 with transmission from spacecraft 2 (along L3).\n\nSimilarly, $${{s^\\prime}_1}$$ is the phase difference series derived from reception at spacecraft 1 with transmission from spacecraft 3. The other four one-way phase difference time series from signals exchanged between the spacecraft are obtained by cyclic permutation of the indices: 1 → 2 → 3 → 1. We also adopt a notation for delayed data streams, which will be convenient later for algebraic manipulations. We define the three time-delay operators $${\\mathcal D_i},i = 1,2,3$$, where for any data stream x(t)\n\n$${{\\mathcal D}_i}x(t) = x(t - {L_i}),$$\n(10)\n\nwhere Li, i = 1, 2, 3, are the light travel times along the three arms of the LISA triangle (the speed of light c is assumed to be unity in this article). Thus, for example, $${{\\mathcal D}_2}{s_1}(t) = {s_1}(t - {L_2}),{{\\mathcal D}_2}{{\\mathcal D}_3}{s_1}(t) = {s_1}(t - {L_2} - {L_3}) = {{\\mathcal D}_3}{{\\mathcal D}_2}{s_1}(t)$$, etc. Note that the operators commute here. This is because the arm lengths have been assumed to be constant in time. If the Li are functions of time then the operators no longer commute [5, 34], as will be described in Section 4. Six more phase difference series result from laser beams exchanged between adjacent optical benches within each spacecraft; these are similarly indexed as $${\\tau _i},{{\\tau ^\\prime}_i},i = 1,2,3$$. The proof-mass-plus-optical-bench assemblies for LISA spacecraft number 1 are shown schematically in Figure 3. The photo receivers that generate the data s1, $${{s^\\prime}_1}$$, τ1, and $${{\\tau^\\prime}_1}$$ at spacecraft 1 are shown. The phase fluctuations from the six lasers, which need to be cancelled, can be represented by six random processes pi, $${{p^\\prime}_i}$$, where pi, $${{p^\\prime}_i}$$ are the phases of the lasers in spacecraft i on the left and right optical benches, respectively, as shown in the figure. Note that this notation is in the same spirit as in [33, 25] in which moving spacecraft arrays have been analyzed.\n\nWe extend the cyclic terminology so that at vertex i, i = 1, 2, 3, the random displacement vectors of the two proof masses are respectively denoted by $${{\\vec \\delta}_i}(t),{{\\vec \\delta ^\\prime}_i}(t)$$, and the random displacements (perhaps several orders of magnitude greater) of their optical benches are correspondingly denoted by $${{\\vec \\Delta}_i}(t),{{\\vec \\Delta ^\\prime}_i}(t)$$ where the primed and unprimed indices correspond to the right and left optical benches, respectively. As pointed out in , the analysis does not assume that pairs of optical benches are rigidly connected, i.e. $${{\\vec \\Delta}_i} \\ne {{\\vec \\Delta ^\\prime}_i}$$, in general. The present LISA design shows optical fibers transmitting signals both ways between adjacent benches. We ignore time-delay effects for these signals and will simply denote by μi(t) the phase fluctuations upon transmission through the fibers of the laser beams with frequencies νi, and $${\\nu ^\\prime}$$. 
The μi(t) phase shifts within a given spacecraft might not be the same for large frequency differences $${\nu _i} - {{\nu ^\prime}_i}$$. For the envisioned frequency differences (a few hundred MHz), however, the remaining fluctuations due to the optical fiber can be neglected . It is also assumed that the phase noise added by the fibers is independent of the direction of light propagation through them. For ease of presentation, in what follows we will assume the center frequencies of the lasers to be the same, and denote this frequency by ν0.\n\nThe laser phase noise in $${{s^\prime}_3}$$ is therefore equal to $${\mathcal D_1}{p_2}(t) - {{p^\prime}_3}(t)$$. Similarly, since s2 is the phase shift measured on arrival at spacecraft 2 along arm 1 of a signal transmitted from spacecraft 3, the laser phase noises enter into it with the following time signature: $${\mathcal D_1}{{p^\prime}_3}(t) - {p_2}(t)$$. Figure 3 endeavors to make the detailed light paths for these observations clear. An outgoing light beam transmitted to a distant spacecraft is routed from the laser on the local optical bench using mirrors and beam splitters; this beam does not interact with the local proof mass. Conversely, an incoming light beam from a distant spacecraft is bounced off the local proof mass before being reflected onto the photo receiver, where it is mixed with light from the laser on that same optical bench. The inter-spacecraft phase data are denoted s1 and $${{s^\prime}_1}$$ in Figure 3.\n\nBeams between adjacent optical benches within a single spacecraft are bounced off proof masses in the opposite way. Light to be transmitted from the laser on an optical bench is first bounced off the proof mass it encloses and then directed to the other optical bench. Upon reception it does not interact with the proof mass there, but is directly mixed with local laser light, and again down converted. These data are denoted τ1 and $${{\tau ^\prime}_1}$$ in Figure 3.\n\nThe expressions for the si, $${{s^\prime}_i}$$ and τi, $${{\tau ^\prime}_i}$$ phase measurements can now be developed from Figures 2 and 3, and they are given below for the particular LISA configuration in which all the lasers have the same nominal frequency ν0, and the spacecraft are stationary with respect to each other. Consider the $${{s^\prime}_1}(t)$$ process (Equation (13) below). The photo receiver on the right bench of spacecraft 1, which (in the spacecraft frame) experiences a time-varying displacement $${{\vec \Delta ^\prime}_1}$$, measures the phase difference $${{s^\prime}_1}$$ by mixing the beam from the distant optical bench 3 (arriving along direction $${{\hat n}_2}$$, and carrying laser phase noise p3 and optical bench motion $${{\vec \Delta}_3}$$ delayed by propagation along L2), after one bounce off the proof mass $$({{\vec \delta ^\prime}_1})$$, with the local laser light (with phase noise $${{p^\prime}_1}$$). Since for this simplified configuration no frequency offsets are present, there is of course no need for any heterodyne conversion .\n\nIn Equation (12) the τ1 measurement results from light originating at the right-bench laser $$({{p^\prime}_1},{{\vec \Delta ^\prime}_1})$$, bounced once off the right proof mass $$({{\vec \delta ^\prime}_1})$$, and directed through the fiber (incurring phase shift μ1(t)), to the left bench, where it is mixed with laser light (p1). Similarly the right bench records the phase differences $${{s^\prime}_1}$$ and $${{\tau ^\prime}_1}$$.
The laser noises, the gravitational wave signals, the optical path noises, and proof-mass and bench noises, enter into the four data streams recorded at vertex 1 according to the following expressions :\n\n$${s_1} = s_1^{{\rm{gw}}} + s_1^{{\rm{opticalpath}}} + {{\mathcal D}_3}p_2^\prime - {p_1} + {\nu _0}\left[ {- 2{{\hat n}_3}\cdot{{\vec \delta}_1} + {{\hat n}_3}\cdot{{\vec \Delta}_1} + {{\hat n}_3}\cdot{{\mathcal D}_3}\vec \Delta _2^\prime} \right],$$\n(11)\n$${\tau _1} = p_1^{\prime} - {p_1} - 2{\nu _0}{\hat{n}_2} \cdot \left({\overrightarrow{\delta} _1^{\prime} - \overrightarrow{\Delta} _1^{\prime}} \right) + {\mu _1}.$$\n(12)\n$$s_1^\prime = s_1^{\prime{\rm{gw}}} + s_1^{\prime{\rm{opticalpath}}}+{{\mathcal D}_2}{p_3} - p_1^\prime + {\nu _0}[2{\hat n_2}\cdot\vec \delta _1^\prime - {\hat n_2}\cdot\vec \Delta _1^\prime - {\hat n_2}\cdot{{\mathcal D}_2}{\vec \Delta _3}],$$\n(13)\n$$\tau _1^{\prime} = {p_1} - p_1^{\prime} - 2{\nu _0}{\hat{n}_3} \cdot \left({{{\overrightarrow{\delta}}_1} - {{\overrightarrow{\Delta}}_1}} \right) + {\mu _1}.$$\n(14)\n\nEight other relations, for the readouts at vertices 2 and 3, are given by cyclic permutation of the indices in Equations (11, 12, 13, 14).\n\nThe gravitational wave phase signal components $$s_i^{{\rm{gw}}},s_i^{{\rm{\prime gw}}},i = 1,2,3$$, in Equations (11) and (13) are given by integrating with respect to time the Equations (1) and (2) of reference , which relate metric perturbations to optical frequency shifts. The optical path phase noise contributions $$s_i^{{\rm{opticalpath}}},s_i^{\prime {\rm{opticalpath}}}$$, which include shot noise from the low SNR in the links between the distant spacecraft, can be derived from the corresponding term given in . The τi, $$\tau _i^\prime$$ measurements will be made with high SNR so that for them the shot noise is negligible.\n\n## Algebraic Approach to Cancelling Laser and Optical Bench Noises\n\nIn ground based detectors the arms are chosen to be of equal length so that the laser light experiences identical delay in each arm of the interferometer. This arrangement precisely cancels the laser frequency/phase noise at the photodetector. The required sensitivity of the instrument can thus only be achieved by near exact cancellation of the laser frequency noise. However, in LISA it is impossible to achieve equal distances between spacecraft, and the laser noise cannot be cancelled in this way. It is nevertheless possible to combine the recorded data linearly, with suitable time-delays corresponding to the three arm lengths of the giant triangular interferometer, so that the laser phase noise is cancelled. Here we present a systematic method based on modules over polynomial rings which yields all the data combinations that cancel both the laser phase and the optical bench motion noises.\n\nWe first consider the simpler case, where we ignore the optical-bench motion noise and consider only the laser phase noise. We do this because the algebra is somewhat simpler and the method is easy to apply. The simplification amounts to physically considering each spacecraft rigidly carrying the assembly of lasers, beam-splitters, and photodetectors. The two lasers on each spacecraft could be considered to be locked, so effectively there would be only one laser on each spacecraft. This mathematically amounts to setting $${{\vec \Delta}_i} = {{\vec \Delta ^\prime}_i} = 0$$ and $${p_i} = {{p^\prime}_i}$$.
The scheme we describe here for laser phase noise can be extended in a straight-forward way to include optical bench motion noise, which we address in the last part of this section.\n\nThe data combinations, when only the laser phase noise is considered, consist of the six suitably delayed data streams (inter-spacecraft), the delays being integer multiples of the light travel times between spacecraft, which can be conveniently expressed in terms of polynomials in the three delay operators $${\mathcal D_1},{\mathcal D_2},{\mathcal D_3}$$. The laser noise cancellation condition puts three constraints on the six polynomials of the delay operators corresponding to the six data streams. The problem therefore consists of finding six-tuples of polynomials which satisfy the laser noise cancellation constraints. These polynomial tuples form a module called the module of syzygies. There exist standard methods for obtaining the module, by which we mean methods for obtaining the generators of the module so that the linear combinations of the generators generate the entire module. The procedure first consists of obtaining a Gröbner basis for the ideal generated by the coefficients appearing in the constraints. This ideal is in the polynomial ring in the variables $${\mathcal D_1},{\mathcal D_2},{\mathcal D_3}$$ over the domain of rational numbers (or integers if one gets rid of the denominators). To obtain the Gröbner basis for the ideal, one may use the Buchberger algorithm or use an application such as Mathematica . From the Gröbner basis there is a standard way to obtain a generating set for the required module. This procedure has been described in the literature [2, 16]. We thus obtain seven generators for the module. However, the method does not guarantee a minimal set, and we find that a generating set of 4 polynomial six-tuples suffices to generate the required module. Alternatively, we can obtain generating sets by using the software Macaulay 2.\n\nThe importance of obtaining more data combinations is evident: They provide the necessary redundancy — different data combinations produce different transfer functions for GWs and the system noises, so specific data combinations could be optimal for given astrophysical source parameters in the context of maximizing SNR, detection probability, improving parameter estimates, etc.\n\n### Cancellation of laser phase noise\n\nWe now only have six data streams si and $$s_i^\prime$$, where i = 1, 2, 3. These can be regarded as 3 component vectors s and s′, respectively. The six data streams with terms containing only the laser frequency noise are\n\n$$\begin{array}{*{20}c}{{s_1} = {{\mathcal D}_3}{p_2} - {p_1},} \\ {s_1^{\prime} = {{\mathcal D}_2}{p_3} - {p_1}} \\ \end{array}$$\n(15)\n\nand their cyclic permutations.\n\nNote that we have intentionally excluded from the data additional phase fluctuations due to the GW signal, and noises such as the optical-path noise, proof-mass noise, etc. Since our immediate goal is to cancel the laser frequency noise we have only kept the relevant terms. Combining the streams for cancelling the laser frequency noise will introduce transfer functions for the other noises and the GW signal. This is important and will be discussed subsequently in the article.\n\nThe goal of the analysis is to add suitably delayed beams together so that the laser frequency noise terms add up to zero. This amounts to seeking data combinations that cancel the laser frequency noise.
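Because the six streams in Equation (15) are linear in the laser noises, this search can be mimicked symbolically by treating the (commuting) delay operators as ordinary variables. The SymPy sketch below is only an illustration, not part of the original derivation: it shows that a plain difference of two one-way streams does not cancel the laser noise for unequal arms, while the fully symmetric combination (the solution ζ exhibited later in this section) does.

```python
from sympy import symbols, expand

# Commuting delay operators and laser noises treated as symbols.
D1, D2, D3, p1, p2, p3 = symbols('D1 D2 D3 p1 p2 p3')

# Laser-noise content of the six streams, Equation (15) and its cyclic permutations.
s1, s2, s3 = D3*p2 - p1, D1*p3 - p2, D2*p1 - p3
s1p, s2p, s3p = D2*p3 - p1, D3*p1 - p2, D1*p2 - p3

# A plain difference leaves laser noise behind when the arms are unequal:
print(expand(s1 - s1p))          # D3*p2 - D2*p3

# The fully symmetric combination with q = -(D1, D2, D3), q' = (D1, D2, D3)
# (the zeta solution quoted below) cancels it identically:
zeta = -D1*s1 - D2*s2 - D3*s3 + D1*s1p + D2*s2p + D3*s3p
print(expand(zeta))              # 0
```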
In the notation/formalism that we have invoked, the delay is obtained by applying the operators $${\mathcal D_k}$$ to the beams si and $$s_i^\prime$$. A delay of k1L1 + k2L2 + k3L3 is represented by the operator $${\mathcal D}_1^{{k_1}}{\mathcal D}_2^{{k_2}}{\mathcal D}_3^{{k_3}}$$ acting on the data, where k1, k2, and k3 are integers. In general a polynomial in $${{\mathcal D}_k}$$, which is a polynomial in three variables, applied to, say, s1 combines the same data stream s1(t) with different time-delays of the form k1L1 + k2L2 + k3L3. This notation conveniently rephrases the problem. One must find six polynomials, say $${q_i}({{\mathcal D}_1},{{\mathcal D}_2},{{\mathcal D}_3}),q_i^\prime({{\mathcal D}_1},{{\mathcal D}_2},{{\mathcal D}_3}),i = 1,2,3$$, such that\n\n$$\sum\limits_{i = 1}^3 {{q_i}{s_i} + q_i^{\prime}s_i^{\prime} = 0.}$$\n(16)\n\nThe zero on the right-hand side of the above equation signifies zero laser phase noise.\n\nIt is useful to express Equation (15) in matrix form. This allows us to obtain a matrix operator equation whose solutions are q and q′, where qi and $$q_i^\prime$$ are written as column vectors. We can similarly express si, $$s_i^\prime$$, pi as column vectors s, s′, p, respectively. In matrix form Equation (15) becomes\n\n$${\bf{s}} = {{\bf{D}}^T} \cdot {\bf{p}},\quad {\bf{s}}^{\prime} = {\bf{D}} \cdot {\bf{p}},$$\n(17)\n\nwhere D is a 3 × 3 matrix given by\n\n$${\bf{D}} = \left({\begin{array}{*{20}c}{- 1\;\;0\;\;{{\mathcal D}_2}} \\ {{{\mathcal D}_3}\;\; - 1\;\;0} \\ {0\;\;{{\mathcal D}_1}\;\; - 1} \\ \end{array}} \right).$$\n(18)\n\nThe exponent ‘T’ represents the transpose of the matrix. Equation (16) becomes\n\n$${{\bf{q}}^T}\cdot{\bf{s}} + {\rm{}}{{\bf{q}}^{\prime T}} \cdot {{\bf{s}}^\prime} = ({{\bf{q}}^T} \cdot {{\bf{D}}^T} + {\rm{}}{{\bf{q}}^{\prime T}}\cdot{\bf{D}}) \cdot {\bf{p}} = 0,$$\n(19)\n\nwhere we have taken care to put p on the right-hand side of the operators. Since the above equation must be satisfied for an arbitrary vector p, we obtain a matrix equation for the polynomials (q, q′):\n\n$${{\bf{q}}^T} \cdot {{\bf{D}}^T} + {{\bf{q}}^{\prime T}} \cdot {\bf{D}} = 0.$$\n(20)\n\nNote that since the $${{\mathcal D}_k}$$ commute, the order in writing these operators is unimportant. In mathematical terms, the polynomials form a commutative ring.\n\n### Cancellation of laser phase noise in the unequal-arm interferometer\n\nThe use of commutative algebra is very conveniently illustrated with the help of the simpler example of the unequal-arm interferometer. Here there are only two arms instead of three as we have for LISA, and the mathematics is much simpler, so it is easy to see both physically and mathematically how commutative algebra can be applied to this problem of laser phase noise cancellation. The procedure is well known for the unequal-arm interferometer, but here we will describe the same method in terms of the delay operators that we have introduced.\n\nLet Φ(t) denote the laser phase noise entering the laser cavity as shown in Figure 4. Consider this light Φ(t) making a round trip around arm 1 whose length we take to be L1. If we interfere this phase with the incoming light we get the phase Φ1(t), where\n\n$${\phi _1}(t) = \phi (t - 2{L_1})-\phi (t) \equiv ({\mathcal D}_1^2 - 1)\phi (t).$$\n(21)\n\nThe second expression is written in terms of the delay operators. This makes the procedure transparent as we shall see.
We can do the same for the arm 2 to get another phase Φ2(t), where\n\n$${\phi _2}(t) = \phi (t - 2{L_2}) - \phi (t) \equiv ({\mathcal D}_2^2 - 1)\phi (t).$$\n(22)\n\nClearly, if L1 ≠ L2, then the difference in phase Φ2(t) − Φ1(t) is not zero and the laser phase noise does not cancel out. However, if one further delays the phases Φ1(t) and Φ2(t) and constructs the following combination,\n\n$$X(t) = [{\phi _2}(t - 2{L_1}) - {\phi _2}(t)] - [{\phi _1}(t - 2{L_2}) - {\phi _1}(t)],$$\n(23)\n\nthen the laser phase noise does cancel out. We have already encountered this combination at the end of Section 2. It was first proposed by Tinto and Armstrong in .\n\nThe cancellation of laser frequency noise becomes obvious from the operator algebra in the following way. In the operator notation,\n\n$$\begin{array}{*{20}c}{X(t) = ({\mathcal D}_1^2 - 1){\phi _2}(t) - ({\mathcal D}_2^2 - 1){\phi _1}(t)} \\ {\quad \quad = [({\mathcal D}_1^2 - 1)({\mathcal D}_2^2 - 1) - ({\mathcal D}_2^2 - 1)({\mathcal D}_1^2 - 1)]\phi (t)} \\{\quad \quad = 0.} \\\end{array}$$\n(24)\n\nFrom this one immediately sees that just the commutativity of the operators has been used to cancel the laser phase noise. The basic idea was to compute the lowest common multiple (L.C.M.) of the polynomials $$\mathcal D_1^2 - 1$$ and $$\mathcal D_2^2 - 1$$ (in this case the L.C.M. is just the product, because the polynomials are relatively prime) and use this fact to construct X(t) in which the laser phase noise is cancelled. The operation is shown physically in Figure 4.\n\nThe notions of commutativity of polynomials, L.C.M., etc. belong to the field of commutative algebra. In fact we will be using the notion of a Gröbner basis which is in a sense the generalization of the notion of the greatest common divisor (gcd). Since LISA has three spacecraft and six inter-spacecraft beams, the problem is technically more complex than for the unequal-arm interferometer, but in principle it is the same as in this simpler case. Thus the simple operations which were performed here to obtain a laser noise free combination X(t) are not sufficient and more sophisticated methods need to be adopted from the field of commutative algebra. We address this problem in the forthcoming text.\n\n### The module of syzygies\n\nEquation (20) has non-trivial solutions. Several solutions have been exhibited in [1, 7]. We merely mention these solutions here; in the forthcoming text we will discuss them in detail. The solution ζ is given by $$- {{\bf{q}}^T} = {{\bf{q}}^{\prime T}} = ({{\mathcal D}_1},{{\mathcal D}_2},{{\mathcal D}_3})$$. The solution α is described by $${{\bf{q}}^T} = - (1,{{\mathcal D}_3},{{\mathcal D}_1}{{\mathcal D}_3})$$ and $${{\bf{q}}^{\prime T}} = (1,{{\mathcal D}_1}{{\mathcal D}_2},{{\mathcal D}_2})$$. The solutions β and γ are obtained from α by cyclically permuting the indices of $${{\mathcal D}_k}$$, q, and q′. These solutions are important, because they consist of polynomials with lowest possible degrees and thus are simple. Other solutions containing higher degree polynomials can be generated conveniently from these solutions. Since the system of equations is linear, linear combinations of these solutions are also solutions to Equation (20).\n\nHowever, it is important to realize that we do not have a vector space here. Three independent constraints on a six-tuple do not produce a space which is necessarily generated by three basis elements.
This conclusion would follow if the solutions formed a vector space but they do not. The polynomial six-tuple q, q′ can be multiplied by polynomials in $${{\mathcal D}_1},{{\mathcal D}_2},{{\mathcal D}_3}$$ (scalars) which do not form a field. Thus the inverse in general does not exist within the ring of polynomials. We therefore have a module over the ring of polynomials in the three variables $${{\mathcal D}_1},{{\mathcal D}_2},{{\mathcal D}_3}$$. First we present the general methodology for obtaining the solutions to equations of this type, and then apply it to Equation (20).\n\nThere are three linear constraints on the polynomials given by Equation (20). Since the equations are linear, the solution space is a submodule of the module of six-tuples of polynomials. The module of six-tuples is a free module, i.e. it has six basis elements that not only generate the module but are linearly independent. A natural choice of the basis is fm = (0, …, 1,…, 0) with 1 in the m-th place and 0 everywhere else; m runs from 1 to 6. The definitions of generation (spanning) and linear independence are the same as that for vector spaces. A free module is essentially like a vector space. But our interest lies in its submodule which need not be free and need not have just three generators as it would seem if we were dealing with vector spaces.\n\nThe problem at hand is of finding the generators of this submodule, i.e. any element of the submodule should be expressible as a linear combination of the generating set. In this way the generators are capable of spanning the full submodule or generating the submodule. In order to achieve our goal, we rewrite Equation (20) explicitly component-wise:\n\n$$\begin{array}{*{20}c}{{q_1} + q_1^{\prime} - {{\mathcal D}_3}q_2^{\prime} - {{\mathcal D}_2}{q_3} = 0,} \\ {{q_2} + q_2^{\prime} - {{\mathcal D}_1}q_3^{\prime} - {{\mathcal D}_3}{q_1} = 0,} \\ {{q_3} + q_3^{\prime} - {{\mathcal D}_2}q_1^{\prime} - {{\mathcal D}_1}{q_2} = 0.} \\ \end{array}$$\n(25)\n\nThe first step is to use Gaussian elimination to obtain q1 and q2 in terms of $${q_3},q_1^\prime,q_2^\prime,q_3^\prime$$,\n\n$$\begin{array}{*{20}c}{{q_1} = - q_1^{\prime} + {{\mathcal D}_3}q_2^{\prime} + {{\mathcal D}_2}{q_3},} \\ {{q_2} = - q_2^{\prime} + {{\mathcal D}_1}q_3^{\prime} + {{\mathcal D}_3}{q_1}} \\ {\quad = - {{\mathcal D}_3}q_1^{\prime} - (1 - {\mathcal D}_3^2)q_2^{\prime} + {{\mathcal D}_1}q_3^{\prime} + {{\mathcal D}_2}{{\mathcal D}_3}{q_3},} \\ \end{array}$$\n(26)\n\nand then substitute these values in the third equation to obtain a linear implicit relation between $${q_3},q_1^\prime, q_2^\prime, q_3^\prime$$. We then have:\n\n$$(1 - {{\mathcal D}_1}{{\mathcal D}_2}{{\mathcal D}_3}){q_3} + ({{\mathcal D}_1}{{\mathcal D}_3} - {{\mathcal D}_2})q_1^{\prime} + {{\mathcal D}_1}(1 - {\mathcal D}_3^2)q_2^{\prime} + (1 - {\mathcal D}_1^2)q_3^{\prime} = 0.$$\n(27)\n\nObtaining solutions to Equation (27) amounts to solving the problem since the remaining polynomials q1, q2 have been expressed in terms of $${q_3},q_1^\prime, q_2^\prime, q_3^\prime$$ in Equation (26). Note that we cannot carry on the Gaussian elimination process any further, because none of the polynomial coefficients appearing in Equation (27) have an inverse in the ring.\n\nWe will assume that the polynomials have rational coefficients, i.e. the coefficients belong to $$\mathcal Q$$, the field of the rational numbers.
The set of polynomials forms a ring — the polynomial ring in three variables, which we denote by $${\mathcal R} = \mathcal Q[{{\mathcal D}_1},{{\mathcal D}_2},{{\mathcal D}_3}]$$. The polynomial vector $$({q_3},q_1^\prime,q_2^\prime,q_3^\prime) \in {{\mathcal R}^4}$$. The set of solutions to Equation (27) is just the kernel of the homomorphism $$\varphi:{{\mathcal R}^4} \to {\mathcal R}$$, where the polynomial vector $$({q_3},q_1^\prime,q_2^\prime,q_3^\prime)$$ is mapped to the polynomial $$(1 - {{\mathcal D}_1}{{\mathcal D}_2}{{\mathcal D}_3}){q_3} + ({{\mathcal D}_1}{{\mathcal D}_3} - {{\mathcal D}_2})q_1^\prime + {{\mathcal D}_1}(1 - {\mathcal D}_3^2)q_2^\prime + (1 - {\mathcal D}_1^2)q_3^\prime$$. Thus the solution space $$\ker \varphi$$ is a submodule of $${{\mathcal R}^4}$$. It is called the module of syzygies. The generators of this module can be obtained from standard methods available in the literature. We briefly outline the method given in the books by Becker et al. , and Kreuzer and Robbiano below. The details have been included in Appendix A.\n\n### Gröbner basis\n\nThe first step is to obtain the Gröbner basis for the ideal $${\mathcal U}$$ generated by the coefficients in Equation (27):\n\n$${u_1} = 1 - {{\mathcal D}_1}{{\mathcal D}_2}{{\mathcal D}_3},\quad {u_2} = {{\mathcal D}_1}{{\mathcal D}_3} - {{\mathcal D}_2},\quad {u_3} = {{\mathcal D}_1}(1 - {\mathcal D}_3^2),\quad {u_4} = 1 - {\mathcal D}_1^2.$$\n(28)\n\nThe ideal $${\mathcal U}$$ consists of linear combinations of the form ∑ viui where vi, i = 1,…,4 are polynomials in the ring $${\mathcal R}$$. There can be several sets of generators for $${\mathcal U}$$. A Gröbner basis is a set of generators which is ‘small’ in a specific sense.\n\nThere are several ways to look at the theory of Gröbner bases. One way is the following: Suppose we are given polynomials g1, g2,…,gm in one variable over say $${\mathcal Q}$$ and we would like to know whether another polynomial f belongs to the ideal generated by the g’s. A good way to decide the issue would be to first compute the gcd g of g1, g2, …,gm and check whether f is a multiple of g. One can achieve this by doing the long division of f by g and checking whether the remainder is zero. All this is possible because $${\mathcal Q}[x]$$ is a Euclidean domain and also a principal ideal domain (PID) wherein any ideal is generated by a single element. Therefore we have essentially just one polynomial — the gcd — which generates the ideal generated by g1, g2,…,gm. The ring of integers or the ring of polynomials in one variable over any field are examples of PIDs whose ideals are generated by single elements. However, when we consider more general rings (not PIDs) like the one we are dealing with here, we do not have a single gcd but a set of several polynomials which generates an ideal in general. A Gröbner basis of an ideal can be thought of as a generalization of the gcd. In the univariate case, the Gröbner basis reduces to the gcd.\n\nGröbner basis theory generalizes these ideas to rings of multivariate polynomials, which are neither Euclidean domains nor PIDs. Since there is in general not a single generator for an ideal, Gröbner basis theory comes up with the idea of dividing a polynomial by a set of polynomials, the set of generators of the ideal, so that by successive divisions of the given polynomial by the polynomials in this generating set, the remainder becomes zero.
Clearly, every generating set of polynomials need not possess this property. Those special generating sets that do possess this property (and they exist!) are called Gröbner bases. In order for a division to be carried out in a sensible manner, an order must be put on the ring of polynomials, so that the final remainder after every division is strictly smaller than each of the divisors in the generating set. A natural order exists on the ring of integers or on the polynomial ring $${\mathcal Q}[x]$$; the degree of the polynomial decides the order in $${\mathcal Q}[x]$$. However, even for polynomials in two variables there is no natural order a priori (is x2 + y greater or smaller than x + y2?). But one can, by hand as it were, put an order on such a ring by saying $$x \gg y$$, where ≫ is an order, called the lexicographical order. We follow this type of order, $${{\mathcal D}_1} \gg {{\mathcal D}_2} \gg {{\mathcal D}_3}$$, and order polynomials by considering their highest degree terms. It is possible to put different orderings on a given ring which then produce different Gröbner bases. Clearly, a Gröbner basis must have ‘small’ elements so that division is possible and every element of the ideal when divided by the Gröbner basis elements leaves zero remainder, i.e. every element modulo the Gröbner basis reduces to zero.\n\nIn the literature, there exists a well-known algorithm called the Buchberger algorithm which may be used to obtain the Gröbner basis for a given set of polynomials in the ring. So a Gröbner basis of $${\mathcal U}$$ can be obtained from the generators ui given in Equation (28) using this algorithm. It is essentially again a generalization of the usual long division that we perform on univariate polynomials. More conveniently, we prefer to use the well known application Mathematica. Mathematica yields a 3-element Gröbner basis $${\mathcal G}$$ for $${\mathcal U}$$:\n\n$${\mathcal G} = \{{\mathcal D}_3^2 - 1,{\mathcal D}_2^2 - 1,{{\mathcal D}_1} - {{\mathcal D}_2}{{\mathcal D}_3}\}.$$\n(29)\n\nOne can easily check that all the ui of Equation (28) are linear combinations of the polynomials in $${\mathcal G}$$ and hence $${\mathcal G}$$ generates $${\mathcal U}$$. One also observes that the elements look ‘small’ in the order mentioned above. However, one can satisfy oneself that $${\mathcal G}$$ is a Gröbner basis by using the standard methods available in the literature. One method consists of computing the S-polynomials (see Appendix A) for all the pairs of the Gröbner basis elements and checking whether these reduce to zero modulo $${\mathcal G}$$.\n\nThis Gröbner basis of the ideal $${\mathcal U}$$ is then used to obtain the generators for the module of syzygies. Note that although the Gröbner basis depends on the order we choose among the $${{\mathcal D}_k}$$, the module itself is independent of the order .\n\n### Generating set for the module of syzygies\n\nThe generating set for the module is obtained by further following the procedure in the literature [2, 16]. The details are given in Appendix A, specifically for our case. We obtain 7 generators for the module. These generators do not form a minimal set and there are relations between them; in fact this method does not guarantee a minimal set of generators. These generators can be expressed as linear combinations of α, β, γ, ζ and also in terms of X(1), X(2), X(3), X(4) given below in Equation (30).
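The Gröbner basis quoted above in Equation (29) can be reproduced with any computer algebra system; the text used Mathematica, and the short SymPy sketch below is only an independent cross-check (the basis elements may be printed in a different order):

```python
from sympy import symbols, groebner

D1, D2, D3 = symbols('D1 D2 D3')

# The ideal U of Equation (28).
u = [1 - D1*D2*D3, D1*D3 - D2, D1*(1 - D3**2), 1 - D1**2]

# Lexicographic order with D1 >> D2 >> D3, as in the text.
G = groebner(u, D1, D2, D3, order='lex')
print(G.exprs)   # [D1 - D2*D3, D2**2 - 1, D3**2 - 1], i.e. Equation (29)
```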
The importance in obtaining the 7 generators is that the standard theorems guarantee that these 7 generators do in fact generate the required module. Therefore, from this proven set of generators we can check whether a particular set is in fact a generating set. We present several generating sets below.\n\nAlternatively, we may use a software package called Macaulay 2 which directly calculates the generators given the Equations (25). Using Macaulay 2, we obtain six generators. Again, Macaulay’s algorithm does not yield a minimal set; we can express the last two generators in terms of the first four. Below we list this smaller set of four generators in the order $$X = ({q_1},{q_2},{q_3},{q^\\prime}_1,{q^\\prime}_2,{q^\\prime}_3)$$\n\n$$\\begin{array}{*{20}c}{{X^{(1)}} = ({{\\mathcal D}_2} - {{\\mathcal D}_1}{{\\mathcal D}_3},0,1 - {\\mathcal D}_3^2,0,{{\\mathcal D}_2}{{\\mathcal D}_3} - {{\\mathcal D}_1},{\\mathcal D}_3^2 - 1),} \\\\ {{X^{(2)}} = (- {{\\mathcal D}_1}, - {{\\mathcal D}_2}, - {{\\mathcal D}_3},{{\\mathcal D}_1},{{\\mathcal D}_2},{{\\mathcal D}_3}),} \\\\ {{X^{(3)}} = (- 1, - {{\\mathcal D}_3}, - {{\\mathcal D}_1}{{\\mathcal D}_3},1,{{\\mathcal D}_1}{{\\mathcal D}_2},{{\\mathcal D}_2}),} \\\\ {{X^{(4)}} = (- {{\\mathcal D}_1}{{\\mathcal D}_2}, - 1, - {{\\mathcal D}_1},{{\\mathcal D}_3},1,{{\\mathcal D}_2}{{\\mathcal D}_3}).} \\\\ \\end{array}$$\n(30)\n\nNote that the last three generators are just X(2) = ζ, X(3) = α, X(4) = β. An extra generator X(1) is needed to generate all the solutions.\n\nAnother set of generators which may be useful for further work is a Gröbner basis of a module. The concept of a Gröbner basis of an ideal can be extended to that of a Gröbner basis of a submodule of (K[x1, x2,…xn])m where K is a field, since a module over the polynomial ring can be considered as generalization of an ideal in a polynomial ring. Just as in the case of an ideal, a Gröbner basis for a module is a generating set with special properties. For the module under consideration we obtain a Gröbner basis using Macaulay 2:\n\n$$\\begin{array}{*{20}c}{{G^{(1)}} = (- {{\\mathcal D}_1}, - {{\\mathcal D}_2}, - {{\\mathcal D}_3},{{\\mathcal D}_1},{{\\mathcal D}_2},{{\\mathcal D}_3}),} \\\\ {{G^{(2)}} = ({{\\mathcal D}_2} - {{\\mathcal D}_1}{{\\mathcal D}_3},0,1 - {\\mathcal D}_3^2,0,{{\\mathcal D}_2}{{\\mathcal D}_3} - {{\\mathcal D}_1},{\\mathcal D}_3^2 - 1),} \\\\ {{G^{(3)}} = (- {{\\mathcal D}_1}{{\\mathcal D}_2}, - 1, - {{\\mathcal D}_1},{{\\mathcal D}_3},1,{{\\mathcal D}_2}{{\\mathcal D}_3}),} \\\\ {{G^{(4)}} = (- 1, - {{\\mathcal D}_3}, - {{\\mathcal D}_1}{{\\mathcal D}_3},1,{{\\mathcal D}_1}{{\\mathcal D}_2},{{\\mathcal D}_2}),} \\\\ {{G^{(5)}} = ({{\\mathcal D}_3}(1 - {\\mathcal D}_1^2),{\\mathcal D}_3^2 - 1,0,0,1 - {\\mathcal D}_1^2,{{\\mathcal D}_1}({\\mathcal D}_3^2 - 1)).} \\\\ \\end{array}$$\n(31)\n\nNote that in this Gröbner basis G(1)= ζ = X(2), G(2) = X(1),G(3) = β = X(4), G(4) = α = X(3). Only G(5) is the new generator.\n\nAnother set of generators are just α, β, γ, and ζ. This can be checked using Macaulay 2, or one can relate α, β, γ, and ζ to the generators X(A), A= 1, 2, 3, 4, by polynomial matrices. In Appendix B, we express the 7 generators we obtained following the literature, in terms of α, β, γ, and ζ. Also we express α, β, γ, and ζ in terms of X(A). This proves that all these sets generate the required module of syzygies.\n\nThe question now arises as to which set of generators we should choose which facilitates further analysis. 
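Whichever generating set one prefers, it is straightforward to confirm by direct substitution that the four tuples of Equation (30) satisfy the laser-noise-cancellation constraints of Equation (25). The following SymPy check is a sketch added for illustration, not part of the original derivation:

```python
from sympy import symbols, expand

D1, D2, D3 = symbols('D1 D2 D3')

# Generators of Equation (30), written as (q1, q2, q3, q1', q2', q3').
X1 = (D2 - D1*D3, 0, 1 - D3**2, 0, D2*D3 - D1, D3**2 - 1)
X2 = (-D1, -D2, -D3, D1, D2, D3)          # zeta
X3 = (-1, -D3, -D1*D3, 1, D1*D2, D2)      # alpha
X4 = (-D1*D2, -1, -D1, D3, 1, D2*D3)      # beta

def constraints(q1, q2, q3, q1p, q2p, q3p):
    """Left-hand sides of the three conditions in Equation (25)."""
    return (q1 + q1p - D3*q2p - D2*q3,
            q2 + q2p - D1*q3p - D3*q1,
            q3 + q3p - D2*q1p - D1*q2)

for X in (X1, X2, X3, X4):
    assert all(expand(c) == 0 for c in constraints(*X))
print("all four generators cancel the laser noise")
```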
The analysis is simplified if we choose a smaller number of generators. Also we would prefer low degree polynomials to appear in the generators so as to avoid cancellation of leading terms in the polynomials. By these two criteria we may choose X(A) or α, β, γ, ζ. However, α, β, γ, ζ possess the additional property that this set is left invariant under a cyclic permutation of indices 1, 2, 3. It is found that this set is more convenient to use because of this symmetry.\n\n### Canceling optical bench motion noise\n\nThere are now twelve Doppler data streams which have to be combined in an appropriate manner in order to cancel the noise from the laser as well as from the motion of the optical benches. As in the previous case of cancelling laser phase noise, here too, we keep the relevant terms only, namely those terms containing laser phase noise and optical bench motion noise. We then have the following expressions for the four data streams on spacecraft 1:\n\n$${s_1} = {{\\mathcal D}_3}\\left[ {p_2^{\\prime} + {\\nu _0}{{\\hat{\\bf{n}}}_3} \\cdot \\bar{\\Delta}_2^{\\prime}} \\right] - \\left[ {{p_1} - {\\nu _0}{{\\hat{\\bf{n}}}_3} \\cdot {{\\bar{\\Delta}}_1}} \\right],$$\n(32)\n$$s_1^{\\prime} = {{\\mathcal D}_2}\\left[ {{p_3} - {\\nu _0}{{\\hat{\\bf{n}}}_2} \\cdot {{\\bar{\\Delta}}_3}} \\right] - \\left[ {p_1^{\\prime} + {\\nu _0}{{\\hat{\\bf{n}}}_2} \\cdot \\bar{\\Delta}_1^{\\prime}} \\right],$$\n(33)\n$${\\tau _1} = p_1^{\\prime} - {p_1} + 2{\\nu _0}{\\hat{\\bf{n}}_2} \\cdot \\bar{\\Delta}_1^{\\prime} + {\\mu _1},$$\n(34)\n$$\\tau _1^{\\prime} = {p_1} - p_1^{\\prime} - 2{\\nu _0}{\\hat{\\bf{n}}_3} \\cdot {\\bar{\\Delta}_1} + {\\mu _1}.$$\n(35)\n\nThe other eight data streams on spacecraft 2 and 3 are obtained by cyclic permutations of the indices in the above equations. 
In order to simplify the derivation of the expressions cancelling the optical bench noises, we note that, by subtracting Equation (35) from Equation (34), we can rewrite the resulting expression (and those obtained from it by permutation of the spacecraft indices) in the following form:\n\n$${z_1} \equiv {1 \over 2}({\tau _1} - \tau _1^{\prime}) = \phi _1^{\prime} - {\phi _1},$$\n(36)\n\nwhere $${\phi _1}^\prime$$, Φ1 are defined as\n\n$$\begin{array}{*{20}c}{\phi _1^{\prime} \equiv p_1^{\prime} + {\nu _0}{{\hat{\bf{n}}}_2} \cdot \bar{\Delta}_1^{\prime},} \\{{\phi _1} \equiv {p_1} - {\nu _0}{{\hat{\bf{n}}}_3} \cdot {{\bar{\Delta}}_1}.} \\\end{array}$$\n(37)\n\nThe importance in defining these combinations is that the expressions for the data streams si, $${s_i}^\prime$$ simplify into the following form:\n\n$$\begin{array}{*{20}c}{{s_1} \equiv {{\mathcal D}_3}\phi _2^{\prime} - {\phi _1},} \\ {s_1^{\prime} \equiv {{\mathcal D}_2}{\phi _3} - \phi _1^{\prime}.} \\\end{array}$$\n(38)\n\nIf we now combine the si, $${s_i}^\prime$$, and zi in the following way,\n\n$${\eta _1} \equiv {s_1} - {{\mathcal D}_3}{z_2} = {{\mathcal D}_3}{\phi _2} - {\phi _1},\quad {\eta _{1^{\prime}}} \equiv {s_{1^{\prime}}} + {z_1} = {{\mathcal D}_2}{\phi _3} - {\phi _1},$$\n(39)\n$${\eta _2} \equiv {s_2} - {{\mathcal D}_1}{z_3} = {{\mathcal D}_1}{\phi _3} - {\phi _2},\quad {\eta _{2^{\prime}}} \equiv {s_{2^{\prime}}} + {z_2} = {{\mathcal D}_3}{\phi _1} - {\phi _2},$$\n(40)\n$${\eta _3} \equiv {s_3} - {{\mathcal D}_2}{z_1} = {{\mathcal D}_2}{\phi _1} - {\phi _3},\quad {\eta _{3^{\prime}}} \equiv {s_{3^{\prime}}} + {z_3} = {{\mathcal D}_1}{\phi _2} - {\phi _3},$$\n(41)\n\nwe have just reduced the problem of cancelling the six laser and six optical bench noises to the equivalent problem of removing the three random processes Φ1, Φ2, and Φ3 from the six linear combinations ηi, $${\eta _i}^\prime$$ of the one-way measurements si, $${s_i}^\prime$$, and zi. By comparing the equations above to Equation (15) for the simpler configuration with only three lasers, analyzed in the previous Sections 4.1 to 4.4, we see that they are identical in form.\n\n### Physical interpretation of the TDI combinations\n\nIt is important to notice that the four interferometric combinations (α, β, γ, ζ), which can be used as a basis for generating the entire TDI space, are actually synthesized Sagnac interferometers. This can be seen by rewriting the expression for α, for instance, in the following form,\n\n$$\alpha = [{\eta _{1^{\prime}}} + {{\mathcal D}_2}{\eta _{3^{\prime}}} + {{\mathcal D}_1}{{\mathcal D}_2}{\eta _{2^{\prime}}}] - [{\eta _1} + {{\mathcal D}_3}{\eta _2} + {{\mathcal D}_1}{{\mathcal D}_3}{\eta _3}],$$\n(42)\n\nand noticing that the first square bracket on the right-hand side of Equation (42) contains a combination of one-way measurements describing a light beam propagating clockwise around the array, while the other terms in the second square-bracket give the equivalent of another beam propagating counter-clockwise around the constellation.\n\nContrary to α, β, and γ, ζ cannot be visualized as the difference (or interference) of two synthesized beams.
However, it should still be regarded as a Sagnac combination since there exists a time-delay relationship between it and α, β, and γ :\n\n$$\varsigma - {{\mathcal D}_1}{{\mathcal D}_2}{{\mathcal D}_3}\varsigma = {{\mathcal D}_1}\alpha - {{\mathcal D}_2}{{\mathcal D}_3}\alpha + {{\mathcal D}_2}\beta - {{\mathcal D}_3}{{\mathcal D}_1}\beta + {{\mathcal D}_3}\gamma - {{\mathcal D}_1}{{\mathcal D}_2}\gamma.$$\n(43)\n\nAs a consequence of the time-structure of this relationship, ζ has been called the Symmetrized Sagnac combination.\n\nBy using the four generators, it is possible to construct several other interferometric combinations, such as the unequal-arm Michelson (X, Y, Z), the Beacons (P, Q, R), the Monitors (E, F, G), and the Relays (U, V, W). Contrary to the Sagnac combinations, these only use four of the six data combinations ηi, $${\eta _i}^\prime$$. For this reason they have obvious utility in the event of selected subsystem failures .\n\nThese observables can be written in terms of the Sagnac observables (α, β, γ, ζ) in the following way,\n\n$$\begin{array}{*{20}c}{{{\mathcal D}_1}X = {{\mathcal D}_2}{{\mathcal D}_3}\alpha - {{\mathcal D}_2}\beta - {{\mathcal D}_3}\gamma + \varsigma,} \\ {\quad P = \varsigma - {{\mathcal D}_1}\alpha,} \\ {\quad E = \alpha - {{\mathcal D}_1}\varsigma,} \\ {\quad U = {{\mathcal D}_1}\gamma - \beta,} \\\end{array}$$\n(44)\n\nas it is easy to verify by substituting the expressions for the Sagnac combinations into the above equations. Their physical interpretations are schematically shown in Figure 5.\n\nIn the case of the combination X, in particular, by writing it in the following form ,\n\n$$X = [(\eta _1^{\prime} + {{\mathcal D}_{2^{\prime}}}{\eta _3}) + {{\mathcal D}_{2^{\prime}}}{{\mathcal D}_2}({\eta _1} + {{\mathcal D}_3}\eta _2^{\prime})] - [({\eta _1} + {{\mathcal D}_3}\eta _2^{\prime}) + {{\mathcal D}_3}{{\mathcal D}_{3^{\prime}}}(\eta _1^{\prime} + {{\mathcal D}_{2^{\prime}}}{\eta _3})],$$\n(45)\n\none can notice (as pointed out in and ) that this combination can be visualized as the difference of two sums of phase measurements, each corresponding to a specific light path from a laser onboard spacecraft 1 having phase noise Φ1. The first square-bracket term in Equation (45) represents a synthesized light-beam transmitted from spacecraft 1 and made to bounce once at spacecraft 2 and 3, respectively. The second square-bracket term instead corresponds to another beam also originating from the same laser, experiencing the same overall delay as the first beam, but bouncing off spacecraft 3 first and then spacecraft 2. When they are recombined they will cancel the laser phase fluctuations exactly, having both experienced the same total delay (assuming stationary spacecraft). The X combination should therefore be regarded as the response of a zero-area Sagnac interferometer.\n\n## Time-Delay Interferometry with Moving Spacecraft\n\nThe rotational motion of the LISA array results in a difference of the light travel times in the two directions around a Sagnac circuit [24, 5]. Two time delays along each arm must be used, say $${L_i}^\prime$$ and Li for clockwise or counter-clockwise propagation, respectively, as they enter into any of the TDI combinations.
Furthermore, since Li and $${L_i}^\prime$$ not only differ from one another but can be time dependent (they “flex”), it was shown that the “first generation” TDI combinations do not completely cancel the laser phase noise (at least with present laser stability requirements), which can enter at a level above the secondary noises. For LISA, and assuming $${\dot L_i} \simeq 10\,{\rm{m/s}}$$ , the estimated magnitude of the remaining frequency fluctuations from the laser can be about 30 times larger than the level set by the secondary noise sources in the center of the frequency band. In order to solve this potential problem, it has been shown that there exist new TDI combinations that are immune to first order shearing (flexing, or constant rate of change of delay times). These combinations can be derived by using the time-delay operator formalism introduced in the previous Section 4, although one has to keep in mind that now these operators no longer commute .\n\nIn order to derive the new, “flex-free” TDI combinations we will start by taking specific combinations of the one-way data entering in each of the expressions derived in the previous Section 4. These combinations are chosen in such a way so as to retain only one of the three noises Φi, i = 1, 2, 3, if possible. In this way we can then implement an iterative procedure based on the use of these basic combinations and of time-delay operators, to cancel the laser noises after dropping terms that are quadratic in $$\dot L/c$$ or linear in the accelerations. This iterative time-delay method, to first order in the velocity, is illustrated abstractly as follows. Given a function of time Ψ = Ψ(t), time delay by Li is now denoted either with the standard comma notation or by applying the delay operator $${{\mathcal D}_i}$$ introduced in the previous Section 4,\n\n$${{\mathcal D}_i}\Psi = {\Psi _{,i}} \equiv \Psi (t - {L_i}(t)).$$\n(46)\n\nWe then impose a second time delay Lj(t):\n\n$$\begin{array}{*{20}c}{{{\mathcal D}_j}{{\mathcal D}_i}\Psi = {\Psi _{;ij}} \equiv \Psi (t - {L_j}(t) - {L_i}(t - {L_j}(t)))} \\{\quad \quad \quad \quad \quad \;\; \simeq \Psi (t - {L_j}(t) - {L_i}(t) + {{\dot L}_i}(t){L_j})} \\{\quad \quad \quad \quad \quad \;\; \simeq {\Psi _{,ij}} + {{\dot \Psi}_{,ij}}{{\dot L}_i}{L_j}.} \\\end{array}$$\n(47)\n\nA third time delay Lk(t) gives\n\n$$\begin{array}{*{20}c}{{{\mathcal D}_k}{{\mathcal D}_j}{{\mathcal D}_i}\Psi = {\Psi _{;ijk}} \equiv \Psi (t - {L_k}(t) - {L_j}(t - {L_k}(t)) - {L_i}(t - {L_k}(t) - {L_j}(t - {L_k}(t))))} \\ {\quad \quad \quad \quad \quad \quad \;\; \simeq {\Psi _{,ijk}} + {{\dot \Psi}_{,ijk}}\left[ {{{\dot L}_i}({L_j} + {L_k}) + {{\dot L}_j}{L_k}} \right],} \\\end{array}$$\n(48)\n\nand so on, recursively; each delay generates a first-order correction proportional to its rate of change times the sum of all delays coming after it in the subscripts. Commas have now been replaced with semicolons, to remind us that we consider moving arrays. When the sum of these corrections to the terms of a data combination vanishes, the combination is called flex-free.\n\nAlso, note that each delay operator $${{\mathcal D}_i}$$ has a unique inverse $$D_i^{- 1}$$, whose expression can be derived by requiring that $$D_i^{- 1}{{\mathcal D}_i} = I$$, and neglecting quadratic and higher order velocity terms.
Its action on a time series Ψ(t) is\n\n$$D_i^{- 1}\Psi (t) \equiv \Psi (t + {L_i}(t + {L_i})).$$\n(49)\n\nNote that this is not like an advance operator one might expect, since it advances not by Li(t) but rather by Li(t + Li).\n\n### The unequal-arm Michelson\n\nThe unequal-arm Michelson combination relies on the four measurements η1, η1′, η2′, and η3. Note that the two combinations η1 + η2′,3, η1′ + η3,2′ represent the two synthesized two-way data measured onboard spacecraft 1, and can be written in the following form,\n\n$${\eta _1} + {\eta _{2^{\prime},3}} = ({{\mathcal D}_3}{{\mathcal D}_{3^{\prime}}} - I){\phi _1},$$\n(50)\n$${\eta _{1^{\prime}}} + {\eta _{3,2^{\prime}}} = ({{\mathcal D}_{2^{\prime}}}{{\mathcal D}_2} - I){\phi _1},$$\n(51)\n\nwhere I is the identity operator. Since in the stationary case any pair of these operators commutes, i.e. $${{\mathcal D}_i}{{\mathcal D}_{j^\prime}} - {{\mathcal D}_{j^\prime}}{{\mathcal D}_i} = 0$$, from Equations (50, 51) it is easy to derive the following expression for the unequal-arm interferometric combination X which eliminates Φ1:\n\n$$X = [{{\mathcal D}_{2^{\prime}}}{{\mathcal D}_2} - I]({\eta _1} + {\eta _{2^{\prime},3}}) - [{{\mathcal D}_3}{{\mathcal D}_{3^{\prime}}} - I]({\eta _{1^{\prime}}} + {\eta _{3,2^{\prime}}}).$$\n(52)\n\nIf, on the other hand, the time-delays depend on time, the expression of the unequal-arm Michelson combination above no longer cancels Φ1. In order to derive the new expression for the unequal-arm interferometer that accounts for “flexing”, let us first consider the following two combinations of the one-way measurements entering into the X observable given in Equation (52):\n\n$$[({\eta _{1^{\prime}}} + {\eta _{3;2^{\prime}}}) + {({\eta _1} + {\eta _{2^{\prime};3}})_{;22^{\prime}}}] = [{D_{2^{\prime}}}{D_2}{D_3}{D_{3^{\prime}}} - I]{\phi _1},$$\n(53)\n$$[({\eta _1} + {\eta _{2^{\prime};3}}) + {({\eta _{1^{\prime}}} + {\eta _{3;2^{\prime}}})_{;3^{\prime}3}}] = [{D_3}{D_{3^{\prime}}}{D_{2^{\prime}}}{D_2} - I]{\phi _1}.$$\n(54)\n\nUsing Equations (53, 54) we can apply the delay technique again to finally derive the following expression for the new unequal-arm Michelson combination X1 that accounts for the flexing effect:\n\n$$\begin{array}{*{20}c}{{X_1} = [{D_2}{D_{2^{\prime}}}{D_{3^{\prime}}}{D_3} - I][({\eta _{21}} + {\eta _{12;3^{\prime}}}) + {{({\eta _{31}} + {\eta _{13;2}})}_;}_{33^{\prime}}]} \\ {\quad \quad \; - [{D_{3^{\prime}}}{D_3}{D_2}{D_{2^{\prime}}} - I][({\eta _{31}} + {\eta _{13;2}}) + {{({\eta _{21}} + {\eta _{12;3^{\prime}}})}_;}_{2^{\prime}2}].} \\ \end{array}$$\n(55)\n\nAs usual, X2 and X3 are obtained by cyclic permutation of the spacecraft indices. This expression is readily shown to be laser-noise-free to first order of spacecraft separation velocities $${\dot L_i}$$: it is “flex-free”.\n\n### The Sagnac combinations\n\nIn the above Section 5.1 we have used the same symbol X for the unequal-arm Michelson combination for both the rotating (i.e. constant delay times) and stationary cases. This emphasizes that, for this TDI combination (and, as we will see below, also for all the combinations including only four links) the forms of the equations do not change going from systems at rest to the rotating case.
One needs only distinguish between the time-of-flight variations in the clockwise and counter-clockwise senses (primed and unprimed delays).\n\nIn the case of the Sagnac variables (α, β, γ, ζ), however, this is not the case as it is easy to understand on simple physical grounds. In the case of α, for instance, light originating from spacecraft 1 is simultaneously sent around the array on clockwise and counter-clockwise loops, and the two returning beams are then recombined. If the array is rotating, the two beams experience a different delay (the Sagnac effect), preventing the noise Φ1 from cancelling in the α combination.\n\nIn order to find the solution to this problem let us first rewrite α in such a way as to explicitly emphasize what it does: it attempts to remove the same fluctuations affecting two beams that have been made to propagate clockwise and counter-clockwise around the array,\n\n$$\alpha = [{\eta _{1^{\prime}}} + {{\mathcal D}_{2^{\prime}}}{\eta _{3^{\prime}}} + {{\mathcal D}_{1^{\prime}}}{{\mathcal D}_{2^{\prime}}}{\eta _{2^{\prime}}}] - [{\eta _1} + {{\mathcal D}_3}{\eta _2} + {{\mathcal D}_1}{{\mathcal D}_3}{\eta _3}],$$\n(56)\n\nwhere we have accounted for clockwise and counter-clockwise light delays. It is straight-forward to verify that this combination no longer cancels the laser and optical bench noises. If, however, we expand the two terms inside the square-brackets on the right-hand side of Equation (56) we find that they are equal to\n\n$$[{\eta _{1^{\prime}}} + {{\mathcal D}_{2^{\prime}}}{\eta _{3^{\prime}}} + {{\mathcal D}_{1^{\prime}}}{{\mathcal D}_{2^{\prime}}}{\eta _{2^{\prime}}}] = [{{\mathcal D}_{2^{\prime}}}{{\mathcal D}_{1^{\prime}}}{{\mathcal D}_{3^{\prime}}} - I]{\phi _1}$$\n(57)\n$$[{\eta _1} + {{\mathcal D}_3}{\eta _2} + {{\mathcal D}_1}{{\mathcal D}_3}{\eta _3}] = [{{\mathcal D}_3}{{\mathcal D}_1}{{\mathcal D}_2} - I]{\phi _1}.$$\n(58)\n\nIf we now apply our iterative scheme to the combinations given in Equations (57, 58) we finally get the expression for the Sagnac combination α1 that is unaffected by laser noise in presence of rotation,\n\n$${\alpha _1} = [{{\mathcal D}_3}{{\mathcal D}_1}{{\mathcal D}_2} - I][{\eta _{1^{\prime}}} + {{\mathcal D}_{2^{\prime}}}{\eta _{3^{\prime}}} + {{\mathcal D}_{1^{\prime}}}{{\mathcal D}_{2^{\prime}}}{\eta _{2^{\prime}}}] - [{{\mathcal D}_{2^{\prime}}}{{\mathcal D}_{1^{\prime}}}{{\mathcal D}_{3^{\prime}}} - I][{\eta _1} + {{\mathcal D}_3}{\eta _2} + {{\mathcal D}_1}{{\mathcal D}_3}{\eta _3}].$$\n(59)\n\nIf the delay-times are also time-dependent, we find that the residual laser noise remaining in the combination α1 is actually equal to\n\n$${\dot \phi _{1,1231^{\prime}2^{\prime}3^{\prime}}}\left[ {\left({{{\dot L}_1} + {{\dot L}_2} + {{\dot L}_3}} \right)\left({L_1^{\prime} + L_2^{\prime} + L_3^{\prime}} \right) - \left({\dot L_1^{\prime} + \dot L_2^{\prime} + \dot L_3^{\prime}} \right)({L_1} + {L_2} + {L_3})} \right].$$\n(60)\n\nFortunately, although first order in the relative velocities, the residual is small, as it involves the difference of the clockwise and counter-clockwise rates of change of the propagation delays on the same circuit. For LISA, the remaining laser phase noises in αi, i = 1, 2, 3, are several orders of magnitude below the secondary noises.\n\nIn the case of ζ, however, the rotation of the array breaks the symmetry and therefore its uniqueness.
However, there still exist three generalized TDI laser-noise-free data combinations that have properties very similar to ζ, and which can be used for the same scientific purposes . These combinations, which we call (ζ1, ζ2, ζ3), can be derived by applying again our time-delay operator approach.\n\nLet us consider the following combination of the ηi, ηj′ measurements, each being delayed only once :\n\n$${\eta _{3,3}} - {\eta _{3^{\prime},3}} + {\eta _{1,1^{\prime}}} = [{D_3}{D_2} - {D_{1^{\prime}}}]{\phi _1},$$\n(61)\n$${\eta _{1^{\prime},1}} - {\eta _{2,2^{\prime}}} + {\eta _{2^{\prime},2^{\prime}}} = [{D_{3^{\prime}}}{D_{2^{\prime}}} - {D_1}]{\phi _1},$$\n(62)\n\nwhere we have used the commutativity property of the delay operators in order to cancel the Φ2 and Φ3 terms. Since both sides of the two equations above contain only the Φ1 noise, ζ1 is given by the following expression:\n\n$${\varsigma _1} = [{D_{3^{\prime}}}{D_{2^{\prime}}} - {D_1}]({\eta _{31,1^{\prime}}} - {\eta _{32,2}} + {\eta _{12,2}}) - [{D_2}{D_3} - {D_{1^{\prime}}}]({\eta _{13,3^{\prime}}} - {\eta _{23,3^{\prime}}} + {\eta _{21,1}}).$$\n(63)\n\nIf the light-times in the arms are equal in the clockwise and counter-clockwise senses (e.g. no rotation) there is no distinction between primed and unprimed delay times. In this case, ζ1 is related to our original symmetric Sagnac ζ by $${\varsigma _1} = {\varsigma _{,23}} - {\varsigma _{,1}}$$. Thus for the practical LISA case (arm length difference < 1%), the SNR of ζ1 will be the same as the SNR of ζ.\n\nIf the delay-times also change with time, the perfect cancellation of the laser noises is no longer achieved in the (ζ1, ζ2, ζ3) combinations. However, it has been shown in that the magnitude of the residual laser noises in these combinations is significantly smaller than the LISA secondary system noises, making their effects entirely negligible.\n\nThe expressions for the Monitor, Beacon, and Relay combinations, accounting for the rotation and flexing of the LISA array, have been derived in the literature by applying the time-delay iterative procedure highlighted in this section. The interested reader is referred to that paper for details.\n\nA mathematical formulation of the “second generation” TDI, which generalizes the one presented in Section 4 for the stationary LISA, still needs to be derived. In the case when only the Sagnac effect is considered (and the delay-times remain constant in time) the mathematical formulation of Section 4 can be extended in a straight-forward way where now the six time-delays $${{\mathcal D}_i}$$ and $${\mathcal D}_{_i}^\prime$$ must be taken into account. The polynomial ring is now in these six variables and the corresponding module of syzygies can be constructed over this enlarged polynomial ring . However, when the arms are allowed to flex, that is, the operators themselves are functions of time, the operators no longer commute. One must then resort to non-commutative Gröbner basis methods. We will investigate this mathematical problem in the near future.\n\n## Optimal LISA Sensitivity\n\nAll the above interferometric combinations have been shown to individually have rather different sensitivities , as a consequence of their different responses to gravitational radiation and system noises.
Since LISA has the capability of simultaneously observing a gravitational wave signal with many different interferometric combinations (all having different antenna patterns and noises), we should no longer regard LISA as a single detector system but rather as an array of gravitational wave detectors working in coincidence. This suggests that the presently adopted LISA sensitivity could be improved by optimally combining elements of the TDI space.\n\nBefore proceeding with this idea, however, let us consider again the so-called “second generation” TDI Sagnac observables: (α1, α2, α3). The expressions of the gravitational wave signal and the secondary noise sources entering into α1 will in general be different from those entering into α, the corresponding Sagnac observable derived under the assumption of a stationary LISA array [1, 7]. However, the other remaining, secondary noises in LISA are so much smaller, and the rotation and systematic velocities in LISA are so intrinsically small, that index permutation may still be done for them . It is therefore easy to derive the following relationship between the signal and secondary noises in α1, and those entering into the stationary TDI combination α [25, 34],\n\n$${\\alpha _1}(t) \\simeq \\alpha (t) - \\alpha (t - {L_1} - {L_2} - {L_3}),$$\n(64)\n\nwhere Li, i = 1, 2, 3, are the unequal-arm lengths of the stationary LISA array. Equation (64) implies that any data analysis procedure and algorithm that will be implemented for the second-generation TDI combinations can actually be derived by considering the corresponding “first generation” TDI combinations. For this reason, from now on we will focus our attention on the gravitational wave responses of the first-generation TDI observables (α, β, γ, ζ).\n\nAs a consequence of these considerations, we can still regard (α, β, γ, ζ) as the generators of the TDI space, and write the most general expression for an element of the TDI space, η(f), as a linear combination of the Fourier transforms of the four generators $$(\\tilde \\alpha, \\tilde \\beta, \\tilde \\gamma, \\tilde \\zeta)$$,\n\n$$\\eta (f) \\equiv {a_1}(f,\\vec \\lambda) \\tilde \\alpha (f) + {a_2}(f,\\vec \\lambda) \\tilde \\beta (f) + {a_3}(f,\\vec \\lambda) \\tilde \\gamma (f) + {a_4}(f,\\vec \\lambda) \\tilde \\varsigma (f),$$\n(65)\n\nwhere the $$\\{{a_i}(f,\\vec \\lambda)\\} _{i = 1}^4$$ are arbitrary complex functions of the Fourier frequency f, and of a vector $${\\vec \\lambda}$$ containing parameters characterizing the gravitational wave signal (source location in the sky, waveform parameters, etc.) and the noises affecting the four responses (noise levels, their correlations, etc.). For a given choice of the four functions $$\\{{a_i}\\} _{i = 1}^4$$, η gives an element of the functional space of interferometric combinations generated by (α, β, γ, ζ). 
Our goal is therefore to identify, for a given gravitational wave signal, the four functions $$\\{{a_i}\\} _{i = 1}^4$$ that maximize the signal-to-noise ratio $${\\rm{SNR}}_\\eta ^2$$ of the combination η,\n\n$${\\rm{SNR}}_\\eta ^2 = \\int\\nolimits_{{f_1}}^{{f_{\\rm{u}}}} {{{{{\\left\\vert {{a_1}{{\\tilde \\alpha}_{\\rm{s}}} + {a_2}{{\\tilde \\beta}_{\\rm{s}}} + {a_3}{{\\tilde \\gamma}_{\\rm{s}}} + {a_4}{{\\tilde \\varsigma}_{\\rm{s}}}} \\right\\vert}^2}} \\over {\\left\\langle {{{\\left\\vert {{a_1}{{\\tilde \\alpha}_{\\rm{n}}} + {a_2}{{\\tilde \\beta}_{\\rm{n}}} + {a_3}{{\\tilde \\gamma}_{\\rm{n}}} + {a_4}{{\\tilde \\varsigma}_{\\rm{n}}}} \\right\\vert}^2}} \\right\\rangle}}df.}$$\n(66)\n\nIn Equation (66) the subscripts s and n refer to the signal and the noise parts of $$(\\tilde \\alpha, \\tilde \\beta, \\tilde \\gamma, \\tilde \\zeta)$$, respectively, the angle brackets represent noise ensemble averages, and the interval of integration (f1, fu) corresponds to the frequency band accessible by LISA.\n\nBefore proceeding with the maximization of the $${\\rm{SNR}}_\\eta ^2$$ we may notice from Equation (43) that the Fourier transform of the totally symmetric Sagnac combination, $$\\tilde \\zeta$$, multiplied by the transfer function $$1 - {e^{2\\pi if({L_1} + {L_2} + {L_3})}}$$ can be written as a linear combination of the Fourier transforms of the remaining three generators $$(\\tilde \\alpha, \\tilde \\beta, \\tilde \\gamma)$$. Since the signal-to-noise ratio of η and $$(1 - {e^{2\\pi if({L_1} + {L_2} + {L_3})}})\\eta$$ are equal, we may conclude that the optimization of the signal-to-noise ratio of η can be performed only on the three observables α, β, γ. This implies the following redefined expression for $${\\rm{SNR}}_\\eta ^2$$:\n\n$${\\rm{SNR}}_\\eta ^2 = \\int\\nolimits_{{f_1}}^{{f_{\\rm{u}}}} {{{{{\\left\\vert {{a_1}{{\\tilde \\alpha}_{\\rm{s}}} + {a_2}{{\\tilde \\beta}_{\\rm{s}}} + {a_3}{{\\tilde \\gamma}_{\\rm{s}}}} \\right\\vert}^2}} \\over {\\left\\langle {{{\\left\\vert {{a_1}{{\\tilde \\alpha}_{\\rm{n}}} + {a_2}{{\\tilde \\beta}_{\\rm{n}}} + {a_3}{{\\tilde \\gamma}_{\\rm{n}}}} \\right\\vert}^2}} \\right\\rangle}}df.}$$\n(67)\n\nThe $${\\rm{SNR}}_\\eta ^2$$, can be regarded as a functional over the space of the three complex functions $$\\{{a_i}\\} _{i = 1}^3$$, and the particular set of complex functions that extremize it can of course be derived by solving the associated set of Euler-Lagrange equations.\n\nIn order to make the derivation of the optimal SNR easier, let us first denote by x(s) and x(n) the two vectors of the signals $$({\\tilde \\alpha _{\\rm{s}}},{\\tilde \\beta _{\\rm{s}}},\\tilde {{\\gamma _{\\rm{s}}}})$$ and the noises $$({\\tilde \\alpha _{\\rm{n}}},{\\tilde \\beta _{\\rm{n}}},\\tilde {{\\gamma _{\\rm{n}}}})$$, respectively. 
Let us also define a to be the vector of the three functions $$\{{a_i}\} _{i = 1}^3$$, and denote with C the Hermitian, non-singular, correlation matrix of the vector random process x(n),\n\n$${({\bf{C}})_{rt}} \equiv \left\langle {{\bf{x}}_r^{(n)}{\bf{x}}_t^{(n)\ast}} \right\rangle.$$\n(68)\n\nIf we finally define (A)ij to be the components of the Hermitian matrix $${\rm{x}}_i^{({\rm{s}})}{\rm{x}}_j^{({\rm{s}})^\ast}$$, we can rewrite $${\rm{SNR}}_\eta ^2$$ in the following form,\n\n$${\rm{SNR}}_\eta ^2 = \int\nolimits_{{f_1}}^{{f_{\rm{u}}}} {{{{{\bf{a}}_i}{{\bf{A}}_{ij}}{\bf{a}}_j^\ast} \over {{{\bf{a}}_r}{{\bf{C}}_{rt}}{\bf{a}}_t^\ast}}df,}$$\n(69)\n\nwhere we have adopted the usual convention of summation over repeated indices. Since the noise correlation matrix C is non-singular, and the integrand is positive definite or null, the stationary values of the signal-to-noise ratio will be attained at the stationary values of the integrand, which are given by solving the following set of equations (and their complex conjugated expressions):\n\n$${\partial \over {\partial {{\bf{a}}_k}}}\left[ {{{{{\bf{a}}_i}{{\bf{A}}_{ij}}{\bf{a}}_j^\ast} \over {{{\bf{a}}_r}{{\bf{C}}_{rt}}{\bf{a}}_t^\ast}}} \right] = 0,\quad k = 1,2,3.$$\n(70)\n\nAfter taking the partial derivatives, Equation (70) can be rewritten in the following form,\n\n$${({{\bf{C}}^{- 1}})_{ir}}{({\bf{A}})_{rj}}{({\bf{a}}^\ast)_j} = \left[ {{{{{\bf{a}}_p}{{\bf{A}}_{pq}}{\bf{a}}_q^\ast} \over {{{\bf{a}}_l}{{\bf{C}}_{lm}}{\bf{a}}_m^\ast}}} \right]{({\bf{a}}^\ast)_i},\quad i = 1,2,3,$$\n(71)\n\nwhich tells us that the stationary values of the signal-to-noise ratio of η are equal to the eigenvalues of the matrix C−1 · A. The result in Equation (71) is well known in the theory of quadratic forms, and it is called Rayleigh’s principle [18, 23].\n\nIn order now to identify the eigenvalues of the matrix C−1 · A, we first notice that the 3 × 3 matrix A has rank 1. This implies that the matrix C−1 · A also has rank 1, as it is easy to verify. Therefore two of its three eigenvalues are equal to zero, while the remaining non-zero eigenvalue represents the solution we are looking for.\n\nThe analytic expression of the third eigenvalue can be obtained by using the property that the trace of the 3 × 3 matrix C−1 · A is equal to the sum of its three eigenvalues, and in our case to the eigenvalue we are looking for. From these considerations we derive the following expression for the optimized signal-to-noise ratio $${\rm{SNR}}_{\eta {\rm{opt}}}^2$$:\n\n$${\rm{SNR}}_{\eta \;{\rm{opt}}}^2 = \int\nolimits_{{f_1}}^{{f_{\rm{u}}}} {{\bf{x}}_i^{({\rm{s}})\ast}{{({{\bf{C}}^{- 1}})}_{ij}}{\bf{x}}_j^{({\rm{s}})}df.}$$\n(72)\n\nWe can summarize the results derived in this section, which are given by Equations (67, 72), in the following way:\n\n1.\n\nAmong all possible interferometric combinations LISA will be able to synthesize with its four generators α, β, γ, ζ, the particular combination giving maximum signal-to-noise ratio can be obtained by using only three of them, namely (α, β, γ).
2.\n\nThe expression of the optimal signal-to-noise ratio given by Equation (72) implies that LISA should be regarded as a network of three interferometer detectors of gravitational radiation (of responses (α, β, γ)) working in coincidence [12, 21].\n\n### General application\n\nAs an application of Equation (72), here we calculate the sensitivity that LISA can reach when observing sinusoidal signals uniformly distributed on the celestial sphere and of random polarization. In order to calculate the optimal signal-to-noise ratio we will also need to use a specific expression for the noise correlation matrix C. As a simplification, we will assume the LISA arm lengths to be equal to their nominal value L = 16.67 s, the optical-path noises to be equal and uncorrelated to each other, and finally the proof-mass noises to also be equal, uncorrelated to each other and to the optical-path noises. Under these assumptions the correlation matrix becomes real, its three diagonal elements are equal, and all the off-diagonal terms are equal to each other, as it is easy to verify by direct calculation. The noise correlation matrix C is therefore uniquely identified by two real functions Sα and Sαβ in the following way:\n\n$${\bf{C}} = \left({\begin{array}{*{20}c} {{S_\alpha}} & {{S_{\alpha \beta}}} & {{S_{\alpha \beta}}} \\ {{S_{\alpha \beta}}} & {{S_\alpha}} & {{S_{\alpha \beta}}} \\ {{S_{\alpha \beta}}} & {{S_{\alpha \beta}}} & {{S_\alpha}} \\ \end{array}} \right).$$\n(73)\n\nThe expression of the optimal signal-to-noise ratio assumes a rather simple form if we diagonalize this correlation matrix by properly “choosing a new basis”. There exists an orthogonal transformation of the generators $$(\tilde \alpha, \tilde \beta, \tilde \gamma)$$ which will transform the optimal signal-to-noise ratio into the sum of the signal-to-noise ratios of the “transformed” three interferometric combinations. The expressions of the three eigenvalues $$\{{\mu _i}\} _{i = 1}^3$$ (which are real) of the noise correlation matrix C can easily be found by using the algebraic manipulator Mathematica, and they are equal to\n\n$${\mu _1} = {\mu _2} = {S_\alpha} - {S_{\alpha \beta}},\quad {\mu _3} = {S_\alpha} + 2{S_{\alpha \beta}}.$$\n(74)\n\nNote that two of the three real eigenvalues, (μ1, μ2), are equal. This implies that the eigenvector associated with μ3 is orthogonal to the two-dimensional eigenspace associated with the degenerate eigenvalue μ1 = μ2, while any chosen pair of eigenvectors corresponding to μ1 will not necessarily be orthogonal. This inconvenience can be avoided by choosing an arbitrary set of vectors in this two-dimensional space, and by ortho-normalizing them. After some simple algebra, we have derived the following three ortho-normalized eigenvectors:\n\n$${{\bf{v}}_1} = {1 \over {\sqrt 2}}(- 1,0,1)\quad {{\bf{v}}_2} = {1 \over {\sqrt 6}}(1, - 2,1)\quad {{\bf{v}}_3} = {1 \over {\sqrt 3}}(1,1,1).$$\n(75)\n\nEquation (75) implies the following three linear combinations of the generators $$(\tilde \alpha, \tilde \beta, \tilde \gamma)$$:\n\n$$A = {1 \over {\sqrt 2}}(\tilde {\gamma} - \tilde {\alpha})\quad E = {1 \over {\sqrt 6}}\left({\tilde {\alpha} - 2\tilde {\beta} + \tilde {\gamma}} \right)\quad T = {1 \over {\sqrt 3}}\left({\tilde {\alpha} + \tilde {\beta} + \tilde {\gamma}} \right),$$\n(76)\n\nwhere A, E, and T are italicized to indicate that these are “orthogonal modes”.
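Both steps, the eigen-decomposition of the correlation matrix (73) and the construction of the combinations (76), are easy to reproduce with open-source tools. A short sketch follows (sympy for the symbolic eigenvalues, plain numpy for the mode construction; the input arrays are hypothetical placeholders):

```python
import numpy as np
import sympy as sp

# symbolic check of Equation (74): eigenvalues of the correlation matrix (73)
Sa, Sab = sp.symbols('S_alpha S_alphabeta', real=True)
C = sp.Matrix([[Sa, Sab, Sab],
               [Sab, Sa, Sab],
               [Sab, Sab, Sa]])
print(C.eigenvals())   # {S_alpha - S_alphabeta: 2, S_alpha + 2*S_alphabeta: 1}

# construction of the orthogonal modes of Equation (76) from (hypothetical)
# Fourier-domain data streams alpha_f, beta_f, gamma_f
def aet_modes(alpha_f, beta_f, gamma_f):
    A = (gamma_f - alpha_f) / np.sqrt(2.0)
    E = (alpha_f - 2.0 * beta_f + gamma_f) / np.sqrt(6.0)
    T = (alpha_f + beta_f + gamma_f) / np.sqrt(3.0)
    return A, E, T
```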
Although the expressions for the modes A and E depend on our particular choice for the two eigenvectors (v1, v2), it is clear from our earlier considerations that the value of the optimal signal-to-noise ratio is unaffected by such a choice. From Equation (76) it is also easy to verify that the noise correlation matrix of these three combinations is diagonal, and that its non-zero elements are indeed equal to the eigenvalues given in Equation (74).\n\nIn order to calculate the sensitivity corresponding to the expression of the optimal signal-to-noise ratio, we have proceeded similarly to what was done in [1, 7], where the procedure is described in more detail. We assume an equal-arm LISA (L = 16.67 s), and take the one-sided spectra of proof-mass and aggregate optical-path noises (on a single link), expressed as fractional frequency fluctuation spectra, to be $$S_y^{{\rm{proof}}\,{\rm{mass}}} = 2.5 \times {10^{- 48}}{[f/1{\rm{Hz}}]^{- 2}}{\rm{H}}{{\rm{z}}^{- 1}}$$ and $$S_y^{{\rm{optical}}\,{\rm{path}}} = 1.8 \times {10^{- 37}}{[f/1\,{\rm{Hz}}]^2}{\rm{H}}{{\rm{z}}^{- 1}}$$, respectively (see [7, 3]). We also assume that aggregate optical path noise has the same transfer function as shot noise.\n\nThe optimum SNR is the square root of the sum of the squares of the SNRs of the three “orthogonal modes” (A, E, T). To compare with previous sensitivity curves of a single LISA Michelson interferometer, we construct the SNRs as a function of Fourier frequency for sinusoidal waves from sources uniformly distributed on the celestial sphere. To produce the SNR of each of the (A, E, T) modes we need the gravitational wave response and the noise response as a function of Fourier frequency. We build up the gravitational wave responses of the three modes (A, E, T) from the gravitational wave responses of (α, β, γ). For 7000 Fourier frequencies in the ∼ 10−4 Hz to ∼ 1 Hz LISA band, we produce the Fourier transforms of the gravitational wave response of (α, β, γ) from the formulas in [1, 32]. The averaging over source directions (uniformly distributed on the celestial sphere) and polarization states (uniformly distributed on the Poincaré sphere) is performed via a Monte Carlo method. From the Fourier transforms of the (α, β, γ) responses at each frequency, we construct the Fourier transforms of (A, E, T). We then square and average to compute the mean-squared responses of (A, E, T) at that frequency from 10⁴ realizations of (source position, polarization state) pairs.\n\nWe adopt the following terminology: We refer to a single element of the module as a data combination, while a function of the elements of the module, such as taking the maximum over several data combinations in the module or squaring and adding data combinations belonging to the module, is called an observable. The important point to note is that the laser frequency noise is also suppressed for the observable although it may not be an element of the module.\n\nThe noise spectra of (A, E, T) are determined from the raw spectra of proof-mass and optical-path noises, and the transfer functions of these noises to (A, E, T).
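The sensitivity recipe just outlined can be written compactly. The sketch below uses the noise spectra quoted in Equations (77) and (78) immediately below, and leaves the sky- and polarization-averaged rms response (the output of the Monte Carlo average described above) as an input; it is an illustration, not the authors' actual pipeline.

```python
import numpy as np

YEAR = 3.156e7           # seconds in one year (approximate)
L = 16.67                # arm length in seconds (equal-arm assumption)
B = 1.0 / YEAR           # bandwidth of 1 cycle/year, in Hz

def S_proof_mass(f):
    return 2.5e-48 * (f / 1.0) ** (-2)      # fractional-frequency spectrum, 1/Hz

def S_optical_path(f):
    return 1.8e-37 * (f / 1.0) ** 2         # fractional-frequency spectrum, 1/Hz

def S_A(f):
    """One-sided noise spectrum of the A (and E) mode, Equation (77)."""
    s2 = np.sin(np.pi * f * L) ** 2
    return (16.0 * s2 * (3.0 + 2.0 * np.cos(2 * np.pi * f * L) + np.cos(4 * np.pi * f * L)) * S_proof_mass(f)
            + 8.0 * s2 * (2.0 + np.cos(2 * np.pi * f * L)) * S_optical_path(f))

def S_T(f):
    """One-sided noise spectrum of the T mode, Equation (78)."""
    return 2.0 * (1.0 + 2.0 * np.cos(2 * np.pi * f * L)) ** 2 * (
        4.0 * np.sin(np.pi * f * L) ** 2 * S_proof_mass(f) + S_optical_path(f))

def sensitivity_A(f, rms_response):
    """SNR = 5 sensitivity of the A mode: 5*sqrt(S_A(f)*B) / R_A(f), where R_A(f)
    is the sky- and polarization-averaged rms response per unit strain amplitude."""
    return 5.0 * np.sqrt(S_A(f) * B) / rms_response
```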
Using the transfer functions of these noises into (A, E, T), the resulting spectra are equal to\n\n$$\begin{array}{*{20}c}{{S_A}(f) = {S_E}(f) = 16{{\sin}^2}(\pi fL)[3 + 2\cos (2\pi fL) + \cos (4\pi fL)]S_y^{{\rm{proof}}\;{\rm{mass}}}(f)} \\{\quad \quad \quad \quad \quad \quad + 8{{\sin}^2}(\pi fL)[2 + \cos (2\pi fL)]S_y^{{\rm{optical}}\;{\rm{path}}}(f),} \\\end{array}$$\n(77)\n$${S_T}(f) = 2{[1 + 2\cos (2\pi fL)]^2}[4{\sin ^2}(\pi fL)S_y^{{\rm{proof}}\;{\rm{mass}}} + S_y^{{\rm{optical}}\;{\rm{path}}}(f)].$$\n(78)\n\nLet the amplitude of the sinusoidal gravitational wave be h. The SNR for, e.g., A, $${\rm{SNR}}_A$$, at each frequency f is equal to h times the root-mean-squared gravitational wave response at that frequency divided by $$\sqrt {{S_A}(f)B}$$, where B is the bandwidth, conventionally taken to be equal to 1 cycle per year. Finally, if we take the reciprocal of $${\rm{SNR}}_A/h$$ and multiply it by 5 to get the conventional SNR = 5 sensitivity criterion, we obtain the sensitivity curve for this combination, which can then be compared against the corresponding sensitivity curve for the equal-arm Michelson interferometer.\n\nIn Figure 6 we show the sensitivity curve for the LISA equal-arm Michelson response (SNR = 5) as a function of the Fourier frequency, and the sensitivity curve from the optimum weighting of the data described above: $$5h/\sqrt {{\rm{SNR}}_A^2 + {\rm{SNR}}_E^2 + {\rm{SNR}}_T^2}$$. The SNRs were computed for a bandwidth of 1 cycle/year. Note that at frequencies where the LISA Michelson combination has best sensitivity, the improvement in signal-to-noise ratio provided by the optimal observable is slightly larger than $$\sqrt 2$$.\n\nIn Figure 7 we plot the ratio between the optimal SNR and the SNR of a single Michelson interferometer. In the long-wavelength limit, the SNR improvement is $$\sqrt 2$$. For Fourier frequencies greater than or about equal to 1/L, the SNR improvement is larger and varies with the frequency, showing an average value of about $$\sqrt 3$$. In particular, for bands of frequencies centered on integer multiples of 1/L, $${\rm{SNR}}_T$$ contributes strongly and the aggregate SNR in these bands can be greater than 2.\n\nIn order to better understand the contribution from the three different combinations to the optimal combination of the three generators, in Figure 8 we plot the signal-to-noise ratios of (A, E, T) as well as the optimal signal-to-noise ratio. For an assumed h = 10−23, the SNRs of the three modes are plotted versus frequency. For the equal-arm case computed here, the SNRs of A and E are equal across the band. In the long-wavelength region of the band, modes A and E have SNRs much greater than mode T, whose contribution to the total SNR is therefore negligible. At higher frequencies, however, the T combination has SNR greater than or comparable to the other modes and can dominate the SNR improvement at selected frequencies. Some of these results have also been obtained in the literature.\n\n### Optimization of SNR for binaries with known direction but with unknown orientation of the orbital plane\n\nBinaries will be important sources for LISA and therefore the analysis of such sources is of major importance. One such class is that of massive or super-massive binaries whose individual masses could range from 10³ M⊙ to 10⁸ M⊙ and which could be up to a few Gpc away. Another class of interest consists of known binaries within our own galaxy whose individual masses are of the order of a solar mass but which lie at distances of a few kpc or less.
Here the focus will be on this latter class of binaries. It is assumed that the direction of the source is known, which is so for known binaries in our galaxy. However, even for such binaries, the inclination angle of the plane of the orbit of the binary is either poorly estimated or unknown. The optimization problem is now posed differently: The SNR is optimized after averaging over the polarizations of the binary signals, so the results obtained are optimal on the average, that is, the source is tracked with an observable which is optimal on the average. For computing the average, a uniform distribution for the direction of the orbital angular momentum of the binary is assumed.\n\nWhen the binary masses are of the order of a solar mass and the signal typically has a frequency of a few mHz, the GW frequency of the binary may be taken to be constant over the period of observation, which is typically taken to be of the order of a year. A complete calculation of the signal matrix and of the optimization procedure for the SNR is given in the literature. Here we briefly mention the main points and the final results.\n\nA source fixed in the Solar System Barycentric reference frame in the direction (θB, ΦB) is considered. But as the LISA constellation moves along its heliocentric orbit, the apparent direction (θL, ΦL) of the source in the LISA reference frame (xL, yL, zL) changes with time. The LISA reference frame (xL, yL, zL) has been defined as follows: The origin lies at the center of the LISA triangle and the plane of LISA coincides with the (xL, yL) plane, with spacecraft 2 lying on the xL axis. Figure 9 displays this apparent motion for a source lying in the ecliptic plane, that is, with θB = 90° and ΦB = 0°. The source in the LISA reference frame describes a figure of 8. Optimizing the SNR amounts to tracking the source with an optimal observable as the source apparently moves in the LISA reference frame.\n\nSince an average has been taken over the orientation of the orbital plane of the binary, or equivalently over the polarizations, the signal matrix A is now of rank 2, instead of rank 1 as in the application considered in the previous Section 6.1. The mutually orthogonal data combinations A, E, T are convenient in carrying out the computations, because in this case as well they simultaneously diagonalize the signal and the noise covariance matrices. The optimization problem now reduces to an eigenvalue problem, with the eigenvalues being the squares of the SNRs. There are two eigen-vectors, labelled $${{\vec \upsilon}_{+, \times}}$$, belonging to the two non-zero eigenvalues. The two SNRs are labelled as SNR+ and SNR×, corresponding to the two orthogonal (thus statistically independent) eigen-vectors $${{\vec \upsilon}_{+, \times}}$$. As was done in the previous Section 6.1, the two SNRs can be squared and added to yield a network SNR, which is defined through the equation\n\n$${\rm{SNR}}_{{\rm{network}}}^2 = {\rm{SNR}}_ + ^2 + {\rm{SNR}}_ \times ^2.$$\n(79)\n\nThe corresponding observable is called the network observable. The third eigenvalue is zero and the corresponding eigenvector, orthogonal to $${{\vec \upsilon}_ +}$$ and $${{\vec \upsilon}_ \times}$$, gives zero signal.\n\nThe eigenvectors and the SNRs are functions of the apparent source direction parameters (θL, ΦL) in the LISA reference frame, which in turn are functions of time. The eigenvectors optimally track the source as it moves in the LISA reference frame.
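A compact sketch of this eigenvalue step is given below; the construction of the polarization-averaged signal matrix itself is not reproduced here, and the inputs are synthetic placeholders.

```python
import numpy as np

def network_snr(signal_matrix, noise_diag):
    """SNR_+, SNR_x and the network SNR of Equation (79).

    signal_matrix : 3x3 Hermitian, rank-2 polarization-averaged signal matrix
                    in the (A, E, T) basis (its construction is in the cited work)
    noise_diag    : the three real, positive noise variances of A, E, T
    """
    Cinv = np.diag(1.0 / np.asarray(noise_diag, dtype=float))
    eig = np.sort(np.real(np.linalg.eigvals(Cinv @ signal_matrix)))
    snr2_plus, snr2_cross = eig[-1], eig[-2]          # the third eigenvalue is ~ 0
    return np.sqrt(snr2_plus), np.sqrt(snr2_cross), np.sqrt(snr2_plus + snr2_cross)

# toy usage: a rank-2 matrix built from two synthetic response vectors
rng = np.random.default_rng(3)
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)
A_signal = np.outer(u, u.conj()) + np.outer(w, w.conj())
print(network_snr(A_signal, noise_diag=[1.0, 1.0, 2.0]))
```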
Assuming an observation period of a year, the SNRs are integrated over this period of time. The sensitivities are computed according to the procedure described in the previous Section 6.1. The results of these findings are displayed in Figure 10.\n\nIt shows the sensitivity curves of the following observables:\n\n1.\n\nThe Michelson combination X (faint solid curve).\n\n2.\n\nThe observable obtained by taking the maximum sensitivity among X, Y, and Z for each direction, where Y and Z are the Michelson observables corresponding to the remaining two pairs of arms of LISA. This maximum is denoted by max[X, Y, Z] (dash-dotted curve) and is operationally given by switching the combinations X, Y, Z so that the best sensitivity is achieved.\n\n3.\n\nThe eigen-combination $${{\vec \upsilon}_ +}$$ which has the best sensitivity among all data combinations (dashed curve).\n\n4.\n\nThe network observable (solid curve).\n\nIt is observed that the sensitivity over the band-width of LISA increases as one goes from Observable 1 to 4. Also it is seen that max[X, Y, Z] does not do much better than X. This is because for the source direction chosen, θB = 90°, X is reasonably well oriented and switching to the Y and Z combinations does not improve the sensitivity significantly. However, the network and $${{\vec \upsilon}_ +}$$ observables show significant improvement in sensitivity over both X and max[X, Y, Z]. This is the typical behavior: the sensitivity curves (except X) do not show much variation for other source directions and the plots are similar. Also it may be fair to compare the optimal sensitivities with max[X, Y, Z] rather than X. This comparison of sensitivities is shown in Figure 11, where the network and the eigen-combinations $${{\vec \upsilon}_{+, \times}}$$ are compared with max[X, Y, Z].\n\nDefining\n\n$${\kappa _a}(f) = {{{\rm{SN}}{{\rm{R}}_a}(f)} \over {{\rm{SN}}{{\rm{R}}_{\max [X,Y,Z]}}(f)}},$$\n(80)\n\nwhere the subscript a stands for network or +, ×, and SNRmax[X,Y,Z] is the SNR of the observable max[X, Y, Z], the ratios of sensitivities are plotted over the LISA band-width. The improvement in sensitivity for the network observable is about 34% at low frequencies and rises to nearly 90% at about 20 mHz, while at the same time the $${{\vec \upsilon}_ +}$$ combination shows an improvement of 12% at low frequencies rising to over 50% at about 20 mHz.\n\n## Concluding Remarks\n\nIn this article we have summarized the use of TDI for canceling the laser phase noise from heterodyne phase measurements performed by a constellation of three spacecraft tracking each other along arms of unequal length. Underlying the TDI technique is the mathematical structure of the theory of Gröbner bases and the algebra of modules over polynomial rings. These methods have been motivated and illustrated with the simple example of an unequal-arm interferometer in order to give a physical insight into TDI. Here, these methods have been rigorously applied to the idealized case of a stationary LISA for deriving the generators of the module from which the entire TDI data set can be obtained; they can be extended in a straightforward way to more than three spacecraft for possible LISA follow-on missions. The stationary LISA case was used as a propaedeutical introduction to the physical motivation of TDI, and for further extending it to the realistic LISA configuration of free-falling spacecraft orbiting around the Sun.
The TDI data combinations canceling laser phase noise in this general case are referred to as second-generation TDI, and they contain twice as many terms as their corresponding first-generation combinations valid for the stationary configuration.\n\nAs a data analysis application we have shown that it is possible to identify specific TDI combinations that will allow LISA to achieve optimal sensitivity to gravitational radiation [19, 21, 20]. The resulting improvement in sensitivity over that of an unequal-arm Michelson interferometer, in the case of monochromatic signals randomly distributed over the celestial sphere and of random polarization, is non-negligible. We have found this to be equal to a factor of $$\sqrt 2$$ in the low-frequency part of the band, and slightly more than $$\sqrt 3$$ in the high-frequency part of the LISA band. The SNR for binaries whose location in the sky is known, but whose polarization is not, can also be optimized, and the degree of improvement depends on the location of the source in the sky.\n\nAs a final remark we would like to emphasize that this field of research, TDI, is still very young and evolving. Possible physical phenomena, yet unrecognized, might turn out to be important to account for within the TDI framework. The purpose of this review was to provide the basic mathematical tools needed for working on future TDI projects. We hope to have accomplished this goal, and that others will be stimulated to work in this new and fascinating field of research.\n\n1.\n\nA module is an Abelian group over a ring, as contrasted with a vector space, which is an Abelian group over a field. The scalars form a ring and, just like in a vector space, scalar multiplication is defined. However, in a ring the multiplicative inverses do not exist in general for the elements, which makes all the difference!\n\n## References\n\n1. \n\nArmstrong, J.W., Estabrook, F.B., and Tinto, M., “Time-Delay Interferometry for Space-based Gravitational Wave Searches”, Astrophys. J., 527, 814–826, (1999).\n\n2. \n\nBecker, T., and Weispfenning, V., Gröbner Bases: A Computational Approach to Commutative Algebra, vol. 141 of Graduate Texts in Mathematics, (Springer, Berlin, Germany; New York, U.S.A., 1993).\n\n3. \n\nBender, P.L., Brillet, A., Ciufolini, I., Cruise, A.M., Cutler, C., Danzmann, K., Fidecaro, F., Folkner, W.M., Hough, J., McNamara, P.W., Peterseim, M., Robertson, D., Rodrigues, M., Rüdiger, A., Sandford, M., Schäfer, G., Schilling, R., Schutz, B.F., Speake, C.C., Stebbins, R.T., Sumner, T.J., Touboul, P., Vinet, J.-Y., Vitale, S., Ward, H., and Winkler, W. (LISA Study Team), LISA. Laser Interferometer Space Antenna for the detection and observation of gravitational waves. An international project in the field of Fundamental Physics in Space. Pre-Phase A report, MPQ-233, (Max-Planck-Institut für Quantenoptik, Garching, Germany, 1998). Related online version (cited on 06 July 2005): ftp://ftp.ipp-garching.mpg.de/pub/grav/lisa/pdd/.\n\n4. \n\nBender, P.L., and Hils, D., “Confusion noise level due to galactic and extragalactic binaries”, Class. Quantum Grav., 14, 1439–1444, (1997).\n\n5. \n\nCornish, N.J., and Hellings, R.W., “The effects of orbital motion on LISA time delay interferometry”, Class. Quantum Grav., 20, 4851–4860, (2003).\n\n6. \n\nDhurandhar, S.V., Rajesh Nayak, K., and Vinet, J.-Y., “Algebraic approach to time-delay data analysis for LISA”, Phys. Rev. D, 65, 102002-1–16, (2002).\n\n7.
\n\nEstabrook, F.B., Tinto, M., and Armstrong, J.W., “Time-delay analysis of LISA gravitational wave data: Elimination of spacecraft motion effects”, Phys. Rev. D, 62, 042002-1–8, (2000).\n\n8. \n\nEstabrook, F.B., and Wahlquist, H.D., “Response of Doppler spacecraft tracking to gravitational radiation”, Gen. Relativ. Gravit., 6, 439–447, (1975).\n\n9. \n\nFaller, J.E., and Bender, P.L., “A possible laser gravitational wave experiment in space”, in Taylor, B.N., and Phillips, W.D., eds., Precision Measurement and Fundamental Constants II, Proceedings of the Second International Conference held at the National Bureau of Standards, Gaithersburg, MD, June 8–12, 1981, vol. 617 of NBS Special Publication, 689–690, (U.S. Dept. of Commerce / National Bureau of Standards, Washington, U.S.A., 1984).\n\n10. \n\nFaller, J.E., Bender, P.L., Hall, J.L., Hils, D., Stebbins, R.T., and Vincent, M.A., “An antenna for laser gravitational-wave observations in space”, Adv. Space Res., 9(9), 107–111, (1989). COSPAR and IAU, 27th Plenary Meeting, 15th Symposium on Relativistic Gravitation, Espoo, Finland, July 18–29, 1988.\n\n11. \n\nFaller, J.E., Bender, P.L., Hall, J.L., Hils, D., and Vincent, M.A., “Space antenna for gravitational wave astronomy”, in Longdon, N., and Melita, O., eds., Kilometric Optical Arrays in Space, Proceedings of the Colloquium held 23–25 October 1984, Cargese, Corsica, France, vol. SP-226 of ESA Conference Proceedings, 157–163, (ESA Publications Division, Noordwijk, Netherlands, 1985).\n\n12. \n\nFinn, L.S., “Aperture synthesis for gravitational-wave data analysis: Deterministic sources”, Phys. Rev. D, 63, 102001-1–18, (2001).\n\n13. \n\nFolkner, W.M., Hechler, F., Sweetser, T.H., Vincent, M.A., and Bender, P.L., “LISA orbit selection and stability”, Class. Quantum Grav., 14, 1405–1410, (1997).\n\n14. \n\nGiampieri, G., Hellings, R.W., Tinto, M., and Faller, J.E., “Algorithms for unequal-arm Michelson interferometers”, Opt. Commun., 123, 669–678, (1996). 2\n\n15. \n\nJenkins, G.M., and Watts, D.G., Spectral Analysis and its Applications, (Holden-Day, San Francisco, U.S.A., 1968).\n\n16. \n\nKreuzer, M., and Robbiano, L., Computational Commutative Algebra 1, (Springer, Berlin, Germany; New York, U.S.A., 2000).\n\n17. \n\nNelemans, G., Yungelson, L.R., and Portegies Zwart, S.F., “The gravitational wave signal from the Galactic disk population of binaries containing two compact objects”, Astron. Astrophys., 375, 890–898, (2001).\n\n18. \n\nNoble, B., Applied Linear Algebra, (Prentice-Hall, Englewood Cliffs, U.S.A., 1969).\n\n19. \n\nPrince, T.A., Tinto, M., Larson, S.L., and Armstrong, J.W., “LISA optimal sensitivity”, Phys. Rev. D, 66, 122002-1–7, (2002).\n\n20. \n\nRajesh Nayak, K., Dhurandhar, S.V., Pai, A., and Vinet, J.-Y., “Optimizing the directional sensitivity of LISA”, Phys. Rev. D, 68, 122001-1–11, (2003).\n\n21. \n\nRajesh Nayak, K., Pai, A., Dhurandhar, S.V., and Vinet, J.-Y., “Improving the sensitivity of LISA”, Class. Quantum Grav., 20, 1217–1231, (2003).\n\n22. \n\nRajesh Nayak, K., and Vinet, J.-Y., unknown status. In preparation.\n\n23. \n\nSelby, S.M., Standard of Mathematical Tables, (The Chemical Rubber Co., Cleveland, U.S.A., 1964).\n\n24. \n\nShaddock, D.A., “Operating LISA as a Sagnac interferometer”, Phys. Rev. D, 69, 022001-1–6, (2004).\n\n25. \n\nShaddock, D.A., Tinto, M., Estabrook, F.B., and Armstrong, J.W., “Data combinations accounting for LISA spacecraft motion”, Phys. Rev. D, 68, 061303-1–4, (2003).\n\n26. 
\n\nSummers, D., “Algorithm tradeoffs”, (2003). Talk given at the 3rd progress meeting of the ESA funded LISA PMS Project, ESTEC, NL, February 2003.\n\n27. \n\nTinto, M., “Spacecraft to spacecraft coherent laser tracking as a xylophone interferometer detector of gravitational radiation”, Phys. Rev. D, 58, 102001-1–12, (1998).\n\n28. \n\nTinto, M., “The Cassini Ka-band gravitational wave experiments”, Class. Quantum Grav., 19, 1767–1773, (2002).\n\n29. \n\nTinto, M., and Armstrong, J.W., “Cancellation of laser noise in an unequal-arm interferometer detector of gravitational radiation”, Phys. Rev. D, 59, 102003-1–11, (1999).\n\n30. \n\nTinto, M., Armstrong, J.W., and Estabrook, F.B., “Discriminating a gravitational wave background from instrumental noise in the LISA detector”, Phys. Rev. D, 63, 021101-1–3, (2001).\n\n31. \n\nTinto, M., and Estabrook, F.B., “Parallel beam interferometric detectors of gravitational waves”, Phys. Rev. D, 52, 1749–1754, (1995).\n\n32. \n\nTinto, M., Estabrook, F.B., and Armstrong, J.W., “Time-Delay Interferometry and LISA’s Sensitivity to Sinusoidal Gravitational Waves”, Caltech, (2002). URL (cited on 06 July 2005): http://www.srl.caltech.edu/lisa/tdi_wp/LISA_Whitepaper.pdf.\n\n33. \n\nTinto, M., Estabrook, F.B., and Armstrong, J.W., “Time-delay interferometry for LISA”, Phys. Rev. D, 65, 082003-1–12, (2002).\n\n34. \n\nTinto, M., Estabrook, F.B., and Armstrong, J.W., “Time delay interferometry with moving spacecraft arrays”, Phys. Rev. D, 69, 082001-1–10, (2004).\n\n35. \n\nWolfram, S., “Mathematica: The Way the World Calculates”, institutional homepage, Wolfram Research. URL (cited on 06 July 2005): http://www.wolfram.com/products/mathematica/.\n\n## Acknowledgement\n\nS.V.D. acknowledges support from IFCPAR, Delhi, India under which the work was carried out in collaboration with J.-Y. Vinet. This research was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.\n\n## Author information\n\nCorrespondence to Massimo Tinto.\n\n## Appendices\n\n### Generators of the Module of Syzygies\n\nWe require the 4-tuple solutions $$({q_3},q_1^\prime,q_2^\prime,q_3^\prime)$$ to the equation\n\n$$(1 - xyz){q_3} + (xz - y)q_1^{\prime} + x(1 - {z^2})q_2^{\prime} + (1 - {x^2})q_3^{\prime} = 0,$$\n(81)\n\nwhere for convenience we have substituted $$x = {\mathcal D_1},\;y = {\mathcal D_2},\;z = {\mathcal D_3}$$. The $${q_3},q_1^\prime,q_2^\prime,q_3^\prime$$ are polynomials in x, y, z with integral coefficients, i.e. in Z[x, y, z].\n\nWe now follow the procedure in the book by Becker et al.\n\nConsider the ideal in Z[x, y, z] (or $$\mathcal Q[x,y,z]$$ where $$\mathcal Q$$ denotes the field of rational numbers), formed by taking linear combinations of the coefficients in Equation (81), f1 = 1 − xyz, f2 = xz − y, f3 = x(1 − z²), f4 = 1 − x². A Gröbner basis for this ideal is\n\n$${\mathcal G} = \{{g_1} = {z^2} - 1,{g_2} = {y^2} - 1,{g_3} = x - yz\}.$$\n(82)\n\nThe above Gröbner basis is obtained using the function GroebnerBasis in Mathematica.
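The same computation can be cross-checked with an open-source computer-algebra system. A minimal sketch using sympy (an independent check, not the tool used by the authors):

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')
f = [1 - x*y*z, x*z - y, x*(1 - z**2), 1 - x**2]   # coefficients of Equation (81)

G = groebner(f, x, y, z, order='lex')
print(list(G.exprs))                   # [x - y*z, y**2 - 1, z**2 - 1], cf. Equation (82)

# each original generator must reduce to zero modulo the Groebner basis
print([G.reduce(fi)[1] for fi in f])   # [0, 0, 0, 0]
```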
One can check that both the fi, i = 1, 2, 3, 4, and gj, j = 1, 2, 3, generate the same ideal because we can express one generating set in terms of the other and vice-versa:\n\n$${f_i} = {d_{ij}}{g_j},\quad {g_j} = {c_{ji}}{f_i},$$\n(83)\n\nwhere d and c are 4 × 3 and 3 × 4 polynomial matrices, respectively, and are given by\n\n$$d = \left({\begin{array}{*{20}c} {- 1} & {- {z^2}} & {- yz} \\ {y} & {0} & {z} \\ {- x} & {0} & {0} \\ {- 1} & {- {z^2}} & {- (x + yz)} \\ \end{array}} \right),\quad \quad c = \left({\begin{array}{*{20}c} {0} & {0} & {- x} & {{z^2} - 1} \\ {- 1} & {- y} & {0} & {0} \\ {0} & {z} & {1} & {0} \\ \end{array}} \right).$$\n(84)\n\nThe generators of the 4-tuple module are given by the set A ∪ B*, where A and B* are the sets described below:\n\nA is the set of row vectors of the matrix I − d · c, where the dot denotes the matrix product and I is the identity matrix, 4 × 4 in our case. Thus,\n\n$$\begin{array}{*{20}c}{{a_1} = ({z^2} - 1,0,x - yz,1 - {z^2}),} \\ {{a_2} = (0,1 - {z^2},xy - z,y(1 - {z^2})),} \\ {{a_3} = (0,0,1 - {x^2},x({z^2} - 1)),} \\ {{a_4} = (- {z^2},xz,yz,{z^2}).} \\\end{array}$$\n(85)\n\nWe thus first get 4 generators. The additional generators are obtained by computing the S-polynomials of the Gröbner basis $$\mathcal G$$. The S-polynomial of two polynomials g1, g2 is obtained by multiplying g1 and g2 by suitable terms and then subtracting, so that the leading terms cancel. For example, in our case g1 = z² − 1 and g2 = y² − 1, and the leading terms are z² for g1 and y² for g2. Multiply g1 by y² and g2 by z² and subtract. Thus, the S-polynomial p12 of g1 and g2 is\n\n$${p_{12}} = {y^2}{g_1} - {z^2}{g_2} = {z^2} - {y^2}.$$\n(86)\n\nNote that the monomial order is the lexicographic one with x > y > z, and the y²z² terms cancel. For the Gröbner basis of 3 elements we get 3 S-polynomials p12, p13, p23. The pij must now be re-expressed in terms of the Gröbner basis $$\mathcal G$$. This gives a 3 × 3 matrix b. The final step is to transform to 4-tuples by multiplying b by the matrix c to obtain b* = b · c. The row vectors $$b_i^\ast,i = 1,2,3$$, of b* form the set B*:\n\n$$\begin{array}{*{20}c}{b_1^\ast = ({z^2} - 1,y({z^2} - 1),x(1 - {y^2}),({y^2} - 1)({z^2} - 1)),} \\ {b_2^\ast = (0,z(1 - {z^2}),1 - {z^2} - x(x - yz),(x - yz)({z^2} - 1)),} \\ {b_3^\ast = (- x + yz,z - xy,1 - {y^2},0).} \\\end{array}$$\n(87)\n\nThus we obtain 3 more generators, which gives us a total of 7 generators of the required module of syzygies.\n\n### Conversion between Generating Sets\n\nWe list the three sets of generators and relations among them.
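Each of the seven 4-tuples listed above must satisfy Equation (81) identically. A short sympy check (independent of the Gröbner-basis machinery) confirms this:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = [1 - x*y*z, x*z - y, x*(1 - z**2), 1 - x**2]   # coefficients of Equation (81)

generators = {
    'a1': (z**2 - 1, 0, x - y*z, 1 - z**2),
    'a2': (0, 1 - z**2, x*y - z, y*(1 - z**2)),
    'a3': (0, 0, 1 - x**2, x*(z**2 - 1)),
    'a4': (-z**2, x*z, y*z, z**2),
    'b1*': (z**2 - 1, y*(z**2 - 1), x*(1 - y**2), (y**2 - 1)*(z**2 - 1)),
    'b2*': (0, z*(1 - z**2), 1 - z**2 - x*(x - y*z), (x - y*z)*(z**2 - 1)),
    'b3*': (-x + y*z, z - x*y, 1 - y**2, 0),
}

for name, q in generators.items():
    residual = sp.expand(sum(qi * fi for qi, fi in zip(q, f)))
    print(name, residual)      # each residual expands to 0, so each tuple is a syzygy
```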
We first list below α, β, γ, ζ:\n\n$$\begin{array}{*{20}c}{\alpha = (- 1, - z, - xz,1,xy,y),} \\ {\beta = (- xy, - 1, - x,z,1,yz),} \\ {\gamma = (- y, - yz, - 1,xz,x,1),} \\ {\varsigma = (- x, - y, - z,x,y,z).} \\ \end{array}$$\n(88)\n\nWe now express the ai and $$b_j^\ast$$ in terms of α, β, γ, ζ:\n\n$$\begin{array}{*{20}c}{{a_1} = \gamma - z\varsigma,} \\ {{a_2} = \alpha - z\beta,} \\ {{a_3} = - z\alpha + \beta - x\gamma + xz\varsigma,} \\ {{a_4} = z\varsigma,} \\ {b_1^{\ast} = - y\alpha + yz\beta + \gamma - z\varsigma,} \\ {b_2^{\ast} = (1 - {z^2})\beta - x\gamma + xz\varsigma,} \\ {b_3^{\ast} = \beta - y\varsigma.} \\ \end{array}$$\n(89)\n\nFurther we also list below α, β, γ, ζ in terms of X(A):\n\n$$\begin{array}{*{20}c}{\alpha = {X^{(3)}},} \\ {\beta = {X^{(4)}},} \\ {\gamma = - {X^{(1)}} + z{X^{(2)}},} \\ {\varsigma = {X^{(2)}}.} \\ \end{array}$$\n(90)\n\nThis proves that since the ai, $$b_j^\ast$$ generate the required module, the α, β, γ, ζ and X(A), A = 1, 2, 3, 4, also generate the same module.\n\nThe Gröbner basis is given in terms of the above generators as follows: G(1) = ζ, G(2) = X(1), G(3) = β, G(4) = α, and G(5) = a3.\n\nTinto, M., Dhurandhar, S.V., “Time-Delay Interferometry”, Living Rev. Relativ., 8, 4, (2005). https://doi.org/10.12942/lrr-2005-4\n\n### Keywords\n\n• Time-delay Interferometry (TDI)\n• Laser Interferometer Space Antenna (LISA)\n• Laser Phase Noise\n• Secondary Noise\n• Laser Frequency Fluctuations
http://tqgb.planet-survival.it/dissociation-equation-for-h3po4.html
[ "# Dissociation Equation For H3po4\n\nTable AI-1 Hydrogen Weight Fraction % Dissociation. 446 Dissociation Constants of Carbonic Acid In view of these developments and the desirability of knowing the activity of bicarbonate and carbonate ions for subsequent studies, we have endeavored to determine the effect of varying ionic strength on the apparent dissociation constants of carbonic acid. 0139 x 10¯ 5 = [(3. So the charge on H3PO4 will change as each hydrogen ion dissociates in solution. Express your answer as a chemical equation. 055 µ S /cm. Because the product of K a times K b is a relatively small number, either the acid or its conjugate base can be \"strong. We write the equation as an equilibrium because both the. Label the each species in the equation as high or low concentration. Calculate the pH of a weak acid or base. Reactions: Classification: ΔH r (kJ) : ΔS r (J/K) : ΔG r (kJ) : K (CN)2 (g cyanogen) + 4 H2O (ℓ) → H2C2O4 (aq) + 2 NH3 (g)-82. Phases Are Optional. The process is also an equilibrium reaction as look: We have a weak acid reacting with water. Unformatted text preview: Chem 12 Hydrolysis Name: 1. Examples: Fe, Au, Co, Br, C, O, N, F. Is it K3PO4 --> 3K(+) + PO4(3-) Please clarify if I'm wrong. The principle is illustrated by means of experi­ mental data for malonic acid. 5 × 10-3, ka2 = 6. From these two equations one can calculate that the sample contains 1. Dissociation of Acids We need the extra practice Strong Acids CBS and PIN HCl HBr H2SO4 HClO4 HI HNO3 All strong acids dissociate 100% and must be written as such. Given the Ka values of 7. If we now replace each term in this equation by the appropriate equilibrium constant, we get the following equation. Calculating K a from Partial Neutralization Data. Phosphoric acid is not a particularly strong acid as indicated by its first dissociation constant. That means titration curve contains only two inflection points and phosphoric acid can be titrated either as a monoprotic acid or as a. Thermal dissociation: this is a dissociation performed by heating. Compare: Co - cobalt and CO - carbon monoxide; To enter an electron into a chemical equation use {-} or e. As for the buffering part, one only needs to realize that, during the transition between pH 1. Experimental methods and equations are given for calculating the concentra­ tion and dissociation constant of each acid or base in a mixture by use of its complex pH titration curve. Degree of dissociation depends on the concentration. HI + NaOH ( H2O + NaI. › Dissociation equation for hc2h3o2 What happens when you dissolve phosphpric acid in water Boredofstudies. 1 × 10 − 3, 6. When an acid dissolves in water, it ionizes into an anionic component (negatively charged ion, also called the conjugate base) and a proton (H+). 0004 We are going to be talking a little bit about percent dissociation, sort of a continuation of our weak acid topic from last lesson. Study 56 Chapter 18 flashcards from Juliana A. Acid strength is measured with the help of dissociation constant. Think of the HCl pushing to the right, and the acetic acid ions pushing to the left both at the same time. Try to solve it manually. 25 M NH 3 (K b = 1. Write the chemical equation to show this salt ionizing in water. CH3CH2COOH = Propanoic acid. (a) Write an equation for the reaction of HSO. What is dissociation? 2. 3NaOH + H3PO4 = Na3PO4 + 3H2O. FINDING THE pH OF SOLUTIONS OF AMPHIPROTIC SALTS. Itis denoted by. americanpierg said:. 
Reactions: Classification: ΔH r (kJ) : ΔS r (J/K) : ΔG r (kJ) : K (CN)2 (g cyanogen) + 4 H2O (ℓ) → H2C2O4 (aq) + 2 NH3 (g)-82. First is the relative fractions, a, for the various forms of the acids as a function of pH. 70 at equilibrium?. Phosphoric acid Citric acid K 1 7. Examples: Fe, Au, Co, Br, C, O, N, F. 13 starts to give wrong results (with ridiculous dissociation fraction higher than 100%). H3PO4 is known as Phosphoric acid. Given the following acid dissociation constants, Ka (H3PO4) = 7. A net ionic equation helps chemists represent the steps in a chemical reaction. 3 x 10^-5) asked by <3 on July 29, 2013; chemistry. Calculate the concentration of a strong or weak acid or base from its pH. The reason to write a chemical equation is to express what we believe is actually happening in a chemical reaction. It is shown that. For the triprotic acid, the a are. ) Write The Equation For The Dissociation Reaction Where K = Ka1. K a values allow one to compare the strength of acids. You are absolutely right. 8, which is a simpler expression. Titration of the phosphoric acid H 3 PO 4 is an interesting case. Sodium hydroxide is a strong base. Dissociation of Acids We need the extra practice Strong Acids CBS and PIN HCl HBr H2SO4 HClO4 HI HNO3 All strong acids dissociate 100% and must be written as such. H2CO3 and Sr(OH)2. Complex Acid/Base Systems. 988 g/mole) must be added to 500. A molecular equation is an equation in which the formulas of the. 5x10^-3 Second ionization step. the solutions have been tested for electrolytes and conductivity. The strength of the nitrogen-nitrogen triple bond makes the N 2 molecule very unreactive. One of the most useful applications of the concept of principal species is in writing net ionic equations. And it just has a special name, 'cause it happens to be for the dissociation of an acid. Examples: Fe, Au, Co, Br, C, O, N, F. This reaction between sulfuric acid and potassium hydroxide creates salt and water. BATE pH calculator. However, because the successive ionization constants differ by a factor of 10 5 to 10 6 , the calculations can be broken down into a series of parts similar to those for diprotic acids. University. 2 H 3 PO 4(aq) + H 2 O (l) ⇌ H 2 PO 4-(aq) + H 3 O + (aq). The equation for the dissociation of acetic acid, for example, is CH 3 CO 2 H + H 2 O ⇄ CH 3 CO 2 − + H 3 O +. When all three H + ions are removed, the result is. (Note: the normal concentration, N (eq/L), of phosphoric acid is 3-times the formal concentration, F (f. Use uppercase for the first character in the element and lowercase for the second character. Acid-base reaction - Acid-base reaction - Dissociation constants in aqueous solution: The classical method for determining the dissociation constant of an acid or a base is to measure the electrical conductivity of solutions of varying concentrations. 05 M solution of hydrocyanic acid (HCN). ) Write The Equation For The Dissociation Reaction Where K = Ka1. The pKa value is based on the Ka value, which is called the acid dissociation constant. Calculating K a from Partial Neutralization Data. As for the buffering part, one only needs to realize that, during the transition between pH 1. Molecular Shape and composition. 10 x 10-13 4. 00174 moles H 3 PO 4 to react fully with the NaOH in the titrant. 57 X 10~8, respectively. ==Clarification== Phosphoric acid is a weak acid, as it does not fully dissociate into its component ions. 
The LibreTexts libraries are Powered by MindTouch ® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. Free essays, homework help, flashcards, research papers, book reports, term papers, history, science, politics. The conjugate. here is an example starting from the hydroxide ion concentration. 0 M H2SO4 solution. But sometimes I come across examples where, even tho one of the reactants or products is aqueous, it isnt. The acid equilibrium problems discussed so far have focused on a family of compounds known as monoprotic acids. Write the chemical equation to show this salt ionizing in water. 1+3= 4 PCl5 (g) ↔ PCl3 (g) + Cl2 (g) c) Explain the 2nd law of faraday’s. On the basis of degree of ionization of acids and bases, we can determine the strength of acids and bases. H3PO4 Orthophosphate Triprotic Acid pK1 = 2. 33% Comment: the first example is somewhat artifical, in that the percent dissocation is quite high. These opposite reactions are occurring at the same rate; therefore the system is in equilibrium. (a) Write an equation for the reaction in which H2C6H7O5- (aq) acts as a base in H2O (l). (b) Write an equation for the reaction in which H2C6H7O5- (aq) acts as an acid in water. Now in the charge balance equation, [K +] is equal to 0. For H3PO4 and H3BO3, does the subscript “3” of hydrogen in these two formulas seem to result in additional ions in solution as it did in Group A? Explain. a = acid dissociation constant (e. Physical properties: Pure phosphoric acid is a white crystalline solid with melting point of 42. is the second dissociation constant of. ACS/ISO Ph. Phosphoric acid is not a particularly strong acid as indicated by its first dissociation constant. 8 x 10 -10 1. Aqueous phosphoric acid solutions are highly acidic. ==Clarification== Phosphoric acid is a weak acid, as it does not fully dissociate into its component ions. 5 as undissociated H3PO4. i am struggling with this entry level chemistry class, please help if i don't pass i will not graduate :((Answer Save. When writing an ionic equation, state symbols of the substances must be clearly indicated. Balancing chemical equations. Experimental methods and equations are given for calculating the concentra­ tion and dissociation constant of each acid or base in a mixture by use of its complex pH titration curve. 3 x 10-2: a. GEOCHEMISTRY CLASS 4. - Equation A8: Total phosphoric acid - [H3PO4] + [H*] molar concentration For the first dissociation of phosphoric 'acid, the equilibrium constant, K,, is given by Equation Ai. However, for some weak acids, the percent dissociation can be higher—upwards of 10% or more. Phosphoric acid has Ka of 7. Provide a balanced equation for the hydration of boric acid, H3BO3(s), a weak electrolyte. Net ionic equation weak acid keyword after analyzing the system lists the list of keywords related and the list of websites with related content, in addition you can see which keywords most interested customers on the this website. H2 + O2 = H2O 2. Substances can be categorized as s trong electrolytes, weak electrolytes, or nonelectrolytes. 10 x 10-13 4. 800ml of water been added to 200ml 40% phosphoric acid solution (d=1,3g/cm3). Kerosene can be approximated with the formula C 12 H 26, and its combustion equation is. 1) silver chloride. 120 ( x1 = 9. 
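Several of the fragments above refer to the relative fractions (alpha) of the species H3PO4, H2PO4-, HPO4 2- and PO4 3- as a function of pH. A minimal sketch of that calculation is given below; it uses the standard textbook dissociation constants (the surrounding text quotes them only in truncated form) and is an illustration, not part of any of the quoted sources.

```python
import numpy as np

KA1, KA2, KA3 = 7.5e-3, 6.2e-8, 4.2e-13   # textbook stepwise constants for H3PO4

def h3po4_fractions(pH):
    """Relative fractions of H3PO4, H2PO4-, HPO4^2-, PO4^3- at a given pH."""
    h = 10.0 ** (-np.asarray(pH, dtype=float))
    denom = h**3 + KA1 * h**2 + KA1 * KA2 * h + KA1 * KA2 * KA3
    return np.array([h**3, KA1 * h**2, KA1 * KA2 * h, KA1 * KA2 * KA3]) / denom

print(np.round(h3po4_fractions(7.0), 3))   # near pH 7, H2PO4- and HPO4^2- dominate
```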
Since strong acids, by definition, ionize completely, pKa is more important as a characteristic of. A solution containing both ; a weak. The usual percent dissociation answer is between 1 and 5 per cent. The base dissociation constant, K b, is a measure of basicity—the base’s general strength. 446 Dissociation Constants of Carbonic Acid In view of these developments and the desirability of knowing the activity of bicarbonate and carbonate ions for subsequent studies, we have endeavored to determine the effect of varying ionic strength on the apparent dissociation constants of carbonic acid. Part B What is the expression. Sulfuric acid is a strong acid, whereas phosphoric acid is a weak acid. *Ideal Gas Equation: PV = nRT V = nRT / P = (2. Dissociated Methanol Because dissociated methanol contains several components other than Hz and CO, equation AI-9 is not. It can have many different values of pH ranging from below zero at high concentration to near 7 at very low concentration. H3PO4 2H++ HPO42¯ d. 3 x 10-2: a. The resulting phosphoric acid solution is only about 32-46% H 3 PO 4, so it is then concentrated (by evaporation of water) to produce higher concentration commercial grades of phosphoric acid. These constants were first used in seawater for carbonic acid by Buch et al. Dans le cas du titrage de H3PO4 par du NaOH, lorsque nous devons calculer le pH, nous savons que H3PO4 + NaOH ----> H2PO4 + H2O avant la première équivalence. 96 g H2SO4 b. H3PO4 + H20 H3O+ + H2PO4-2. Phosphoric acid is commonly encountered in chemical laboratories as an 85% aqueous solution, which is a colourless, odourless, and non-volatile syrupy liquid. 3 × 10 − 8 and 4. Instructions on balancing chemical equations: Enter an equation of a chemical reaction and click 'Balance'. pK a = -log(K a) The very strong acids, those that are completely dissociated in water, are distinguished by examining their acid dissociation equilibrium in non-aqueous solvents and the pK a for water is estimated. Help please I'm confused. H will leave its own electron to Cl ion and H gets attached to water to form H3O+, i. Given the following acid dissociation constants, Ka (H3PO4) = 7. Ka is the symbol given for acid dissociation constant. HF H + + FNH3. In 1996, the Pitzer equation was taken again by Jiang to model the thermodynamic behaviour of the system H 3 PO 4 –H 2 O, in addition to the first dissociation, he took into account the association of H 3 PO 4 with H 2 PO 4 − to form the complex H 5 P 2 O 8 − for whole range of concentrations up to 24 mol kg −1 and for a temperature of. The strength of an acid is determined by a number called the acid-dissociation equilibrium constant. Another equation to solve for pH (or [H+]) can be derived from the equation for the ion product constant of water (K w). B, is the correct answer. In the calculations activity corrections are considered. From the linear equation, y = mx + c, the slope is m. Use the calculator then. Exercise 19 The pH of a Sulfuric. 08205 L atm/mole K *(39+273 K)) / 0. To balance a chemical equation, enter an equation of a chemical reaction and press the Balance button. From these the degree of dissociation (α; see above) can be determined and Ka calculated from the equation This method is unsuitable for. Ca(OH)2 and H3PO4. pKa is a combination of the p in pH, which stands for power of hydrogen, and the dissociation constant for acids, represented by Ka. In aqueous solution, phosphoric acid behaves as a triprotic acid, having three ionizable hydrogen atoms. 
Table 1 gives ionization data for four series of polyprotic acids. While tedious, it requires little more than what we did to write the equilibrium expression and equation for a monoprotic acid. Dissociation of occurs as follows: Where, is the first dissociation constant of. The balanced chemical equation can be written as shown below:. [H 3 O +][OH-] = 1. University. 035 M Sr(OH)2. 4 Potassium hydrogen tartrate KHC4H4O6 Ka2 og H2C4H4O6= 4. If we now replace each term in this equation by the appropriate equilibrium constant, we get the following equation. Before proceeding with the end point detection discussion we should learn a little bit about the pH indicators behavior. List of irreversible reactions : 1. Think about both values carefully and select the one that is appropriate. Examples: Fe, Au, Co, Br, C, O, N, F. H2PO4- + H2O -> HPO42- + H3O+ HPO42- + H2O -> PO43- + H3O+ The dissociation process forms primary, secondary and tertiary phosphates. How do you write the chemical equation for the reaction of carbonic acid (#H_2CO_3#) with water? Chemistry Chemical Reactions Chemical Equations 1 Answer. Is it K3PO4 --> 3K(+) + PO4(3-) Please clarify if I'm wrong. A general equation. For instance, when an acid dissolves in water, a covalent bond between an electronegative atom and a hydrogen atom is broken by heterolytic fission, which. Now write the anion conc. This chemical equation balancer can help you to balance an unbalanced equation. The first letter of an element is capitalized; if there is a second letter, it is in lower case. 6% %dissociation Of H2PO4- Trial 1: 0. (1)HCl(aq) H+(aq)+Cl−(aq) After successful dissociation of HClin water, the following reaction takes place: HCl+H2O H3O++Cl−. It's important to understand that whereas K a for a given acid is essentially a constant, \\alpha will depend on the concentration of the acid. Find the equivalents of $\\ce{H^+}$ from phosphoric acid (a) and the equivalents of $\\ce{OH^-}$ from $\\ce{NaOH}$ (b). HCO 3 - º H + + CO 3 -2 K a = 4. 800ml of water been added to 200ml 40% phosphoric acid solution (d=1,3g/cm3). So make sure to sold tamoxifen for arthritis Glades to Global Forecast to 2021 have nothing. 3 –) is amphiprotic. is the second dissociation constant of. For example, the balanced chemical equation for the combustion of methane, CH 4, is as follows: CH 4 + 2 O 2 → CO 2 + 2 H 2 O. Thermal dissociation: this is a dissociation performed by heating. Each of these three equilibrium equations can be expressed mathematically in several different ways. Let us write the chemical equation for each proton dissociation. 0 x10^-2 2 Potassium hydrogen phthalate KHC8H4O4 Ka2 of H2C8H4O4= 3. Basicity of an acid refers to the number of replaceable hydrogen atoms in one molecule of the acid. /L), since it has 3 protons per mole). H 3 PO 4 + 3 RbOH = 3 H 2 O + Rb 3 PO 4. 07) + 2(-221. The reason to write a chemical equation is to express what we believe is actually happening in a chemical reaction. Reaction Information. Ionic charges are not yet supported and will be ignored. Phosphoric acid (h3po4) has three acid dissociation constants ( ka). 5 M aqueous solution of phosphoric acid. 8 x 10-5 What is the pOH of a 4. Let us write the chemical equation for each proton dissociation. 1 M NaOH Solution. HA + H2O º H3O + + A-Most problems can then be solved by setting the reaction quotient equal to the acid dissociation equilibrium constant (Ka). is completely dissociated in aqueous solution b. 
Orthophosphoric acid molecules can combine with themselves to form a variety of compounds which are also referred to as phosphoric acids, but in a more general way. Permission required for reproduction or display. H2PO4- + H2O -> HPO42- + H3O+ HPO42- + H2O -> PO43- + H3O+ The dissociation process forms primary, secondary and tertiary phosphates. pKa(overall) is the negative log of the overall acidity constant for the overall ionization reaction of the polyprotic acid. The net ionic equation for potassium hydroxide and phosphoric acid should look like the following: H3PO4 + 3OH- --> PO4^3- + 3HOH(l). The following is the equilibrium equation for its reaction with water: HC2H3O2 (aq) + H2O (l) ⇌ H 3O+ (aq) + C2H3O2-(aq) Ka = 1. For H3PO4 and H3BO3, does the subscript “3” of hydrogen in these two formulas seem to result in additional ions in solution as it did in Group A? Explain. Hence, when perchloric acid is dissolved in the water, it dissociates into hydrogen and perchlorate ions. On the other hand, H3PO4 is triprotic, all hydrogens bonded to oxygens. Write balanced chemical equations to represent the slight dissociation or the complete dissociation for 1 mole of the following. The resulting phosphoric acid solution is only about 32-46% H 3 PO 4, so it is then concentrated (by evaporation of water) to produce higher concentration commercial grades of phosphoric acid. Caustic Soda Lye Soda Lye Sodium Hydrate NaOH Sodium Hydroxide White Caustic. However, HPO 4 -2 and PO 4 -3 are produced after donation of 2 and 3 protons respectively. H3PO4 – 3 separate dissociation constants. In biological systems, phosphorus is found as a free phosphate ion in solution and is called inorganic phosphate, to distinguish it from. 35C to convert into a viscous liquid. H3PO4 has three steps of dissociation. Table AI-1 Hydrogen Weight Fraction % Dissociation. Examples: Fe, Au, Co, Br, C, O, N, F. Given the following acid dissociation constants, Ka (H3PO4) = 7. Chemical reaction. While tedious, it requires little more than what we did to write the equilibrium expression and equation for a monoprotic acid. Part A Write the equation dissociation for phosphoric acid. 6 x 10-14 Butanoic HC4H7O2 1. 3×10-10 Codeine C 18H 21NO 3 1. They are all defined in the help file accompanying BATE. Question: Phosphoric acid (H3PO4) is a triprotic acid with three ionizable protons. ACS/ISO Ph. Example Problem: Find the pH of a solution formed by dissolving 0. Study 56 Chapter 18 flashcards from Juliana A. three steps. The integer in parentheses after the name denotes which hydrogen is being ionized, where (1) is the first and most easily ionized hydrogen. It is an acid, but it is a weak acid because of its low first dissociation constant. So the charge on H3PO4 will change as each hydrogen ion dissociates in solution. The first hydrogen separates, leaving H2PO4- ions. The resulting phosphoric acid solution is only about 32-46% H 3 PO 4, so it is then concentrated (by evaporation of water) to produce higher concentration commercial grades of phosphoric acid. F(x) = an expression which depends only on x; defined by equation (25) p. AU - Fernandez, Marino. Hydrocyanic acid has an acid-dissociation. Calculate the pH and include the balanced equation for the acid dissociation reaction. 2 Potassium hydrogen sulfate KHSO4 Ka2 of H2SO4= 1. 1 x 10 4 x 0. The concentration of CH 3 NH 3 + in a 0. Acid strength is measured with the help of dissociation constant. The acid-dissociation constants of phosphoric acid (H3PO4) are Ka1 = 7. 
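The first ionization step quoted above (H3PO4 <=> H+ + H2PO4-, with Ka1 of about 7.5 x 10^-3, the textbook value appearing in truncated form in the fragments) dominates the acidity of a dilute solution, so its pH can be estimated from that single equilibrium. A hedged sketch, in which the second and third ionizations and the autoionization of water are neglected:

```python
import math

KA1 = 7.5e-3            # first dissociation constant of H3PO4 (textbook value)

def ph_first_ionization(c0):
    """Approximate pH of c0 mol/L H3PO4, keeping only the first ionization.

    Solves x^2 / (c0 - x) = Ka1 exactly (quadratic), where x = [H+] = [H2PO4-].
    """
    x = (-KA1 + math.sqrt(KA1**2 + 4.0 * KA1 * c0)) / 2.0
    return -math.log10(x)

print(round(ph_first_ionization(0.10), 2))   # about 1.6 for a 0.10 M solution
```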
Thus we will use equation 8. Write the acid-dissociation reaction of nitrous acid (HNO2) and its acidity constant expression. 10 x 10-13 4. Write the equations for the above two buffer solutions. The equation for the dissociation of acetic acid, for example, is CH 3 CO 2 H + H 2 O ⇄ CH 3 CO 2 − + H 3 O +. Chemistry CH 13 Reading Assignment-Ions In Solution and Colligative Properties Pg 435-443 1. Thermal dissociation: this is a dissociation performed by heating. 15, pK a2 =7. That is, 5 moles of $\\ce{NaOH}$ gives 5 moles of $\\ce{OH^-}$ ions after complete dissociation. How to write acid-base reaction equations using conjugate pairs. 2 Conjugate base Naming Acids Binary Acids: hydo + root of anion + ic + \"acid\" ex.  The balanced dissociation for for K3PO4  is. 3 where as pKa1 for H3PO4 is 2. H2PO4 ==(H2O)==> H+ + HPO4(minus 2). is the second dissociation constant of. First ionization step: H3PO4 (Aq) <=> H+ (aq) + H2PO4- (aq) Ka1= 7. Hydrochloric acid (HCl), acetic acid (CH 3 CO 2 H or HOAc), nitric acid (HNO 3 ), and benzoic acid (C 6 H 5 CO 2 H) are all monoprotic. A second hydrogen may then dissociate, leaving HPO4-2 ions. 8 x 10 -10 1. Thus, question_answer. Neutralization reactions are one type of chemical reaction that proceeds even if one reactant is not in the aqueous phase. Write a balanced equation for their dissociation in water a) LiBr b) FeCl3 2) HCN is a weak acid. H2CO3 and Sr(OH)2. Degree of Dissociation– 1. The molar concentration of the species H3PO4, H2PO4-, HPO4-2, and PO4-3 are taken from the output table Ions. Use uppercase for the first character in the element and lowercase for the second character. 9×10-4 Dimethlyamine (CH 3) 2NH 5. It is a function of total concentration of the species and its relevant equilibrium …. THE COMPOSITION OF POLYPROTIC ACID SOLUTIONS AS A FUNCTION OF pH. CHAPTER 4: ANSWERS TO ASSIGNED PROBLEMS Hauser- General Chemistry I revised 10/14/08 4. Phosphoric acid Citric acid K 1 7. HA + H2O º H3O + + A-Most problems can then be solved by setting the reaction quotient equal to the acid dissociation equilibrium constant (Ka). Identify all of the phases in your answer. H 3PO4, with the largest Ka value, is the strongest of these weak acids. Each pK a corresponds to one proton dissociation. H3PO4 has three steps of dissociation. H 2SO 4 sulfuric acid, H 3PO 4 phosphoric acid H 2CO 3 HNO 3 Hydroiodic acid carbonic acid nitric acid The Self-Ionization of Water. Physical properties: Pure phosphoric acid is a white crystalline solid with melting point of 42. 3 – with water, in which the ion acts as a base. These constants were first used in seawater for carbonic acid by Buch et al. Compare: Co - cobalt and CO - carbon monoxide; To enter an electron into a chemical equation use {-} or e. Dissociation is a mental process that causes a lack of connection in a person’s thoughts, memory and sense of identity. From these two equations one can calculate that the sample contains 1. For H3PO4 and H3BO3, does the subscript “3” of hydrogen in these two formulas seem to result in additional ions in solution as it did in Group A?. The acid-dissociation constants of phosphoric acid (h3po4) are ka1 = 7. 50 meq HCl and 1. How do you write the chemical equation for the reaction of carbonic acid (#H_2CO_3#) with water? Chemistry Chemical Reactions Chemical Equations 1 Answer. 0, a lot of the H + used to combine with OH-comes from H 3 PO 4. Name Formula Ka (or Ka1) Ka2 Ka3 Acetic HC2H3O2 1. Thanks in advance to everyone who answers. 
Write an equation for the dissociation of HC2H3O2, HCL, H3PO4, H3BO3. 33: 4942378355. Write the net ionic equation for the neutralization reaction of H3PO4 (aq) with Ba(OH)2(aq) Thanks Write the net ionic equation for the neutralization reaction of H3PO4 (aq) with Ba(OH)2(aq) Thanks. H3PO4 + H20 <==> H3O^+ + H2PO4^- As seen here, a proton is transferred from H3PO4 to H2O, hence it yields H30+ (the hydronium ion) and produces the dihydrogen phosphate ion. Therefore, the process for calculating the Ka value would not be different from parts a to c. H2CO3 and Sr(OH)2. 0 M sulfuric acid solution. To balance a chemical equation, enter an equation of a chemical reaction and press the Balance button. Phosphate is a salt of phosphoric acid. pK a = -log(K a) The very strong acids, those that are completely dissociated in water, are distinguished by examining their acid dissociation equilibrium in non-aqueous solvents and the pK a for water is estimated. 00 meq H 3PO 4. 0225 L, so we can find the concentration of the H3PO4 by putting the moles over the volume of the sample. a = acid dissociation constant (e. 6 g HBr in 450. This equilibrium constant is a quantitative measure of the strength of an acid in a solution. 41 Respectively. Thermal dissociation: this is a dissociation performed by heating. First ionization step: H3PO4 (Aq) <=> H+ (aq) + H2PO4- (aq) Ka1= 7. Given the following acid dissociation constants, Ka (H3PO4) = 7. (20 pts) How many grams of solid sodium fluoride (NaF, 41. AP Chemistry: Acids, Bases, and Salts Unit Objectives 1. Double Replacement & Acid Carbonate Reactions CuSO4 (s) ® Cu2+ (aq) + SO42- (aq) Fe2(HPO4)3 (s) ® 2Fe3+(aq) + 3HPO42-(aq) FeCl3(s) ® Fe3+(aq) + 3Cl- (aq) Write dissociation equations for the following ionic compounds to show how they dissolve in water. In studying the origin of phosphate de- stants,\" which are suitable for practical posits W. If you are an experienced chemist you will easily find the mistake in the mixed equation thanks to your chemical knowledge because the example is simple. Whether the neutralization is complete or not can only be determined by mole calculation. Examples: Fe, Au, Co, Br, C, O, N, F. 75)] - [1(189. B) What Would Be The PH Of 0. 73 x 10-5 K 3 7. What is the molar concentration of phosphate ion in a 2. You're right about the H+ thing, but since H20 is in the equation on the LHS, it wouldn't balance with just H+. 2 × 10 − 13 for the first, second and third ionizations, respectively. If phosphoric acid is triprotic, it'll take 3 steps to decompose. It is possible to combine more than one of these manipulations. for NH 3) Since NH 3 and NH 4 + are a conjugate acid/base pair, it is not surprising that K a for NH 4 + and K b for NH 3 are related. This reaction between sulfuric acid and potassium hydroxide creates salt and water. 0 M phosphoric acid (H3PO4) solution? Find the normality of 5. 5: Effect of dilution on the percent dissociation and [H+] Runner struggles to top of a hill Molecular model: HC3H5O3 and H2O Molecular model: Acetic acid Molecular model: Benzoic acid Tanks in Miami. For the titration of the weak acid (or base) using strong neutralizing agent starting point pH is just pH of weak acid solution (see equation 8. Try to solve it manually. Almost immediately, the. The equations and constants for the dissociation of three different acids are given below. Question: Phosphoric acid (H3PO4) is a triprotic acid with three ionizable protons. 
H will leave its own electron to Cl ion and H gets attached to water to form H3O+, i. i don't understand this part of the questions with my lab report in my homework. Calculate the pH of 1. It is related to the acid dissociation constant, K a, by the simple relationship pK a + pK b = 14, where pK b and pK a are the negative logarithms of K b and K a, respectively. Write the acid dissociation for hydrofluoric acid, HF. d)Fe(NO3)3. H 2 PO 4-º H + + HPO 4-2 K a = 6. 94 EXPERIMENT 10: TITRATION OF A COLA PRODUCT The equilibrium constant for each reaction is listed below. The extent of dissociation can be quantitatively described by the equilibrium constant K according to the equation: Ka is known as the dissociation constant of an acid and characterizes its acid strength. 2 x 10-8; and K 3 = 3. Write the ionic equation for the word equation. If ignoring x in added acid base is valid, ie the extent of dissociation is small, then for solution of weak acid and its conjugate base (salt) Convenient but should be checked if not valid, we can always solve quadratic. 11 x 10-3 7. 05) all equations give exact result - so you should use the simplest one 8. 73 x 10-5 K 3 7. for NH 3) Since NH 3 and NH 4 + are a conjugate acid/base pair, it is not surprising that K a for NH 4 + and K b for NH 3 are related. Balancing chemical equations. Orthophosphoric acid (H 3 PO 4 ), which is formed on the dissolution of P 4 O 10 (or P 2 O 5 ) in water, is the most important acid and the acid that has been most thoroughly studied. Write the chemical equation for the dissociation of HC2H3O2 in. 15 A) What Would Be The PH Of 0. the solutions have been tested for electrolytes and conductivity. 800ml of water been added to 200ml 40% phosphoric acid solution (d=1,3g/cm3). How to write acid-base reaction equations using conjugate pairs. You are absolutely right. The pKa value is based on the Ka value, which is called the acid dissociation constant. NH4Cl and B. In other words, if the weak acid represented is allowed to ionize, as shown in the equation below, then a significant amount of HA will remain un-ionized. The entire chemical equation.  The balanced dissociation for for K3PO4  is. Although often listed together with strong mineral acids (hydrochloric, nitric and sulfuric) phosphoric acid is relatively weak, with pK a1 =2. According to the theories of Svante Arrhenius, this must be due to the presence of ions. There are tables of acid dissociation constants, for easy reference. pKa(overall) is the negative log of the overall acidity constant for the overall ionization reaction of the polyprotic acid. The reaction of an acid in water solvent is often described as a dissociation ↽ − − ⇀ + + − where HA is a proton acid such as acetic acid, CH 3 COOH. Let's write out the equilibriun expressions and equations for the dissociation of a triprotic acid, H 3A. For polyprotic acids there are multiple dissociation steps and equivalence points, one for each acidic hydrogen present. To balance a chemical equation, enter an equation of a chemical reaction and press the Balance button. The strength of an acid is determined by a number called the acid-dissociation equilibrium constant. Sodium hydroxide is a strong base and strong electrolyte. But keep in mind if the substance you are dissociating is weak then there will be a double arrow sign in between the R and P. 3×10-10 Codeine C 18H 21NO 3 1. The molar mass of H3PO4 is approximately 98 g/mol. Phosphate (Pi) is an essential component of life. 
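To make the weak-acid procedure described above concrete, here is a short worked example for the 0.10 M acetic acid case that the text keeps citing; Ka = 1.8 x 10^-5 is the commonly tabulated value and the arithmetic is purely illustrative.

$$\mathrm{HC_2H_3O_2 + H_2O \rightleftharpoons H_3O^+ + C_2H_3O_2^-},\qquad K_a=\frac{[\mathrm{H_3O^+}][\mathrm{C_2H_3O_2^-}]}{[\mathrm{HC_2H_3O_2}]}=1.8\times10^{-5}$$

$$\frac{x^2}{0.10-x}\approx\frac{x^2}{0.10}=1.8\times10^{-5}\;\Rightarrow\; x=[\mathrm{H_3O^+}]\approx1.3\times10^{-3}\,\mathrm{M},\qquad \mathrm{pH}=-\log x\approx2.87$$

The percent dissociation is then (1.3 x 10^-3 / 0.10) x 100% ≈ 1.3%, consistent with the "between 1 and 5 per cent" rule of thumb quoted in this passage.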
Write an equation for the dissociation of each of the compounds in Group B. Hydrochloric acid (HCl), acetic acid (CH 3 CO 2 H or HOAc), nitric acid (HNO 3 ), and benzoic acid (C 6 H 5 CO 2 H) are all monoprotic. Neutralization Reactions for Polyprotic and Polybasic Species Neutralization Reactions for Polyprotic and Polybasic Species Polyprotic Acids The titration curves we have. 006 g/mole) to produce a buffer. 05 M solution of hydrocyanic acid (HCN). Sir P2o5+3h2o=2ph3+4O2 is a right equation give me answer please. Reaction Type. First notice, that this question is for 3 points, and asks us for one definition and 2 equations, therefore it. Thread starter andyjl; the answer 1. If ignoring x in added acid base is valid, ie the extent of dissociation is small, then for solution of weak acid and its conjugate base (salt) Convenient but should be checked if not valid, we can always solve quadratic. Phosphoric acid (h3po4) has three acid dissociation constants (ka). 1103-1114, 1989 0010-938X/89 $3. 110 Printed in Great Britain 1989 Pergamon Press plc EFFECTS OF pH OF AN Na2MoO4-H3PO4 TYPE AQUEOUS SOLUTION ON THE FORMATION OF CHEMICAL CONVERSION COATINGS ON STEELS K. Express Your Answer As A Chemical Equation. The complete ionic equation is this:. Chapter 9 Lecture Outline Prepared by Ashlyn Smith Anderson University * Copyright © McGraw-Hill Education. Sodium chloride (aq) + silver nitrate. Write the net ionic equation that depicts the dissociation of the first proton including charges for any ions produced. Examples: Fe, Au, Co, Br, C, O, N, F. The acid dissociation constant expression is written as [HA] BH+ + OH , and the base dissociation constant expression is For a weak base the equation is B + H20 written as [B] Practice Problems 25. Balanced Chemical Equation. As a weak acid, some of the acid will remain in molecular form when dissolved in water. It is possible to combine more than one of these manipulations. Whether the neutralization is complete or not can only be determined by mole calculation. There are tables of acid dissociation constants, for easy reference. H 2 PO 4-º H + + HPO 4-2 K a = 6. Find the percent dissociation of this solution. Summary for the titration of a weak acid lab. Neutralization Reactions Worksheet. Calculate the K. According to the stoichiometry shown in Equation 2, we see that [H+] must equal to [A ], since there is no other source of H+ but from HA. Ionization of acids: Acids are defined as proton donors. The equation for the dissociation of acetic acid, for example, is CH 3 CO 2 H + H 2 O ⇄ CH 3 CO 2 − + H 3 O +. 1M Adipic Acid Solution Is Titrated With 0. The water dissociation constant, K w , is 1 x 10 -14. 3 – with water, in which the ion acts as a base. 84% Trial 2: 0. In other words, if the weak acid represented is allowed to ionize, as shown in the equation below, then a significant amount of HA will remain un-ionized. 35EC to form a viscous liquid. Phosphoric acid (also known as orthophosphoric acid or phosphoric [V] acid) Molecular formula : H 3 PO 4 (O = 65,3 %, P = 31,64%, H = 3,06 %) Molar mass = 97,9952 ± 0,0014 g·mol-1 Phosphoric acid is a mineral (inorganic) acid. Calculations are based on the equation for the ionization of the weak acid in water forming the hydronium ion and the conjugate base of the acid. Al + O2 = Al2 O3. You can also. 2 × 10-8, and Ka3 = 4. com Phosphoric acid ACS reagent, ≥85% H3PO4 | Sigma-Aldrich. In organic chemistry, a phosphate, or organophosphate, is an ester of phosphoric acid. 
Thus, the ion H2PO4G is a very weak acid, and HPO4 2G is an extremely weak acid. Use the equation to convert mols HNO3 to mols Ca(OH)2. Each of these acids has a single H + ion, or proton, it can donate when it acts as a Brnsted acid. Chemical equations worksheet -- With over 100 000 antioxidants and vitamins while value products faster delivery before had we. H3PO4 H33+ + PO43¯. They are all defined in the help file accompanying BATE. In turn, the strength of an acid can determine the way in which a titration occurs. 0 x 10-14 (at 25 °C) (K a for a weak acid)(K b for its conjugate base) = K w same as (K b for a weak base. BYJU’S online chemical equation calculator tool makes the prediction faster and easier, and it displays the answer in a fraction of seconds. HSO 4-º H + + SO 4-2 K a = 1. 41 to calculate p K a for NH 4 + from the value of pK b for NH 3. One of the most useful applications of the concept of principal species is in writing net ionic equations. The molar concentration of the species H3PO4, H2PO4-, HPO4-2, and PO4-3 are taken from the output table Ions. Le pka entre H3PO4 et H2PO4 est de 2,1. An example, using ammonia as the base, is H 2 O + NH 3 ⇄ OH − + NH 4 +. Algebraically we can take the log of both sides of the equation for Kw to get the equation :. Net Ionic Equation Net ionic equations are used to show only the chemicals and ions involved in a chemical reaction in order to simplify information about a reaction. 015-molar solution of oxalic acid, a strong acid is added until the pH is 0. Write the dissociation equation the following compounds in water. For large acid concentrations, the solution is mainly dominated by the undissociated H3PO4. Use uppercase for the first character in the element and lowercase for the second character. Solid A dissolves in water to form a conducting solution. Write molecular and net ionic equation of H3PO4+Ba(OH)2 3. Calculate the value of the first dissociation constant, K1, for oxalic acid if the value of the second dissociation constant, K2, is 6. Use -> for strong; <-> for weak. Sodium Bromide + Phosphoric Acid → Trisodium Phosphate + Hydrogen Bromide. Write The Three Dissociation Equations For Phosphoric Acid (H3PO4) 2. com the solutions have been tested for electrolytes and conductivity. First notice that H2PO4- is the conjugate base of phosphoric acid but that H2PO4- can also function as a weak acid undergoing another dissociation reaction. Bates An accurate det. It is normally encountered as a colorless, syrup of 85% concentration in water. Use uppercase for the first character in the element and lowercase for the second character. Phosphoric Acid Has Ka Of 7. Substances can be categorized as s trong electrolytes, weak electrolytes, or nonelectrolytes. These are equations that focus on the principal substances and ions involved in a reaction--the principal species--ignoring those spectator ions that really don't get involved. Part B What is the expression. Example Problem: Find the pH of a solution formed by dissolving 0. Electrolytes are chemicals that break into ions (ionize) when they are dissolved in water. It is a stronger acid than acetic acid, but weaker than sulfur ic acid and hydrochloric acid. 75)] - [1(189. The balanced dissociation for for K3PO4 is K3PO4 ---> 3K^+ + PO4^-3 The dissociation of 1 mole of K3PO4 forms 3 moles of potassium ions (K^+) and 1 mole of phosphate ions (PO4^-3). 
Phosphoric acid (also known as orthophosphoric acid or phosphoric [V] acid) Molecular formula : H 3 PO 4 (O = 65,3 %, P = 31,64%, H = 3,06 %) Molar mass = 97,9952 ± 0,0014 g·mol-1 Phosphoric acid is a mineral (inorganic) acid. Dissociation can be also described by overall constants, as well as base dissociation constants or protonation constants. ACS/ISO Ph. You're right about the H+ thing, but since H20 is in the equation on the LHS, it wouldn't balance with just H+. Phosphoric acid react with sodium hydroxide to produce sodium hydrogen phosphate and water. Chapter 9 Lecture Outline Prepared by Ashlyn Smith Anderson University * Copyright © McGraw-Hill Education. Write the dissociation equation for the following: a. Write molecular and net ionic equation of H3PO4+Ba(OH)2 3. HI + NaOH ( H2O + NaI. 0 x 10 -13. 10 M solution of acetic acid, CH 3COOH. 0, a lot of the H + used to combine with OH-comes from H 3 PO 4. for NH 3) Since NH 3 and NH 4 + are a conjugate acid/base pair, it is not surprising that K a for NH 4 + and K b for NH 3 are related. 0500 M calcium hydroxide is required to neutralize 38. Their relationship was consistent with exper-imental values of at 18° C for H3PO4 solutions with ionic strengths between 0. 15 Because the pK a are so different, the protons are reacted at different pH's. K a and pK a for Polyprotic Acids. Phosphoric acid is not a particularly strong acid as indicated by its first dissociation constant. Equation For Magnesium Hydroxide Dissolving In Water : Mg(OH)2 + H2O Views : 15. The reaction in which water breaks into hydrogen and hydroxide ions is a dissociation reaction. A second hydrogen may then dissociate, leaving HPO4-2 ions. In studying the origin of phosphate de- stants,\" which are suitable for practical posits W. To balance a chemical equation, enter an equation of a chemical reaction and press the Balance button. 2}\\] As we noted earlier, the concentration of water is essentially constant for all reactions in aqueous solution, so $$[H_2O]$$ in Equation \\(\\ref{16. pKa(overall) is the negative log of the overall acidity constant for the overall ionization reaction of the polyprotic acid. Neutralization Reactions Worksheet. Orthophosphoric acid molecules can combine with themselves to form a variety of compounds which are also referred to as phosphoric acids, but in a more general way. For the triprotic acid, the a are. 91 J/K (decrease in entropy). The equations and constants for the dissociation of three different acids are given below. Which chemical equation shows the dissociation of 2 protons from trihydrogen phosphate (phosphoric acid)? (other substances / rxns can be used) a. Use uppercase for the first character in the element and lowercase for the second character. Corrosion Science, Vol. triprotic - 3 H+ examples: H3PO4. A weak electrolyte is an electrolyte that does not completely dissociate in aqueous solution. Use the equation to convert mols HNO3 to mols Ca(OH)2. asked by Lisa on March 25, 2012; Chemistry. Acids that dissociate less are weaker. 15, pK a2 =7. The acid equilibrium problems discussed so far have focused on a family of compounds known as monoprotic acids. K a values allow one to compare the strength of acids. It is a stronger acid than acetic acid, but weaker than sulfur ic acid and hydrochloric acid. There are two ionisable hydrogen atom in H2PO4-, so there should be stepwise hydrogen dissociation. HCO 3-º H + + CO 3-2 K a = 4. 
- H,O are derived in two ways, first, assuming that the components are electro· neutral species and, second, considering the actual ionic species present in solution. How to write acid-base reaction equations using conjugate pairs. Phosphoric acid has Ka of 7. Equations previously developed and widely applied to the thermodynamic properties of strong electrolytes are extended to solutions involving a dissociation equilibrium. Find the equivalents of$\\ce{H^+}$from phosphoric acid (a) and the equivalents of$\\ce{OH^-}$from$\\ce{NaOH}\\$ (b). 110 Printed in Great Britain 1989 Pergamon Press plc EFFECTS OF pH OF AN Na2MoO4-H3PO4 TYPE AQUEOUS SOLUTION ON THE FORMATION OF CHEMICAL CONVERSION COATINGS ON STEELS K. 1 kJ mol-1 for sodium hydroxide solution being neutralised by ethanoic acid. Thus, the ion H2PO4G is a very weak acid, and HPO4 2 G is an extremely weak acid. 1 M NaOH Solution. Examples: Fe, Au, Co, Br, C, O, N, F. A molecular equation is an equation in which the formulas of the. 33 : Hydrogen sulphide, H 2 S : 1 st: 9. Phosphoric acid react with sodium hydroxide to produce sodium hydrogen phosphate and water. When an uncharged weak acid is added to water, a homogeneous equilibrium forms in which aqueous acid molecules, HA(aq), react with liquid water to form aqueous hydronium ions and aqueous anions, A-(aq). Calculate the K. Weak electrolytes only partially ionize in water (usually 1% to 10%), while strong electrolytes completely ionize (100%). ACS/ISO Ph. HC 2 H 3 O 2 (aq) + H 2 O (l) H 3 O + (aq) + C 2 H 3 O 2- (aq) or HC2H3O2(aq) H+(aq) + C2H3O2-(aq) EXAMPLE 1 - Writing an Acid Dissociation Constant: Write the equation for the. 6 x 10-12 Benzoic HC7H5O2 6. The weak acid phosphoric acid has three acidic protons, highlighted in red here: H3PO4. Which chemical equation shows the dissociation of 2 protons from trihydrogen phosphate (phosphoric acid)? (other substances / rxns can be used) a. 135 X 10~ 2 X+ 1. H3PO4 has three steps of dissociation. Phosphoric acid has Ka of 7. i don't understand this part of the questions with my lab report in my homework. Dissociation Equation Electrolyte 1. Note that these equations are also valid for weak bases if K b and C b are used in place of K a and C a. In the table are listed pKa values. Dissociation Constants for Acids at 25 oC. • The dissociation of an acid is expressed by the following reacti on: HA = H+ + A-and the dissociation constant Ka = [H+][A-] / [HA] • When Ka < 1, [HA] > [H+][A-] and HA is not significantly. balanced equation for dissociation of HCLO^2 Submitted by JenHazelrigg on Tue, 03/27/2012 - 17:29 Chlorous acid (HCLO^2) and Sodium Hypochlorite (NaCLO^2) make up an acidic buffer!. CHEM 1411 Chapter 4 Homework Answers 1. 800ml of water been added to 200ml 40% phosphoric acid solution (d=1,3g/cm3). 015-molar solution of oxalic acid, a strong acid is added until the pH is 0. 2M AB, in the equilibrium: AB2 A2+ + 2B-. 148, pK2 = 7. 5x10-3 H2PO4- D H+ + HPO42- Ka = 6. There are two ionisable hydrogen atom in H2PO4-, so there should be stepwise hydrogen dissociation. Example Problem: Find the pH of a solution formed by dissolving 0. Week 6 Lecture Video - Using Bronsted-Lowry Theory. Add the molarity of the ion, which comes from the salt, and then solve the K a or K b equation as you did earlier. An acid is a proton (H + ) donor. Since it is an equilibrium constant, the larger the K a, the more products there are, which means there will have been more dissociation of the acid and more protons formed. 
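Since the passage returns repeatedly to the three dissociation steps of phosphoric acid, it helps to write them out once. The Ka values below are the common textbook figures that match the fragments quoted above (7.5 x 10^-3, 6.2 x 10^-8, 4.2 x 10^-13).

$$\mathrm{H_3PO_4 + H_2O \rightleftharpoons H_3O^+ + H_2PO_4^-},\qquad K_{a1}=\frac{[\mathrm{H_3O^+}][\mathrm{H_2PO_4^-}]}{[\mathrm{H_3PO_4}]}\approx7.5\times10^{-3}$$
$$\mathrm{H_2PO_4^- + H_2O \rightleftharpoons H_3O^+ + HPO_4^{2-}},\qquad K_{a2}\approx6.2\times10^{-8}$$
$$\mathrm{HPO_4^{2-} + H_2O \rightleftharpoons H_3O^+ + PO_4^{3-}},\qquad K_{a3}\approx4.2\times10^{-13}$$

Each successive proton is harder to remove, which is why only the first step normally matters for the pH of a plain phosphoric acid solution.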
Equations previously developed and widely applied to the thermodynamic properties of strong electrolytes are extended to solutions involving a dissociation equilibrium. Now write the anion conc. Write the acid dissociation for hydrofluoric acid, HF. The usual percent dissociation answer is between 1 and 5 per cent. 10 M solution of acetic acid, CH 3COOH." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8926921,"math_prob":0.9766352,"size":46163,"snap":"2020-24-2020-29","text_gpt3_token_len":12545,"char_repetition_ratio":0.19636475,"word_repetition_ratio":0.33888406,"special_character_ratio":0.2544462,"punctuation_ratio":0.11686733,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9917061,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-06T11:12:29Z\",\"WARC-Record-ID\":\"<urn:uuid:c718199c-e05a-4a6a-a759-6df02600bbcd>\",\"Content-Length\":\"51560\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dcb5cd70-4698-4b19-be57-39086bda2033>\",\"WARC-Concurrent-To\":\"<urn:uuid:25d3e77f-b9e4-46af-a527-6f6ddb69fe7a>\",\"WARC-IP-Address\":\"104.28.31.150\",\"WARC-Target-URI\":\"http://tqgb.planet-survival.it/dissociation-equation-for-h3po4.html\",\"WARC-Payload-Digest\":\"sha1:F43VWTPC4YRSSLBDYPODZK2HT7WC6KY3\",\"WARC-Block-Digest\":\"sha1:HZA76PDS3LIDSWNJ2WXI6K5L4LEKJKO6\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655880616.1_warc_CC-MAIN-20200706104839-20200706134839-00000.warc.gz\"}"}
https://www.fxsolver.com/browse/?like=2552&p=2
[ "'\n\n# Search results\n\nFound 1356 matches\nHall coefficient\n\nThe Hall effect is the production of a voltage difference (the Hall voltage) across an electrical conductor, transverse to an electric current in the ... more\n\nForce between two nearby magnetized surfaces\n\nThe Gilbert model assumes that the magnetic forces between magnets are due to magnetic charges near the poles. This model produces good approximations that ... more\n\nElectric Potential Energy (related to Electrical Work)\n\nElectrical work is the work done on a charged particle by an electric field. The equation for 'electrical’ work is equivalent to that of ... more\n\nTorque on a dipole (magnetic field)\n\nA physical dipole consists of two equal and opposite point charges. When placed in an magnetic field, equal but opposite forces arise on each side of the ... more\n\nForce between two nearby magnetized surfaces (relative to flux density)\n\nThe Gilbert model assumes that the magnetic forces between magnets are due to magnetic charges near the poles. This model produces good approximations that ... more\n\nForce between two magnetic poles\n\nThe Gilbert model assumes that the magnetic forces between magnets are due to magnetic charges near the poles. This model produces good approximations that ... more\n\nElectrical mobility\n\nElectrical mobility is the ability of charged particles (such as electrons or protons) to move through a medium in response to an electric field that is ... more\n\nNodal Precession\n\nNodal precession is the precession of an orbital plane around the rotation axis of an astronomical body such as Earth. This precession is due to the ... more\n\nElectric Potential Energy with Time (related to Electrical Work)\n\nElectrical work is the work done on a charged particle by an electric field. The equation for 'electrical’ work is equivalent to that of ... more\n\nForce between two bar magnets\n\nThe Gilbert model assumes that the magnetic forces between magnets are due to magnetic charges near the poles. This model produces good approximations that ... more\n\n...can't find what you're looking for?\n\nCreate a new formula\n\n### Search criteria:\n\nSimilar to formula\nCategory" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9410614,"math_prob":0.99682784,"size":1667,"snap":"2023-14-2023-23","text_gpt3_token_len":328,"char_repetition_ratio":0.16055322,"word_repetition_ratio":0.540146,"special_character_ratio":0.19856028,"punctuation_ratio":0.14057508,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98318315,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-22T19:00:27Z\",\"WARC-Record-ID\":\"<urn:uuid:95c93fbb-d11e-4be2-a4da-b0ce84b224f8>\",\"Content-Length\":\"140008\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4b3e8ffe-2d2f-4550-8468-5c4399df92f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:bc0a6dcc-4c52-4469-828d-58466ecdbdad>\",\"WARC-IP-Address\":\"178.254.54.75\",\"WARC-Target-URI\":\"https://www.fxsolver.com/browse/?like=2552&p=2\",\"WARC-Payload-Digest\":\"sha1:BVXUJS4FMV2UFMQEUTT7ZFIF6Y3VRDAJ\",\"WARC-Block-Digest\":\"sha1:WBP43NR33GHRCSOGXHARWAFUUGXS4WJC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296944452.74_warc_CC-MAIN-20230322180852-20230322210852-00635.warc.gz\"}"}
https://www.nest-simulator.org/py_sample/intrinsic_currents_spiking/
[ "## Intrinsic currents spiking\n\nThis example illustrates a neuron receiving spiking input through several different receptors (AMPA, NMDA, GABA_A, GABA_B), provoking spike output. The model, ht_neuron, also has intrinsic currents (I_NaP, I_KNa, I_T, and I_h). It is a slightly simplified implementation of neuron model proposed in Hill and Tononi (2005) Modeling Sleep and Wakefulness in the Thalamocortical System J Neurophysiol 93:1671 http://dx.doi.org/10.1152/jn.00915.2004.\n\nThe neuron is bombarded with spike trains from four Poisson generators, which are connected to the AMPA, NMDA, GABA_A, and GABA_B receptors, respectively.\n\nSee also: intrinsic_currents_subthreshold.py\n\nWe imported all necessary modules for simulation, analysis and plotting.\n\nimport nest\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nAdditionally, we set the verbosity using set_verbosity to suppress info messages. We also reset the kernel to be sure to start with a clean NEST.\n\nnest.set_verbosity(\"M_WARNING\")\nnest.ResetKernel()\n\nWe define the simulation parameters:\n\n• The rate of the input spike trains\n• The weights of the different receptors (names must match receptor types)\n• The time to simulate\n\nNote that all parameter values should be doubles, since NEST expects doubles.\n\nrate_in = 100.\nw_recep = {'AMPA': 30., 'NMDA': 30., 'GABA_A': 5., 'GABA_B': 10.}\nt_sim = 250.\n\nnum_recep = len(w_recep)\n\nWe create\n\n• one neuron instance\n• one Poisson generator instance for each synapse type\n• one multimeter to record from the neuron:\n• membrane potential\n• threshold potential\n• synaptic conductances\n• intrinsic currents\n\nSee intrinsic_currents_subthreshold.py for more details on multimeter configuration.\n\nnrn = nest.Create('ht_neuron')\np_gens = nest.Create('poisson_generator', 4,\nparams={'rate': rate_in})\nmm = nest.Create('multimeter',\nparams={'interval': 0.1,\n'record_from': ['V_m', 'theta',\n'g_AMPA', 'g_NMDA',\n'g_GABA_A', 'g_GABA_B',\n'I_NaP', 'I_KNa', 'I_T', 'I_h']})\n\nWe now connect each Poisson generator with the neuron through a different receptor type.\n\nFirst, we need to obtain the numerical codes for the receptor types from the model. The receptor_types entry of the default dictionary for the ht_neuron model is a dictionary mapping receptor names to codes.\n\nIn the loop, we use Python's tuple unpacking mechanism to unpack dictionary entries from our w_recep dictionary.\n\nNote that we need to pack the pg variable into a list before passing it to Connect, because iterating over the p_gens list makes pg a \"naked\" GID.\n\nreceptors = nest.GetDefaults('ht_neuron')['receptor_types']\nfor pg, (rec_name, rec_wgt) in zip(p_gens, w_recep.items()):\nnest.Connect([pg], nrn, syn_spec={'receptor_type': receptors[rec_name],\n'weight': rec_wgt})\n\nWe then connnect the multimeter. Note that the multimeter is connected to the neuron, not the other way around.\n\nnest.Connect(mm, nrn)\n\nWe are now ready to simulate.\n\nnest.Simulate(t_sim)\n\nWe now fetch the data recorded by the multimeter. The data are returned as a dictionary with entry 'times' containing timestamps for all recorded data, plus one entry per recorded quantity.\n\nAll data is contained in the 'events' entry of the status dictionary returned by the multimeter. 
Because all NEST functions return arrays, we need to pick out element 0 from the result of GetStatus.

data = nest.GetStatus(mm)[0]['events']
t = data['times']

The following function turns a name such as I_NaP into proper TeX code $I_{\mathrm{NaP}}$ for a pretty label.

def texify_name(name):
    return r'${}_{{\\mathrm{{{}}}}}$'.format(*name.split('_'))

The next step is to plot the results. We create a new figure, and add one subplot each for membrane and threshold potential, synaptic conductances, and intrinsic currents.

fig = plt.figure()

Vax = fig.add_subplot(311)
Vax.plot(t, data['V_m'], 'b', lw=2, label=r'$V_m$')
Vax.plot(t, data['theta'], 'g', lw=2, label=r'$\\Theta$')
Vax.set_ylabel('Potential [mV]')

try:
    Vax.legend(fontsize='small')
except TypeError:
    Vax.legend()  # work-around for older Matplotlib versions
Vax.set_title('ht_neuron driven by Poisson processes')

Gax = fig.add_subplot(312)
for gname in ('g_AMPA', 'g_NMDA', 'g_GABA_A', 'g_GABA_B'):
    Gax.plot(t, data[gname], lw=2, label=texify_name(gname))

try:
    Gax.legend(fontsize='small')
except TypeError:
    Gax.legend()  # work-around for older Matplotlib versions
Gax.set_ylabel('Conductance [nS]')

Iax = fig.add_subplot(313)
for iname, color in (('I_h', 'maroon'), ('I_T', 'orange'),
                     ('I_NaP', 'crimson'), ('I_KNa', 'aqua')):
    Iax.plot(t, data[iname], color=color, lw=2, label=texify_name(iname))

try:
    Iax.legend(fontsize='small')
except TypeError:
    Iax.legend()  # work-around for older Matplotlib versions
Iax.set_ylabel('Current [pA]')
Iax.set_xlabel('Time [ms]')" ]
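The excerpt stops after labelling the axes. A minimal way to finish and display the figure is sketched below; this is an assumption on my part rather than the original example's ending.

fig.tight_layout()  # keep the three stacked subplots from overlapping
plt.show()          # or fig.savefig('ht_neuron_poisson.png') to write the figure to disk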
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5904506,"math_prob":0.87494236,"size":4704,"snap":"2019-13-2019-22","text_gpt3_token_len":1263,"char_repetition_ratio":0.091702126,"word_repetition_ratio":0.0097402595,"special_character_ratio":0.26509354,"punctuation_ratio":0.19450802,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98960537,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-26T15:39:37Z\",\"WARC-Record-ID\":\"<urn:uuid:cae285d5-d9b2-4603-8a14-feb6bd33bc12>\",\"Content-Length\":\"23768\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5f1e5409-1551-49e7-b573-e765a6a31a8d>\",\"WARC-Concurrent-To\":\"<urn:uuid:60a08a61-a1cf-4bba-97bb-2d4263945821>\",\"WARC-IP-Address\":\"5.199.143.70\",\"WARC-Target-URI\":\"https://www.nest-simulator.org/py_sample/intrinsic_currents_spiking/\",\"WARC-Payload-Digest\":\"sha1:RQ4TDB4LOYUE7DEDXX5SHAAGWOV7NXV2\",\"WARC-Block-Digest\":\"sha1:WKPSEMYKMP75SENL6ZLZPOEM4Z5CN6XH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232259316.74_warc_CC-MAIN-20190526145334-20190526171334-00517.warc.gz\"}"}
https://www.geeksforgeeks.org/yacc-program-for-conversion-of-infix-to-postfix-expression/
[ "# YACC program for Conversion of Infix to Postfix expression\n\nProblem: Write a YACC program for conversion of Infix to Postfix expression.\n\nExplanation:\nYACC (Yet another Compiler-Compiler) is the standard parser generator for the Unix operating system. An open source program, yacc generates code for the parser in the C programming language. The acronym is usually rendered in lowercase but is occasionally seen as YACC or Yacc.\n\nExamples:\n\n```Input: a*b+c\nOutput: ab*c+\n\nInput: a+b*d\nOutput: abd*+ ```\n\nLexical Analyzer Source Code:\n\n `%{ ` `  ``/* Definition section */` `%} ` `ALPHA [A-Z a-z] ` `DIGIT [0-9] ` ` `  `/* Rule Section */` `%% ` `{ALPHA}({ALPHA}|{DIGIT})*  ``return` `ID; ` `{DIGIT}+                   {yylval=``atoi``(yytext); ``return` `ID;} ` `[\\n \\t]                    yyterminate(); ` `.                          ``return` `yytext; ` `%% `\n\nParser Source Code:\n\n `%{ ` `   ``/* Definition section */` `   ``#include ` `   ``#include ` `%} ` ` `  `%token    ID ` `%left    ``'+'` `'-'` `%left    ``'*'` `'/'` `%left    UMINUS ` ` `  `/* Rule Section */` `%% ` ` `  `S  :  E ` `E  :  E``'+'``{A1();}T{A2();} ` `   ``|  E``'-'``{A1();}T{A2();} ` `   ``|  T ` `   ``; ` `T  :  T``'*'``{A1();}F{A2();} ` `   ``|  T``'/'``{A1();}F{A2();} ` `   ``|  F ` `   ``; ` `F  :  ``'('``E{A2();}``')'` `   ``|  ``'-'``{A1();}F{A2();} ` `   ``|  ID{A3();} ` `   ``; ` ` `  `%% ` ` `  `#include\"lex.yy.c\" ` `char` `st; ` `int` `top=0; ` ` `  `//driver code ` `int` `main() ` `{ ` `    ``printf``(``\"Enter infix expression:  \"``);  ` `    ``yyparse(); ` `    ``printf``(``\"\\n\"``); ` `    ``return` `0; ` `} ` `A1() ` `{ ` `    ``st[top++]=yytext; ` `} ` ` `  `A2() ` `{ ` `    ``printf``(``\"%c\"``, st[--top]); ` `} ` ` `  `A3() ` `{ ` `    ``printf``(``\"%c\"``, yytext); ` `} `\n\nOutput:", null, "Whether you're preparing for your first job interview or aiming to upskill in this ever-evolving tech landscape, GeeksforGeeks Courses are your key to success. We provide top-quality content at affordable prices, all geared towards accelerating your growth in a time-bound manner. Join the millions we've already empowered, and we're here to do the same for you. Don't miss out - check it out now!\n\nPrevious\nNext" ]
[ null, "https://media.geeksforgeeks.org/wp-content/uploads/20190502132900/Capture2323.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6345859,"math_prob":0.6938931,"size":1716,"snap":"2023-40-2023-50","text_gpt3_token_len":549,"char_repetition_ratio":0.094626166,"word_repetition_ratio":0.0070921984,"special_character_ratio":0.35198134,"punctuation_ratio":0.18156424,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9529383,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T15:15:16Z\",\"WARC-Record-ID\":\"<urn:uuid:a0070880-1d69-438f-9945-ab835c6fa4d8>\",\"Content-Length\":\"340727\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c906d223-87ee-4c78-be04-5d2475d190c3>\",\"WARC-Concurrent-To\":\"<urn:uuid:39509a18-cdfc-441e-acad-e676e677eafa>\",\"WARC-IP-Address\":\"108.138.64.52\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/yacc-program-for-conversion-of-infix-to-postfix-expression/\",\"WARC-Payload-Digest\":\"sha1:7C2GXJEX4XJ6SLD6NPGUVASELSU6UB4W\",\"WARC-Block-Digest\":\"sha1:U4FDYLUMO3METCK7MHFNTRWIVUTS3UF5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100427.59_warc_CC-MAIN-20231202140407-20231202170407-00632.warc.gz\"}"}
https://chemistry.stackexchange.com/questions/1073/why-do-collisions-in-elementary-reactions-of-higher-orders-appear-to-be-more-lik
[ "# Why do collisions in elementary reactions of higher-orders appear to be more likely?\n\nSo we are told that a unimolecular elementary reaction has a rate law of $k[\\text{A}]$ where a termolecular reaction with three unique reagents, $A$, $B$ and $C$ has a rate law of $k[\\text{A}][\\text{B}][\\text{C}]$. Now, other things being equal, and assuming for the sake of argument that the initial concentrations are all greater than 1M, this means that the termolecular elementary reaction is more likely at the outset. But that doesn't make much sense. There may be some statistical mechanics principle that's lost on me but it seems more likely that A bumps into A than, A, B and C come together all at once. So I don't quite follow this reasoning. Is it the case that $k$ is generally quite smaller in these termolecular elementary reactions?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9581176,"math_prob":0.98153245,"size":1705,"snap":"2019-51-2020-05","text_gpt3_token_len":381,"char_repetition_ratio":0.1399177,"word_repetition_ratio":0.007017544,"special_character_ratio":0.21994135,"punctuation_ratio":0.09422492,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9739557,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-09T06:11:30Z\",\"WARC-Record-ID\":\"<urn:uuid:d5ef50fb-cc32-4bc8-be12-3781522595e9>\",\"Content-Length\":\"136891\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5c2c7b59-2f32-4f3d-b19d-23f3fd98aaf3>\",\"WARC-Concurrent-To\":\"<urn:uuid:e7527b11-e136-42bf-b9e2-4dc7a1a6d0fb>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://chemistry.stackexchange.com/questions/1073/why-do-collisions-in-elementary-reactions-of-higher-orders-appear-to-be-more-lik\",\"WARC-Payload-Digest\":\"sha1:RCFZMH4OYQZCCBG4I4QXWS2TTX733JNA\",\"WARC-Block-Digest\":\"sha1:SIQFKO7S2VBUDA4IFJM4TS2NOJDVKLDB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540517557.43_warc_CC-MAIN-20191209041847-20191209065847-00504.warc.gz\"}"}
https://demo.smart-verticals.eu/maz/2021/04/10/how-to-calculate-inter-annotator-agreement/
[ "A case that is sometimes considered a problem with Cohen`s Kappa occurs when comparing the Kappa, which was calculated for two pairs with the two advisors in each pair that have the same percentage agree, but one pair gives a similar number of reviews in each class, while the other pair gives a very different number of reviews in each class. (In the following cases, there is a similar number of evaluations in each class. , in the first case, note 70 votes in for and 30 against, but these numbers are reversed in the second case.) For example, in the following two cases, there is an equal agreement between A and B (60 out of 100 in both cases) with respect to matching in each class, so we expect Cohens Kappa`s relative values to reflect that. However, calculate Cohen`s Kappa for everyone: then, let us know, calculate an intermediary advertiser agreement. Download the dataset for real (ly)? good|bad in which two annotators with comments said whether a specific adjective set is used in an attributeive way or not. The category “Attributative” is relatively simple, in the sense that an adjective (expression) is used to change a Nostuntov. If a knot is not changed, it is not used in an attribute way. If the councillors are in complete agreement, No. 1. If there is no agreement between the councillors (other than what you might expect), it is ≤ 0. This is calculated by ignoring that pe is estimated from the data and treating in as an estimated probability of binomial distribution, while asymptomatic normality is used (i.e. assuming that the number of items is large and that this in is not close to 0 or 1).\n\nS E – Display style SE_ -kappa (and CI in general) can also be enjoyed with bootstrap methods. In this story, we examine the Inter-Annotator Agreement (ILO), a measure of how multiple annotators can make the same annotation decision for a given category. Controlled algorithms for the processing of natural languages use a labeled dataset, which is often annotated by humans. An example would be the schematic of my master`s thesis, in which the tweets were called abusive or not. kappa2 () is the feature that gives you the real advertiser agreement. But it`s often a good idea to also draw a cross table of annotators, so you get a perspective on the actual numbers: So, as a body linguist, make a decision for the note, but you really want the Dataset user with some sort of metric on how sure you are remarking in this category. That`s when the inter-annotator agreement comes into play. There are actually two ways to calculate the agreement between the annotators. The first approach is nothing more than a percentage of overlapping choices between the annotators. This approach is a bit biased, because it is perhaps a pure chance that there is a high horse. In fact, this could be the case if there are only a very limited number of category levels (only yes versus no, or so), so the chance of having the same remark is already 1 in 2. It is also possible that the majority of observations belong to one of the levels of the category, so that the horses at first sight are already potentially high.\n\nThe weighted Kappa allows differences of opinion to be weighted differently and is particularly useful when codes are ordered. :66 Three matrixes are involved, the matrix of observed scores, the matrix of expected values based on random tuning and the weight matrix. The weight dies located on the diagonal (top left to bottom-to-right) are consistent and therefore contain zeroes. 
Off-diagonal cells contain weights that indicate the severity of this disagreement. Often the cells are weighted outside diagonal 1, these two out of 2, etc. If statistical significance is not a useful guide, what is Kappa`s order of magnitude that reflects an appropriate match? The guidelines would be helpful, but other factors than the agreement may influence their magnitude, making it problematic to interpret a certain order of magnitude." ]
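The post computes kappa with R's kappa2(). As an independent illustration of the "same percent agreement, different kappa" point in Python, the small sketch below uses invented label vectors (100 items, exactly 60 agreements); sklearn's cohen_kappa_score is a real function, while the data are hypothetical.

from sklearn.metrics import cohen_kappa_score

# 100 hypothetical items labelled by two annotators; exactly 60 labels agree.
ann_a = ['attr'] * 70 + ['not'] * 30
ann_b = ['attr'] * 45 + ['not'] * 25 + ['attr'] * 15 + ['not'] * 15

p_o = sum(a == b for a, b in zip(ann_a, ann_b)) / len(ann_a)
print(p_o)                              # 0.60 -> raw percent agreement
print(cohen_kappa_score(ann_a, ann_b))  # about 0.13 once chance agreement is removed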
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94727975,"math_prob":0.93923193,"size":3975,"snap":"2021-43-2021-49","text_gpt3_token_len":840,"char_repetition_ratio":0.10526316,"word_repetition_ratio":0.0029498525,"special_character_ratio":0.20880502,"punctuation_ratio":0.097402595,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96445495,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T14:48:14Z\",\"WARC-Record-ID\":\"<urn:uuid:87a1524f-2d51-4070-9ec4-aa046ad731bd>\",\"Content-Length\":\"17211\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:94e16481-e2a3-4584-92d1-84792d55223b>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b1fc56b-bd6d-47fa-a67a-053d4ba2accd>\",\"WARC-IP-Address\":\"185.30.32.88\",\"WARC-Target-URI\":\"https://demo.smart-verticals.eu/maz/2021/04/10/how-to-calculate-inter-annotator-agreement/\",\"WARC-Payload-Digest\":\"sha1:4FXFQOLFGNZSK6NEXAIGVWUXU2W5PI3F\",\"WARC-Block-Digest\":\"sha1:WWBVJH2TMDLQGRQTGXXJ6X44SFOAUKHU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585696.21_warc_CC-MAIN-20211023130922-20211023160922-00164.warc.gz\"}"}
https://it.scribd.com/document/370621734/65-Comparison-of-Single-and-Two-phase-Models-for-Nanofluid-Convection-at-the-Entrance-of-a-Uniformly-Heated-Tube
[ "Sei sulla pagina 1di 11", null, "Seediscussions,stats,andauthorprofilesforthispublicationat:https://www.researchgate.net/publication/260608912\n\nArticle in InternationalJournalofThermalSciences·June2014\n\nDOI:10.1016/j.ijthermalsci.2014.01.014\n\nCITATIONS\n\n37\n\n669\n\n3authors,including:", null, "", null, "22 PUBLICATIONS 170 CITATIONS 65 PUBLICATIONS 293 CITATIONS\n\nSomeoftheauthorsofthispublicationarealsoworkingontheserelatedprojects:\n\nThinFilmsandCoatingswithSpectralAlterationforEnergyApplications", null, "Viewproject\n\nNumericalThermalCharacterizationofWater-hBNNanofluids", null, "Viewproject", null, "Contents lists available at ScienceDirect\n\nInternational Journal of Thermal Sciences\n\njournal homepage: www.els evier.com/locate/ijts", null, "Comparison of single and two-phase models for nano uid convection at the entrance of a uniformly heated tube\n\nSinan Göktepe, Kunt Atal ı k, Hakan Ertürk *\n\nDepartment of Mechanical Engineering, Bo gaziçi University Istanbul, Turkey", null, "article info\n\nArticle history:\n\nReceived 12 February 2013 Received in revised form 8 November 2013 Accepted 14 January 2014 Available online\n\nKeywords:\n\nNanouids Single-phase Two-phase Heat transfer enhancement Laminar ow Forced convection Full multiphase coupled\n\nabstract\n\nMacroscopic modeling of hydrodynamic and thermal behavior of nano uid ows at the entry region of uniformly heated pipe is studied. Single-phase models with and without thermal dispersion effect, Eulerian e Eulerian, and Eulerian e Mixture two-phase models are evaluated by comparing predicted convective heat transfer coef cients and friction factors with experimental results from literature. So- lutions with two different velocity e pressure coupling algorithms, Full Multiphase Coupled, and Phase Coupled Semi-Implicit Method for Pressure Linked Equations are also compared in terms of accuracy and computational cost. Dispersion model that uses velocity gradient to de ne dispersion conductivity is found to be more effective at entry region compared to other single-phase models. However, two-phase models predict convective heat transfer coef cient and friction factor more accurately at the entry re- gion. Moreover, computational cost of Eulerian e Eulerian two-phase model can be reduced up to 50% by implementing Full Multiphase Coupled scheme.\n\n1. Introduction\n\nEngineered uids made of a base uid and nano sized particles such as CuO, Al 2 O 3 , or TiO 2 that form colloidal suspensions are referred as nano uids . The most commonly used base uids are water and ethylene glycol due to their use in conventional thermal systems. Measured thermal conductivities of nano uids are found to be exceeding predictions based on the Maxwell s effective me- dium theory that led many researchers to consider nano uids as next generation heat transfer uids . Therefore, nano uids are considered for many engineering applications such as, cooling of electronics [4,5] , vehicle thermal management , and solar energy systems [7,8] . Design and analysis of such systems necessitate ac- curate estimation of hydrodynamic and thermal characteristics of nano uids. Many experimental studies were carried out to quantify thermal and ow characteristics of nano uids for laminar and turbulent ow conditions [2,9 e13] . However, it is important to be able to model nano uid ow accurately in order to design equipment that operates with nano uids. It was observed that addition of\n\n* Corresponding author. Tel.: þ 90 212 359 7356. 
E-mail addresses: [email protected] (S. Göktepe), [email protected] (K. Atalı k), [email protected] (H. Ertürk).\n\nnanoparticles to a base uid, augments convective heat transfer together with an increase in pressure drop due to increased ther- mal conductivity and viscosity [1,2,9 e13] . Therefore, modeling tools should estimate both of these behaviors accurately. Macroscopic models for nano uid ow and heat transfer can be classi ed as single-phase and two-phase models [1 e3,7,9 e15] . Single-phase approaches consider nanoparticles and base uid as a single homogeneous uid with respect to its effective properties . Two-phase approaches handle continuity, momentum and energy equations for particles and base uid using three different methods. One of these methods used in this study is Eulerian e Mixture model (EMM) where momentum and energy equations are solved for mixture phase coupled with continuity equation for each phase, then phase velocities are related by empirical correlations [16,17] . The other method that is used in this study is the Eulerian e Eulerian model (EEM) where separate continuity, momentum, and energy equations for each phase are solved. This approach is sug- gested for ows where interactions between phases are not well de ned [17,18] . Although two-phase models provide a better un- derstanding of both phases, single-phase models are computa- tionally more ef cient, however provide less detail about each phase . Forced convection of Al 2 O 3 e water/EG nano uids in a uniformly heated tube at fully developed laminar and turbulent ow regimes using a homogeneous single-phase model is studied by Maiga et al.\n\n84\n\nS. Göktepe et al. / International Journal of Thermal Sciences 80 (2014) 83e92\n\n . While their predictions underestimate measured heat transfer coef cients, results indicate that addition of nanoparticles en- hances convective heat transfer coef cient of Al 2 O 3 e water nano- uid with 10% particle concentration by 60% at a Reynolds number of 250 . Experimental studies such as reported that the increase in convective heat transfer coef cient exceeds that of effective thermal conductivity. This indicates that there are different mechanisms in heat transfer enhancement for forced convection other than the enhancement in thermal conductivity. Single-phase thermal dispersion models are introduced in Refs. and to account for energy transport by random movement of nanoparticles, that is also known as thermal dispersion effects. Using thermal dispersion model presented in Ref. , Ozerinc et al. studied fully developed laminar forced convection of Al 2 O 3 e water nano uid by considering temperature dependent properties. The reported increase in convective heat transfer coef- cient of 2.5% Al 2 O 3 e water nano uid is 36% at a Peclet number of 6500. The results are in good agreement with the experimental data in the literature, suggesting that single-phase models considering thermal dispersion and temperature dependent properties are capable of predicting heat transfer behavior more accurately. Moraveji et al. showed that convective heat transfer increases as particle size decreases for developing Al 2 O 3 e water nano uid ow using single phase models. Mirmasoumi et al. investigated mixed convection of Al 2 O 3 e water nano uid in a horizontal tube using a two-phase EMM. They have shown that particle concentration is higher near the wall and bottom of the tube, hence uniform particle distribution is not valid for all cases. 
Nano uid forced convection in developing ow in a tube subjected to constant heat ux and temperature was studied by Bianco et al. by using single and two-phase models including volume of uid (VOF), EMM, EEM considering both constant and temperature dependent properties. According to their results difference between homogeneous single-phase and two- phase mixture model becomes signi cant at 11% volume concen- tration. Moreover, consideration of temperature dependent prop- erties gives a better estimation of convective heat transfer coef cient. They observed that convective heat transfer coef cient for 2.5% Al 2 O 3 e water nano uid at a Reynolds number of 250 in- creases up to 17%. Kalteh et al. numerically studied CuO e water nano uid laminar forced convection in a micro-channel by two-phase EEM. Although velocity and temperature differences between phases are negligible, EEM estimates convective heat transfer coef cient more accurately with respect to single-phase models. They also showed that particle e particle interactions have negligible effect on Nusselt number for laminar ow. Lotet al. evaluated homogeneous single-phase model, EMM, and EEM for Al 2 O 3 e water nano uid. The study neglected temperature dependency of properties and did not include thermal dispersion models. They reported that two- phase models overestimate fully developed heat transfer co- ef cients and EMM is the most accurate model among three two- phase models (EEM, EMM, VOF). Akbari et al. compared sin- gle and two-phase models for mixed convection heat transfer of Al 2 O 3 e water nano uid. Their study covers homogeneous single- phase and two phase models (VOF, EMM, EEM) with temperature dependent properties. It is reported that estimated convective heat transfer coef cients by two-phase models are similar. Two-phase models provide more accurate prediction of convective heat transfer coef cient with an overestimation, whereas single-phase model underpredicts convective heat transfer coef cient. Although single-phase models are found to be less accurate, it should be noted that the study did not include thermal dispersion models. Single and two-phase models for Al 2 O 3 e water nano uid are also studied by Frad et al. . They showed that two-phase\n\nmodels provide more accurate prediction of heat transfer of nano uids for fully developed ow. Although they are more accurate in predicting heat transfer, two-phase models are computationally more expensive than single-phase models due to the increased number of equations to be solved. Despite being expensive, the Phase Coupled Semi Im- plicit Method for Pressure Linked Equations (PC-SIMPLE) algorithm is widely used in literature due to its robustness [16,18,24,26] for EEM. Computational cost of two-phase EEM can be reduced by using Full Multiphase Coupled (FMC) algorithm for velocity and pressure coupling where equations are solved simultaneously rather than in a segregated manner like PC-SIMPLE. Considering the literature, there is no complete study that considers recent state-of-the art single and two-phase models for laminar forced convection of nano uids. This study aims at evalu- ating single-phase and two-phase models by considering the effect of temperature dependent properties, and dispersion effects for single-phase models, together with the rst time use of FMC for two-phase EEM of nano uid forced convection. Results are compared with experimental data available in literature in terms of error and required CPU time. 
Al2O3–water nanofluid with 42 nm nanoparticles is considered throughout the study due to the availability of experimental data in the literature.\n\n2. Mathematical models\n\n2.1. Single-phase model\n\nSingle-phase models assume that the base fluid and nanoparticles have the same temperature and velocity field. Therefore, continuity, momentum and energy equations can be solved as if the fluid were a classical Newtonian fluid by using effective properties of the nanofluid. Effective properties are functions of particle size ($d_p$), type, shape, particle volume concentration ($\phi_p$) and temperature [2,14,27]. In this study, for the homogeneous single-phase (SPM) Al2O3–water nanofluid model with constant properties, the nanofluid thermal conductivity ($k_{nf}$) is determined by the correlation reported by Hamilton–Crosser, in order to take a simpler thermal conductivity model into account together with other advanced models. The formulation can be given as;\n\n$\frac{k_{nf}}{k_{bf}} = \frac{k_p + (n-1)k_{bf} - (n-1)(k_{bf}-k_p)\phi_p}{k_p + (n-1)k_{bf} + (k_{bf}-k_p)\phi_p}$ (1)\n\nwhere $k_{bf}$ and $k_p$ are the base fluid and nanoparticle thermal conductivities, respectively, and $n = 3$ is the shape factor for spherical particles. This correlation does not consider the temperature and particle size dependency of thermal conductivity. To account for the temperature and particle size dependency of the thermal conductivity of Al2O3–water nanofluid, the correlation suggested by Chon et al. is used. This is a more advanced correlation compared to that presented by Hamilton and Crosser. The formulation can be given as;\n\n$\frac{k_{nf}}{k_{bf}} = 1 + 64.7\,\phi_p^{0.7460}\left(\frac{d_{bf}}{d_p}\right)^{0.3690}\left(\frac{k_p}{k_{bf}}\right)^{0.7476} Pr^{0.9955}\, Re_{disp}^{1.2321}$ (2)\n\nwhere $d_{bf}$ is the molecular diameter of the base fluid (0.29 nm for water). For this study we considered Al2O3 particles with 42 nm diameter. The Prandtl number ($Pr$) and the dispersion Reynolds number ($Re_{disp}$) in this correlation are defined as;\n\n$Pr = \frac{\mu_{bf}}{\rho_{bf}\,\alpha_{bf}}$ (3)\n\n$Re_{disp} = \frac{\rho_{bf}\, k_B\, T}{3\pi\,\mu_{bf}^2\, l_{bf}}$ (4)\n\nwhere $\rho_{bf}$ is the density of the base fluid, $k_B$ is the Boltzmann constant, $T$ is the absolute temperature, $l_{bf}$ is the mean free path between base fluid molecules (0.17 nm for water), $\alpha_{bf}$ is the thermal diffusivity of the base fluid and $\mu_{bf}$ is the temperature dependent base fluid viscosity (Pa s), defined as follows;\n\n$\mu_{bf} = 2.414\times 10^{-5}\; 10^{\,247.8/(T-140)}$ (5)\n\nIn this paper the homogeneous single-phase model with temperature dependent thermal conductivity is referred to as SPT. For the homogeneous single-phase models, the effective thermal conductivity is defined as;\n\n$k_{eff} = k_{nf}$ (6)\n\nThe density ($\rho_{nf}$) and heat capacity of the nanofluid ($c_{p,nf}$), on the other hand, are estimated by using classical mixture models [10,14,25] as follows;\n\n$c_{p,nf} = \frac{(1-\phi_p)\rho_{bf}\, c_{p,bf} + \rho_p\,\phi_p\, c_{p,p}}{\rho_{nf}}$ (7)\n\n$\rho_{nf} = (1-\phi_p)\rho_{bf} + \phi_p\,\rho_p$ (8)\n\nMaiga et al. presented a 2nd degree polynomial curve fit to experimental data for Al2O3–water nanofluid viscosity as;\n\n$\frac{\mu_{nf}}{\mu_{bf}} = 123\,\phi_p^2 + 7.36\,\phi_p + 1$ (9)\n\nwhich is used for all single-phase models in this study.\n\n2.2. Single-phase dispersion model\n\nThe single-phase dispersion model and the homogeneous models are differentiated by how the nanofluid effective conductivity is defined.
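As an aside (not part of the original paper), the following is a minimal Python sketch of how the single-phase property relations above, Eqs. (1)-(9), could be evaluated. The water and Al2O3 property values, the temperature, and the 1.6% volume fraction are rough placeholder assumptions for illustration only.

```python
import math

# Assumed base-fluid (water) and Al2O3 particle properties -- illustrative values only
k_bf, k_p = 0.613, 36.0          # thermal conductivities, W/m K
rho_bf, rho_p = 997.0, 3970.0    # densities, kg/m^3
cp_bf, cp_p = 4179.0, 765.0      # specific heats, J/kg K
d_bf, d_p = 0.29e-9, 42e-9       # molecular / particle diameters, m
l_bf = 0.17e-9                   # mean free path of water molecules, m
k_B = 1.380649e-23               # Boltzmann constant, J/K

def mu_bf(T):
    """Temperature-dependent water viscosity, Eq. (5); T in kelvin, result in Pa s."""
    return 2.414e-5 * 10.0 ** (247.8 / (T - 140.0))

def k_hamilton_crosser(phi):
    """Eq. (1): Hamilton-Crosser conductivity for spherical particles (n = 3)."""
    n = 3.0
    num = k_p + (n - 1) * k_bf - (n - 1) * (k_bf - k_p) * phi
    den = k_p + (n - 1) * k_bf + (k_bf - k_p) * phi
    return k_bf * num / den

def k_chon(phi, T):
    """Eq. (2): Chon et al. correlation, with Pr and Re_disp from Eqs. (3)-(4)."""
    mu = mu_bf(T)
    Pr = mu / (rho_bf * (k_bf / (rho_bf * cp_bf)))          # mu / (rho * alpha)
    Re_disp = rho_bf * k_B * T / (3.0 * math.pi * mu ** 2 * l_bf)
    return k_bf * (1.0 + 64.7 * phi ** 0.7460 * (d_bf / d_p) ** 0.3690
                   * (k_p / k_bf) ** 0.7476 * Pr ** 0.9955 * Re_disp ** 1.2321)

def mixture_properties(phi):
    """Eqs. (7)-(9): nanofluid density, heat capacity and Maiga viscosity ratio."""
    rho_nf = (1 - phi) * rho_bf + phi * rho_p
    cp_nf = ((1 - phi) * rho_bf * cp_bf + phi * rho_p * cp_p) / rho_nf
    mu_ratio = 123 * phi ** 2 + 7.36 * phi + 1
    return rho_nf, cp_nf, mu_ratio

print(k_hamilton_crosser(0.016), k_chon(0.016, 300.0), mixture_properties(0.016))
```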
Xuan and Roetzel introduced small perturbations in the momentum and energy equations to account for dispersion effects. Dispersion effects are then represented as an additional term in the nanofluid effective conductivity, referred to as the dispersion conductivity ($k_d$). For dispersion models, the effective nanofluid conductivity is given as;\n\n$k_{eff} = k_{nf} + k_{disp}$ (10)\n\nwhere $k_{nf}$ is the nanofluid thermal conductivity determined by a correlation such as Eq. (1) or (2). The formulation suggested by Xuan and Roetzel (SPD1) can be given as;\n\n$k_{disp} = C_1\,(\rho c_p)_{nf}\,\phi_p\, d_p\, R\, u$ (11)\n\nwhere $C_1$ is an empirical constant that calibrates the model to experimental data, $u$ is the flow velocity in the x direction, and $R$ is the radius of the pipe. A second formulation is suggested by Mokmeli and Saffar-Avval (SPD2) and it can be given as;\n\n$k_{disp} = C_2\,(\rho c_p)_{nf}\,\phi_p\,\frac{\partial u}{\partial r}\, R\, d_p$ (12)\n\nwhere $C_2$ is an empirical constant that calibrates the model to experimental data. When $C_1$ or $C_2$ is set to zero, both models reduce to the homogeneous single-phase model.\n\n2.3. Two-phase models\n\nFor two-phase nanofluid models, it is assumed that the base fluid and nanoparticles can have different velocity and temperature fields. Volume of Fluid (VOF), the Eulerian–Mixture model (EMM), and the Eulerian–Eulerian model (EEM) are the three common two-phase models used in the modeling of nanofluids. The volume of fluid model is not considered in this study since it is suggested for free surface flows.\n\n2.3.1. Eulerian–Mixture two-phase model\n\nIn EMM, continuity, momentum, and energy equations are solved for the mixture phase and the phase velocities are determined by empirical correlations. The continuity equation for EMM is given as;\n\n$\nabla \cdot (\rho_m \vec{v}_m) = 0$ (13)\n\nwhere the mass-averaged velocity or mixture velocity, $\vec{v}_m$, for the two-phase mixture is defined as;\n\n$\vec{v}_m = \frac{\rho_p \phi_p \vec{v}_p + \rho_{bf} \phi_{bf} \vec{v}_{bf}}{\rho_m}$ (14)\n\nwhere $\vec{v}_p$ is the particle velocity, $\vec{v}_{bf}$ is the base fluid velocity, and $\rho_m$ is the mixture density for the two-phase mixture, defined as;\n\n$\rho_m = \phi_p \rho_p + \phi_{bf} \rho_{bf}$ (15)\n\nThe steady state momentum equation for the two-phase mixture is;\n\n$\rho_m (\vec{v}_m \cdot \nabla)\vec{v}_m = -\nabla P + \nabla \cdot \left[\mu_m\left(\nabla \vec{v}_m + (\nabla \vec{v}_m)^T\right)\right] + \nabla \cdot \left(\phi_{bf}\rho_{bf}\vec{v}_{dr,bf}\vec{v}_{dr,bf} + \phi_p \rho_p \vec{v}_{dr,p}\vec{v}_{dr,p}\right)$ (16)\n\nwhere $P$ is the pressure, $\mu_m$ is the viscosity of the mixture, which is taken equal to the base fluid viscosity ($\mu_m = \mu_{bf}$), and $\vec{v}_{dr,p}$ and $\vec{v}_{dr,bf}$ are the drift velocities of the particles and the base fluid, respectively. Here, $\vec{v}_{dr,p}$ and $\vec{v}_{dr,bf}$ for the two-phase mixture are defined as;\n\n$\vec{v}_{dr,p} = \vec{v}_p - \vec{v}_m$ (17)\n\n$\vec{v}_{dr,bf} = \vec{v}_{bf} - \vec{v}_m$ (18)\n\nThe steady state energy equation for the two-phase mixture is given as;\n\n$\nabla \cdot \left(\phi_p \vec{v}_p \rho_p i_p + \phi_{bf} \vec{v}_{bf} \rho_{bf} i_{bf}\right) = \nabla \cdot \left(k_{eff} \nabla T\right)$ (19)\n\nwhere $i_{bf}$ and $i_p$ are the enthalpies of the base fluid and particles, respectively. The effective thermal conductivity for the two-phase mixture model, $k_{eff}$, is defined as;\n\n$k_{eff} = \phi_p k_p + \phi_{bf} k_{bf}$ (21)\n\nThe volume fraction equation for the two-phase mixture is;\n\n$\nabla \cdot \left(\phi_p \rho_p \vec{v}_m\right) = -\nabla \cdot \left(\phi_p \rho_p \vec{v}_{dr,p}\right)$ (22)\n\nThe slip velocity or relative velocity represents the velocity of the particles ($\vec{v}_p$) relative to the base fluid ($\vec{v}_{bf}$) and it is defined as;
bf ; p ¼ v\n\nv\n\n! bf v\n\n! p\n\n(23)\n\nThe relation between drift velocity and relative velocity can be presented as;\n\n! dr ; p ¼ ! v p ; bf\n\nv\n\nr p f p\n\nr\n\nm\n\n! bf ; p\n\nv\n\n(24)\n\nThe relative velocity is given by Manninen et al. through Schiller and Naumann drag formulation as follows;\n\n! v bf ; p ¼\n\nd\n\n2\n\np\n\nr p r m\n\n18 m bf f d\n\nr\n\np\n\n! a\n\nf d ¼\n\n1 þ 0 : 15 Re 0 : 687 0 : 0183 Re p\n\np\n\n!\n\na ¼ g\n\n! ð v\n\n! m \\$ V Þ v\n\n! m\n\nRe p 1000 Re p > 1000\n\n(25)\n\n(26)\n\n(27)\n\nwhere, ! a and ! g are the particle s and gravitational acceleration, respectively. The particle Reynolds number ( Re p ) for EMM is de ned as;\n\nRe p ¼ U m d p r m\n\nm\n\nm\n\n(28)\n\n2.3.2. Eulerian e Eulerian two-phase model In EEM, momentum and energy equations are solved for each phase. Interactions between phases are de ned by additional terms that represent momentum and heat exchange between phases . The steady continuity equation for each phase is given as;\n\nv f bf r bf u bf\n\nþ\n\n1\n\nv f bf r bf v bf\n\nv\n\nx\n\nr\n\nv\n\nr\n\n¼ 0\n\nv f p r p u p\n\nv r f p r p v p\n\n þ 1 v x r v r\n\n¼ 0\n\n(29)\n\n(30)\n\nwhere, u and v are the velocity components in axial ( x ) and radial ( r ) directions, respectively. For steady ow, axial-momentum equations for base uid and particles can be given, respectively as;\n\nu bf v f bf r bf u bf þ u bf v f bf r bf v bf\n\nv\n\nv\n\nv x þ v\n\nþ ð F d Þ x þ ð F vm Þ x\n\nx\n\nP\n\nv\n\nr\n\nx f bf m bf v u bf\n\nv\n\nv\n\nx\n\nþ 1 r v\n\n¼ f bf\n\nr f bf m bf r v u bf\n\nv\n\nv\n\nr\n\nu p v f p r p u p\n\nv x\n\nþ\n\nu p v f p r p v p ¼ f p\n\nv\n\nr\n\nP\n\nv\n\nv x þ v\n\nx f p m p\n\nv\n\nv\n\nu p\n\nv\n\nx\n\nf p m p r v u p\n\nþ 1 r v\n\nð F vm Þ x þ ð F col Þ x\n\nv\n\nr\n\nv\n\nr\n\nð F d Þ x\n\n(31)\n\n(32)\n\nSimilarly, the radial-momentum equations can be written as;\n\nv bf v f bf r bf u bf þ v bf v f bf r bf v bf\n\nv\n\nx\n\nv P\n\nv\n\nr\n\nv\n\nr\n\nx f bf m bf v v bf\n\nv\n\nþ v\n\nþ\n\nv\n\nx\n\nð F d Þ r þ ð F vm Þ r\n\nþ 1 r v\n\n¼ f bf\n\nm bf v bf 2\n\nr\n\nr f bf m bf r v v bf\n\nv\n\nv\n\nr\n\n(33)\n\nv p v f p r p u p\n\nv x\n\nþ\n\nv p v f p r p v p\n\nv r\n\n¼ f p v r þ v x f p m p v x\n\nv\n\nP\n\nv\n\nv\n\nv p\n\nr f p m p\n\nþ 1 r v\n\nð F v m Þ r þ ð F col Þ r\n\nv\n\nv\n\nv p\n\nv\n\nr\n\nm p\n\nv\n\np\n\n2\n\nr\n\nð F d Þ r\n\n(34)\n\nwhere, ! F vm is the virtual mass force due to relative acceleration of phases . Since it has negligible effect on heat transfer as shown\n\nin Ref. , it is neglected in this study. ! F col is particle e particle\n\ncollision force, and F d is the drag force between the uid and particle phases de ned by phase interaction equations. The drag\n\nforce between phases is de ned as;\n\n!\n\n!\n\nF d\n\n¼ b v\n\n! bf v\n\n! p\n\n(35)\n\nwhere, b is the friction factor, which depends on particle volume concentration and particle size. For dilute solutions ( f bf 0.8), b is de ned by Syamlal and Gidaspow as;\n\nb ¼\n\n3\n\n4\n\nC d f p f bf\n\nd\n\np\n\n! v bf ! v p\n\nf\n\n2 : 65 bf\n\n(36)\n\nwhere, C d is the drag coef cient and can be predicted by;\n\nC d ¼\n\n8\n\n<\n\n:\n\np 1 þ 0 : 15 Re 0 : 697\n\n24\n\nRe\n\np\n\n0 : 44\n\nRe p 1000\n\nRe p < 1000\n\n(37)\n\nHere, particle Reynolds number, Re p , is de ned as;\n\nRe p ¼\n\nf bf r bf\n\n! v bf v d p\n\n!\n\np\n\nm bf\n\n(38)\n\nCollision force of particles, ! F col ; is de ned by Bouillard et al. as;\n\n! 
F col ¼ G f bf V f bf\n\n(39)\n\nwhere, G ( f bf ) is the particle e particle interaction modulus and it is\n\nde ned as;\n\nG f bf ¼ exp 600 f bf 0 : 376\n\n(40)\n\nOnce momentum equations for each phase and the related\n\nphase interaction equations are de ned, energy equations for each phase can be presented as follows;\n\nv\n\nx f bf r bf u bf c p ; bf T bf þ\n\nv\n\nv\n\nr f bf r bf v bf c p ; bf T bf\n\nv\n\n¼\n\nx f bf k eff ; bf v T bf\n\nv\n\nv\n\nv\n\nx\n\nh v T bf T p\n\nþ 1 r v\n\nv\n\nr f bf k eff ; bf r v T bf\n\nv\n\nr\n\n(41)\n\nS. Göktepe et al. / International Journal of Thermal Sciences 80 (2014) 83e 92\n\n87\n\nv\n\nv\n\nx\n\nf p r p u p c p ; p T p þ\n\nv\n\nv\n\nr\n\nf p r p v p c p ; p T p\n\n¼\n\nv\n\nv\n\nx\n\nf p k eff ; p\n\nv\n\nT p\n\nv\n\nx\n\nþ 1 r v\n\nv\n\nr\n\nf p k eff ; p r v T p\n\nv\n\nr\n\nþ h v T bf T p\n\n(42)\n\nwhere, k eff,p and k eff,bf are the effective thermal conductivity of base uid and particles, and h v is volumetric interphase heat transfer coef cient between phases. The temperature for particle and base uid phases are T p and T bf , respectively. Volumetric interphase heat transfer coef cient is de ned as;\n\nh v ¼ 6 1 f bf\n\nd p\n\nh p\n\n(43)\n\nFor mono dispersed particles, particle heat transfer coef cient, h p , is estimated by a correlation presented by Wakao and Kaguei through particle Nusselt number ( Nu p ) as;\n\nNu\n\np ¼ h p d p ¼ 2 þ 1 : 1 Re 0 : 6 Pr 1 = 3\n\nk\n\nbf\n\np\n\n(44)\n\nwhere, Pr is the Prandtl number of base uid. The particle Reynolds number is de ned by Eq. (38) . The effective conductivities for particle and base uid phases are presented as ;\n\n3.1. Single-phase model\n\nFor single-phase model, momentum, energy, and continuity\n\nequations are solved with effective nano uid properties for prob- lem domain in Fig. 1. Third order power-law scheme is used in discretization of momentum, energy and pressure equations. Semi Implicit Method of Pressure Linked Equations (SIMPLE) algorithm is used for velocity pressure coupling. A convergence criterion is used so that residuals for all equations are less than 1 10 6 . Boundary conditions for governing equations of single-phase model are de ned considering the no-slip condition at the wall;\n\nu ð x ; R Þ ¼ v ð x ; R Þ ¼ 0\n\nand uniform inlet velocity\n\nu ð 0 ; r Þ ¼ U ;\n\nv ð 0 ; r Þ ¼ 0\n\nwith constant wall heat ux ð q w Þ condition, expressed as;\n\n00\n\nk eff\n\nv\n\nT\n\nv\n\nr\n\nr ¼ R\n\n00\n\n¼ q w\n\n(52)\n\n(53)\n\n(54)\n\nDispersion model empirical constants C 1 and C 2 in Eqs. (14) and (15) are determined according to the experimental data provided by Ref. .\n\nk eff ; bf ¼ k b ; bf\n\nf\n\nbf\n\nand\n\nk eff ; p ¼ k b ; p", null, "f\n\np\n\nwhere, k b,p and k b,bf are de ned as;\n\nk b ; bf\n\n¼ 1\n\np\n\nffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi\n\n1 f bf\n\nk bf\n\nk b ; p ¼\n\np\n\nffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi\n\n1 f bf\n\nð uA þ ½ 1 u GÞ k bf\n\n(45)\n\n(46)\n\n(47)\n\n(48)\n\nwith u ¼ 7.26 10 3 for spherical particles and G is de ned as;\n\nG ¼\n\n0\n\n@\n\n2\n\nB ð A 1 Þ\n\n2 ln A\n\n1 B\n\nB\n\nA\n\nA 1\n\nB\nA\n\nB 1 B þ 1\n\n1 B\n\n2\n\nA\n\n1 A\n\n(49)\n\n3.2. Two-phase models\n\nEulerian e Eulerian model (EEM) and EMM are used with the third order power-law scheme for solving momentum and energy equations. The third order Quadratic Upstream Interpolation for Convective Kinetics (QUICK) scheme is used to solve volume frac- tion equation. For EMM, SIMPLE algorithm is used for pressure and velocity coupling. 
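As an illustration (not the authors' code), here is a small sketch of the Eulerian-Eulerian phase-interaction closures described in this section: the interphase friction factor and drag coefficient of Eqs. (36)-(38), written with the conventional Schiller-Naumann-type exponent 0.687 and the usual switch at Re_p = 1000, and the Wakao-Kaguei particle Nusselt correlation of Eqs. (43)-(44). All numerical inputs are placeholder values.

```python
def drag_coefficient(Re_p):
    """Drag coefficient C_d, Eq. (37): 24/Re_p (1 + 0.15 Re_p^0.687) below Re_p = 1000, else 0.44."""
    return 24.0 / Re_p * (1.0 + 0.15 * Re_p ** 0.687) if Re_p <= 1000.0 else 0.44

def interphase_friction(phi_p, phi_bf, rho_bf, slip, d_p, mu_bf):
    """Eqs. (36) and (38): particle Reynolds number and dilute-mixture friction factor beta."""
    Re_p = phi_bf * rho_bf * slip * d_p / mu_bf
    return 0.75 * drag_coefficient(Re_p) * phi_p * phi_bf * rho_bf * slip / d_p * phi_bf ** (-2.65)

def interphase_heat(Re_p, Pr, k_bf, d_p, phi_bf):
    """Eqs. (43)-(44): Wakao-Kaguei Nu_p, particle coefficient h_p and volumetric h_v."""
    Nu_p = 2.0 + 1.1 * Re_p ** 0.6 * Pr ** (1.0 / 3.0)
    h_p = Nu_p * k_bf / d_p
    h_v = 6.0 * (1.0 - phi_bf) / d_p * h_p
    return h_p, h_v

# Placeholder inputs: 1.6% Al2O3-water, 42 nm particles, ~1 mm/s slip velocity
print(interphase_friction(0.016, 0.984, 997.0, 1e-3, 42e-9, 8.9e-4))
print(interphase_heat(1e-6, 6.0, 0.613, 42e-9, 0.984))
```

For 42 nm particles the slip Reynolds number is essentially zero, so Nu_p stays close to 2 and the interphase exchange term is very large, which is consistent with the negligible temperature difference between phases noted in the literature review above.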
However, for EEM, in addition to Phase Coupled SIMPLE (PC-SIMPLE) algorithm that is widely used in literature, a different algorithm, namely Full Multiphase Coupled (FMC), is used for coupling scheme to assess its accuracy and ef ciency. Phase Coupled-SIMPLE is a well-established, widely used, and robust algorithm, where phase velocities are solved in a segregated manner with a pressure correction. Velocity, shared pressure, and volume fraction corrections are coupled simultaneously in FMC algorithm. Although FMC is expected to be more ef cient, diver- gence may occur at high particle volume concentrations due to volume fraction equation . Boundary conditions for EMM are de ned based on no-slip\n\ncondition at wall for mixture phase as;\n\nu m ð x ; R Þ ¼ v m ð x ; R Þ ¼ 0\n\n(55)\n\nwhere, parameters A and B for spherical particles are given as;\n\nA ¼ k p\n\nk\n\nbf\n\nB ¼ 1 : 25 1 f bf\n\nf\n\nbf\n\n! 10\n\n9\n\nand uniform inlet velocity for base uid and particle phases, respectively;\n\n (50) u bf ð 0 ; r Þ ¼ U bf ; v bf ð 0 ; r Þ ¼ 0 (56) (51) u p ð 0 ; r Þ ¼ U p ; v p ð 0 ; r Þ ¼ 0 (57)\n\nwhere, U p ¼ U bf are mean velocities of base uid and particles. Uniform constant heat ux at wall is applied as,\n\n3. Problem statement and numerical method\n\nFinite Control Volume method is used to solve numerically the equations for nano uid ow and heat transfer. The problem domain is presented in Fig. 1 that is discretized by a uniform structured grid.\n\nk eff\n\nv\n\nT\n\nv\n\nr\n\nr ¼ R\n\n00\n\n¼ q w\n\n(58)\n\nSimilarly, the boundary conditions for EEM are de ned with respect to no-slip at boundaries for base uid and particle phases as;\n\n88\n\nS. Göktepe et al. / International Journal of Thermal Sciences 80 (2014) 83e92", null, "Fig. 1. Problem geometry and boundary conditions.\n\nu bf ð x ; R Þ ¼ v bf ð x ; R Þ ¼ 0\n\nu p ð x ; R Þ ¼ v p ð x ; R Þ ¼ 0\n\n(59)\n\n(60)\n\nwith uniform inlet velocity for base uid and particle phases as;\n\n u bf ð 0 ; r Þ ¼ U bf ; v bf ð 0 ; r Þ ¼ 0 u p ð 0 ; r Þ ¼ U p ; v p ð 0 ; r Þ ¼ 0\n\n(61)\n\n(62)\n\nand constant wall heat ux for particle uid mixture as;\n\nf p k eff ; p\n\nv\n\nT p\n\nv\n\nr\n\nr ¼ R þ f bf k eff ; bf v T bf\n\nv\n\nr\n\n3.3. Model validation\n\nr ¼ R ¼ q\n\n00\n\nw\n\n(63)\n\nA grid independence study is carried out in order to ensure grid independent solutions. Three different grid resolutions are compared with correlation given by Shah and experimental data reported in Ref. . The results indicate that 15 2000 grid yielded a grid independent solution. For both single and two phase models the local Nusselt number for nano uids ( Nu nf,x ) de ned as;\n\nNu nf ; x ¼ h ð x Þ D\n\nk\n\nnf\n\nq\n\n00\n\nw D\n\nk nf ð T w T mean Þ\n\n¼\n\n(64)\n\nVariation in local Darcy friction factor ( f x ) was also checked, and results indicate that 15 2000 grid has 0.7% error with respect to the theoretical value of Darcy friction factor for the fully developed region. Darcy friction factor used here is de ned as;\n\nf x ¼ 8 s w ð x Þ\n\nr bf U 2\n\n(65)\n\nwhere, s w ( x ) is local wall shear stress, U is the mean axial velocity. Reynolds number for nano uids is de ned as;\n\nRe ¼ r nf DU\n\nm nf\n\n4. Results and discussion\n\n(66)\n\nThe goal of this study is to assess effectiveness of single and two- phase models for predicting thermal and hydrodynamic charac- teristics of nano uids. 
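For post-processing, here is a brief sketch (again not the authors' code) of how the local Nusselt number of Eq. (64), Nu_nf,x = q''_w D / [k_nf (T_w - T_mean)], and the Darcy friction factor of Eq. (65), f_x = 8 tau_w(x) / (rho U^2), would be evaluated from wall data; the tube diameter, heat flux and temperatures below are placeholders.

```python
import numpy as np

def local_nusselt(q_w, D, k_nf, T_wall, T_mean):
    """Eq. (64): Nu_nf,x = q''_w D / (k_nf (T_w - T_mean))."""
    return q_w * D / (k_nf * (T_wall - T_mean))

def darcy_friction(tau_w, rho, U):
    """Eq. (65): f_x = 8 tau_w(x) / (rho U^2)."""
    return 8.0 * tau_w / (rho * U ** 2)

# Placeholder wall data at a few axial stations (e.g. sampled from a CFD solution)
T_wall = np.array([305.0, 308.0, 310.0, 312.0])   # K, assumed
T_mean = np.array([300.5, 301.5, 302.5, 303.5])   # K, assumed
print(local_nusselt(5000.0, 4.5e-3, 0.65, T_wall, T_mean))
print(darcy_friction(0.05, 998.0, 0.1))
```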
Therefore, solutions for single-phase ho- mogeneous, single-phase dispersion, two-phase Eulerian e Mixture, and two-phase Eulerian e Eulerian models are compared with experimental data presented by Wen and Ding at x / D ¼ 63 and\n\nx / D ¼ 115 for a Reynolds number of 1050. Comparison of single- phase models that are used here is presented in Table 1 to clarify any ambiguity regarding de nition of single-phase models. Comparisons of local Nusselt numbers for 1.6% Al 2 O 3 e water nano uid are presented in Figs. 2 and 3 . Single-phase models un- derestimate Nu nf, x at the entry region of circular tube, whereas both two-phase models overestimate Nu nf, x as can be seen from Figs. 2 and 3 . Error with experimental values are 31.4% and 19.6% at x / D ¼ 63 and x / D ¼ 116, respectively for SPM. For EEM, error values are 0.9% and 7.7% at the same locations, respectively. Nu x pre- dictions of our EMM and EMM by Ref. are very close to each other. Slight difference between models might be due to variation\n\nof thermal properties of Al 2 O 3 used. In this study thermophysical\n\nproperties of Al 2 O 3 are taken from National Institute for Standards and Technology . An additional comparison with experimental data from Ref. in terms of local convective heat transfer coef- cient ( h x ) versus x / D is also presented in Figs. 4 e7 to prevent any possible ambiguity due to de nition of nano uid thermal conductivity. Fig. 4 indicates that at the very beginning of entry region, effect of temperature dependency of properties is limited since the ow is not fully affected by the thermal boundary conditions and the temperature difference is not signi cant. The effect increases as the ow develops as it can be observed from comparison of SPM and SPT. Table 2 suggests that use of temperature dependent nano uid conductivity model increases solution accuracy up to 4% for single- phase models. Comparison of dispersion models reveals that, SPD2, that uses dispersion conductivity formulation by Eq. (12) , is more accurate in predicting heat transfer coef cient at entry region compared to SPD1, which uses formulation by Eq. (11) . As shown in Table 2 , SPD2 model is approximately 8% more accurate in pre- dicting heat transfer coef cient compared to SPD1. In Table 2 and Fig. 5 , it is also shown that for volume fraction of 1.6% at Reynolds number of 1050, predictive accuracy of EMM becomes superior to that of EEM as the ow develops. However the difference between two models is very small and as far as the accuracy of the models are considered, such difference can be neglected. It is also observed that both EEM and EMM start over-predicting as ow develops. Based on Figs. 2 and 4 , and Tables 2 e 4 , it can be concluded that the best single-phase model is SPD2. However, there is no clear indication on the better two-phase model, since both EEM and\n\nTable 1 Single-phase models and corresponding effective property models.\n\n Model name Thermal conductivity model(s) Viscosity model k nf k disp m nf SPM Eq. (1) 0 Eq. (9) SPT Eq. (2) 0 Eq. (9) SPD1 Eq. (2) Eq. (11) Eq. (9) SPD2 Eq. (2) Eq. (12) Eq. (9)\n\nS. Göktepe et al. / International Journal of Thermal Sciences 80 (2014) 83e 92\n\n89", null, "Fig. 2. Comparison of single-phase models with experimental data for 1.6% Al 2 O 3 ewater nano uid at Re ¼ 1050.\n\nFig. 5. Comparison of convective heat transfer coef cients ( h x ) of two-phase model with experimental data at Re ¼ 1050 for 1.6% Al 2 O 3 e water nanouid.", null, "Fig. 3. 
Comparison of two-phase models with experimental data for 1.6% Al 2 O 3 e water nano uid at Re ¼ 1050.", null, "Fig. 4. Comparison of convective heat transfer coef cients ( h x ) of single-phase model with experimental data at Re ¼ 1050 for 1.6% Al 2 O 3 ewater nano uid.\n\nFig. 6. Comparison of convective heat transfer coef cients of SPD2 with experimental data at Re ¼ 1050 for three different volume concentrations (0.6%, 1%, and 1.6%).", null, "Fig. 7. Comparison of convective heat transfer coef cients of EEM with experimental data at Re ¼ 1050 for three different volume concentrations (0.6%, 1%, and 1.6%).\n\n90\n\nS. Göktepe et al. / International Journal of Thermal Sciences 80 (2014) 83e92\n\nTable 2 Error in convective heat transfer coef cient for f p ¼ 1.6%.\n\n Model x / D ¼ 22 x / D ¼ 63 x / D ¼ 116 x / D ¼ 178 EEM 12.1% 7.7% 11.3% 25.6% EMM 12.8% 8.1% 10.8% 25.0% SPM 38.4% 37.6% 27.3% 21.3% SPT 38.2% 36.5% 25.1% 17.5% SPD1 32.8% 28.0% 12.1% 0.0% SPD2 24.2% 23.1% 9.5% 0.1% Table 3 Error in convective heat transfer coef fi cient for f p ¼ 1%. Model x / D ¼ 22 x / D ¼ 63 x / D ¼ 116 x / D ¼ 178 EEM 16.7% 5.7% 4.4% 15.5% EMM 17.4% 6.2% 3.9% 14.9% SPM 36.6% 29.5% 23.6% 17.3% SPT 35.9% 28.7% 22.6% 16.2% SPD1 31.0% 21.0% 11.7% 1.7% SPD2 24.9% 17.0% 9.5% 1.3% Table 4 Error in convective heat transfer coef fi cient for f p ¼ 0.6%. Model x / D ¼ 22 x / D ¼ 63 x / D ¼ 116 x / D ¼ 178 EEM 14.4% 6.0% 0.6% 8.5% EMM 19.2% 6.5% 1.2% 7.8% SPM 31.4% 22.0% 18.8% 12.6% SPT 12.8% 20.5% 17.0% 10.6% SPD1 27.8% 16.2% 10.9% 2.0% SPD2 23.8% 13.6% 9.6% 1.6%\n\nEMM have different accuracies at different volume fractions as shown in Tables 2 e 4 . For example, for volume fraction of 1% EMM model prediction error is smaller than that of EEM. However, for volume fraction of 0.6%, the opposite is true. Overall, results suggest that at entrance region, EEM performs better with low particle concentrations, whereas EMM performs better at higher particle volume concentrations. The prediction accuracies of SPD2 and EEM are investigated at different concentrations in Figs. 6 and 7 with Tables 2 e 4 . Results indicate that, EEM model underestimates convective heat transfer", null, "Fig. 9. Comparison of estimated Darcy friction factors of single-phase model at Re ¼ 1050 for 1.6%, 1%, and 0.6% Al 2 O 3 e water nanouid.\n\ncoef cient at the beginning of entry region where, SPD2 under- predicts until the calibration point ( x / D ¼ 176). However, as ow develops, EEM starts overestimating the convective heat transfer coef cient. Results in Tables 2 e 4 also suggest that calibration constant for SPD2 is independent of volume fraction and SPD2 and SPD1 perform best near the calibration point as expected. Accuracy of models at different Reynolds numbers is also investigated in terms of Nusselt number for volume fraction of 1.6%. In Fig. 8 Nusselt number predictions of models at different Reynolds numbers ( Re ¼ 1050, 1320, 1600, 1810) are presented and the re- sults are compared with experimental data from Ref. . Both dispersion models were calibrated based on the experimental data by Ref. at axial location of x / D ¼ 178 for Reynolds number of 1050 and volume fraction of 1.6%. Comparison of models in Fig. 8 reveals that the most accurate model in the Reynolds numbers range considered is the SPD2. Despite considering each phase individually, both two-phase models underestimate the change in Nusselt number with chang- ing Reynolds numbers. 
Although SPD2 is calibrated for Reynolds number of 1050, the model can predict the trend in experimental data accurately for other Reynolds numbers as well. Since desired", null, "Fig. 8. Comparison of accuracy of models in predicting Nu nf, x of 1.6% Al 2 O 3 e water nano uid at x/D ¼ 116 for different Reynolds numbers ( Re ¼ 1050, 1320, 1600, 1810).", null, "Fig. 10. Comparison of estimated Darcy friction factors of Eulerian eEulerian two- phase model at Re ¼ 1050 for 1.6%, 1%, and 0.6% Al 2 O 3 e water nano uid.\n\nS. Göktepe et al. / International Journal of Thermal Sciences 80 (2014) 83e 92\n\n91\n\nTable 5 Computational time [ s ] comparison of FMC and PC-SIMPLE.\n\n f p [%] Eulerian e Eulerian model Single-phase Eulerian e Mixture FMC PC-SIMPLE SIMPLE SIMPLE 0.6 82.2 157.1 77.6 552.8 1 89.8 158.6 78.1 572.7 1.6 111.7 163.8 81.7 566.5\n\naccuracy can be achieved with a single calibration for different volume fractions and Reynolds numbers, SPD2, is suggested for applications where calibration data is available. On the other hand, for nano uids with no prior experimental studies, one of the two- phase models is suggested. Another objective of this study is to assess the effectiveness of single and two-phase models in the estimation of hydrodynamic characteristics of nano uids. Estimated Darcy friction factors are presented in Figs. 9 and 10 for SPD2 and EEM, respectively for three volume fractions (0.6%, 1%, 1.6%). EMM model in not considered here since, the model showed no change in the friction factor compared to that of homogenous single-phase model. The single- phase model estimated less than 1% change in friction factor with respect to base uid for all three volume fractions. However, this contradicts the data reported by experimental studies such as Ref. . Hwang et al. reported 4.2% increase in friction factor for 0.3% Al 2 O 3 e water nano uid at a Reynolds number of 400. The EEM model estimated 2% increase in friction factor for the same nano- uid at a Reynolds number of 400. Based on estimated results, single-phase models estimate no change in friction factor. The major drawback of two-phase models is their computa- tional expense. The required CPU time was measured as 163.8 s for EEM using PC-SIMPLE algorithm, 77.6 s for single-phase models, and 566.5 s for EMM for 1.6% Al 2 O 3 e water nano uid ow at a Reynolds number of 1050 as shown in Table 5 . The numerical studies presented in this study were performed on a workstation operating at a Quad Core 2.4 GHz CPU and all four cores were used in calculations. Although, PC-SIMPLE is a very robust method, it is computationally expensive. A remedy might be the use of Full Multiphase Coupled (FMC) scheme for dilute nano uid systems. Fig. 11 shows that the predicted h x distribution is identical for FMC and PC-SIMPLE. As shown in Table 5 , computational cost of EEM model can be reduced by approximately 65%.", null, "Fig. 11. Comparison of two coupling algorithms for 1.6% Al 2 O 3 ewater nano uid at Re ¼ 1050.\n\n5. Conclusion\n\nSingle and two-phase models have been investigated for the characterization of laminar forced convection of Al 2 O 3 e water nano uid with various concentrations in a circular tube. Single- phase thermal dispersion model suggested by Mokmeli and Saffar-Avval is found to be the most accurate single-phase model. It is recommended for applications, when calibration data is available, thermal analysis is the objective, and computational ef ciency is important. 
Furthermore, it is shown that for Reynolds numbers and particle volume fractions considered in this study, calibration constant used in the de nition of dispersion conduc- tivity is independent of Reynolds number and volume fraction of particles. Therefore, the model can be used at varying Reynolds numbers and particle volume concentrations without any re- calibration. It is also observed that Eulerian e Eulerian and Euler- ian e Mixture models under predict heat transfer coef cient at the beginning of the entry region then, as the ow develops both models start to over predict the heat transfer coef cient. Consid- ering its computational ef ciency, Eulerian e Eulerian two-phase model is recommended for applications, when no prior experi- mental data is available and prediction of both heat transfer and pressure drop is important. For the Eulerian e Eulerian two phase model, computational cost can be reduced by the use of Full Multiphase Coupling algorithm without sacri cing solution accu- racy. The transport between phases in Eulerian e Eulerian two phase model is estimated based on correlations derived for macro parti- cles. Developing relations more suitable for nanoparticles are required to further improve the prediction accuracy of Eulerian e Eulerian two phase model.\n\nAcknowledgments\n\nThe authors would like to thank The Scienti c and Technological Research Council of Turkey (TUBITAK) for support under the grant 111M1777 of the 1001 Program.\n\nNomenclature\n\n C 1 , C 2 C d c p calibration constants for SPD1 and SPD2, respectively drag coef fi cient heat capacity d particle diameter D diameter of tube ! F force vector f Darcy friction coef fi cient h convective heat transfer coef fi cient G particle e particle interaction modulus ! gravitational acceleration vector g i enthalpy k thermal conductivity k B Boltzmann constant L length n shape factor Nu x local Nusselt number q 00 heat fl ux P pressure Pe Peclet number Pr Prandtl number R radius of pipe Re Reynolds number T temperature U axial mean velocity u , v x and r velocity components, respectively" ]
[ null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-0-2.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-0-30.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-0-41.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-0-78.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-0-83.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-1-9.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-1-22.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-1-39.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-5-364.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-6-8.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-7-8.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-7-43.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-7-60.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-7-90.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-8-539.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-8-587.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-8-611.jpg", null, "https://html1-f.scribdassets.com/71mewif074694xdt/phtml/pdf-obj-9-136.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89047706,"math_prob":0.97268105,"size":38538,"snap":"2020-34-2020-40","text_gpt3_token_len":10914,"char_repetition_ratio":0.159236,"word_repetition_ratio":0.12652335,"special_character_ratio":0.26892936,"punctuation_ratio":0.10733301,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9887489,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-03T10:00:48Z\",\"WARC-Record-ID\":\"<urn:uuid:429d9dc3-05f0-440f-b054-7bbf1bd44080>\",\"Content-Length\":\"1049752\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bc8a2822-62c5-4142-a6f0-79d0cbe95cf6>\",\"WARC-Concurrent-To\":\"<urn:uuid:4152e47e-3578-4997-ab60-fd450354a3d2>\",\"WARC-IP-Address\":\"151.101.250.152\",\"WARC-Target-URI\":\"https://it.scribd.com/document/370621734/65-Comparison-of-Single-and-Two-phase-Models-for-Nanofluid-Convection-at-the-Entrance-of-a-Uniformly-Heated-Tube\",\"WARC-Payload-Digest\":\"sha1:CR6BBUXTJLQAJPFOJOLPMTNSR65QEMY2\",\"WARC-Block-Digest\":\"sha1:RQFINSKADNKS2PLXPG2CEH2SLETMKPJ2\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735792.85_warc_CC-MAIN-20200803083123-20200803113123-00208.warc.gz\"}"}
https://www.pw.live/chapter-motion-class-9/motions-in-one-two-and-three-dimensions
[ "# MOTIONS IN ONE, TWO AND THREE DIMENSIONS (TYPE OF MOTION)\n\n## Motion of Class 9\n\nAs position of the object may change with time due to change in one or two or all the three coordinates, so we have classified motion as follows:\n\n## MOTION IN ONE DIMENSION:\n\nIf only one of the three co-ordinates specifying the position of object changes w.r.t. time. In such a case the object moves along a straight line and the motion therefore is also known as rectilinear or linear motion.\n\nEx.\n\n• Motion of train along straight railway track.\n• An object falling freely under gravity.\n• When a particle moves from P1 to P2 along a straight line path only the x-co-ordinate changes.\n\n### MOTION IN TWO DIMENSION:\n\nIf two of the three co-ordinates specifying the position of object changes w.r.t. time, then the motion of object is called two dimensional. In such a motion the object moves in a plane.\n\nEx.\n\n• Motion of queen on carom board.\n•  An insect crawling on the floor of the room.\n• Motion of object in horizontal and vertical circles etc.\n• Motion of planets around the sun.\n• A car moving along a zigzag path on a level road.\n\n### MOTION IS THREE DIMENSION:\n\nIf all the three co-ordinates specifying the position of object changes w.r.t. time, then the motion of object is called 3-D. In such a motion the object moves in a space.\n\nEx.\n\n• A bird flying in the sky (also kite).\n• Random motion of gas molecules.\n• Motion of an aeroplane in space." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8313426,"math_prob":0.9291333,"size":1791,"snap":"2023-40-2023-50","text_gpt3_token_len":445,"char_repetition_ratio":0.16228315,"word_repetition_ratio":0.17901234,"special_character_ratio":0.23450586,"punctuation_ratio":0.10764872,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97160417,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-04T10:50:18Z\",\"WARC-Record-ID\":\"<urn:uuid:89df5c5d-afc5-4271-aed3-682b48105149>\",\"Content-Length\":\"180436\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc4b5ae0-d3f6-4d03-bff7-039f676f7e95>\",\"WARC-Concurrent-To\":\"<urn:uuid:0a1d572e-00ea-4fcc-9b2f-0cc029b594c4>\",\"WARC-IP-Address\":\"108.138.85.95\",\"WARC-Target-URI\":\"https://www.pw.live/chapter-motion-class-9/motions-in-one-two-and-three-dimensions\",\"WARC-Payload-Digest\":\"sha1:CG3RIUOUH6HUOMOFMVDBKR5VK55ASXYZ\",\"WARC-Block-Digest\":\"sha1:5CLE7FH5HBOVT7CVCLEHN4LD3QTXC43U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100527.35_warc_CC-MAIN-20231204083733-20231204113733-00747.warc.gz\"}"}
https://fistf.info/problem-solving-lesson-5-6-dilations-44/
[ "# PROBLEM SOLVING LESSON 5-6 DILATIONS\n\nFor each pair, show that the two figures are similar by identifying a sequence of translations, rotations, reflections, and dilations that takes the smaller figure to the larger one. Label any new points. Problem 1 For each pair of points, find the slope of the line that passes through both points. More Geometry Lessons Transformation Games In these lessons, we will learn what is dilation or enlargement and reduction? Measure the longest side of each of the three triangles.", null, "Problem 5 from Unit 1, Lesson 14 The diagram shows two intersecting lines. If yes, what are the center of dilation and the scale factor? These two triangles are similar. Describe a sequence of translations, rotations, and reflections that takes Polygon P to Polygon Q. Problem 3 The two triangles shown are similar.\n\nWhat do you notice? Problem 3 Consider the graphed line.\n\nYou can use the free Mathway calculator and problem solver below to practice Algebra or other math topics. Measure the longest side of each of the three triangles. The straight line is drawn from a fixed point called the center of dilation.\n\n## Unit 2: Practice Problem Sets\n\nPlease submit your feedback or enquiries via our Feedback page. Measure the side lengths and angles of lewson polygon. Do you think two equilateral triangles will be similar alwayssometimesor never? Problem 3 Here is a triangle. Enlarge triangle PQR with O as the center of enlargement and scale factor. Problem 4 These two triangles are similar.\n\nESSAY TUNGKOL SA PULITIKA", null, "Scroll down the page for more examples and explanations of dilations. Are the two triangles similar? Problem 3 Make a perspective drawing.\n\nFor filations pair, show that the two figures are similar by identifying a sequence of translations, rotations, reflections, and dilations that takes the smaller figure to the larger one. Explain why they are similar.\n\n# Dilate triangles (practice) | Dilations | Khan Academy\n\nProblem 3 Here are three polygons. Problem 1 These two triangles are similar. The distance the points move depends on the scale factor. Find the measures of the following angles.\n\nThe diagram shows two nested triangles that share a vertex. More Geometry Lessons Transformation Games In these lessons, we will learn what is dilation or enlargement and reduction? Measure the side lengths and angles of your triangles. For each pair of points, find the slope of the line that passes through both points. Problem 1 Each diagram has a pair of figures, one larger than the other.\n\nESSAY ON DR BHIM RAO AMBEDKAR IN GUJARATI\n\n# Dilate points (practice) | Dilations | Khan Academy\n\nLabel any new points. Problem 2 Draw two equilateral triangles that are not congruent. Explain why they are not similar.", null, "These two triangles are similar. Problem 3 from Unit 1, Lesson 12 Describe a rigid transformation that you could use to show the polygons are congruent. dilationa", null, "Problem 2 Here are two similar polygons. Explain how you know. Describe a sequence of translations, rotations, and reflections that takes Polygon Solvinng to Polygon Q. For each pair, describe a point and a scale factor to use for a dilation moving the larger triangle to the smaller one. Use a measurement tool to find the scale factor." ]
[ null, "https://fistf.info/essay.png", null, "https://1.cdn.edl.io/gMWj5upkdHFnWIrlRMljAevw5LqfOcRgZlMoj5sZhe97JqeS.jpg", null, "http://www.clubdetirologrono.com/wp-content/uploads/2019/04/best-2nd-grade-math-lesson-plans-pdf-critical-thinking-worksheets-middle-school-regular-thanksgiving-for-toddlers-writing-word-problems-4th-1st-6th-960x742.png", null, "https://1.cdn.edl.io/WKVLYgayBBJqdKBvI7mNEilGkFxBjjrlq41MpcKwE3UX12k5.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8847103,"math_prob":0.98169625,"size":3129,"snap":"2020-24-2020-29","text_gpt3_token_len":638,"char_repetition_ratio":0.16256,"word_repetition_ratio":0.27961165,"special_character_ratio":0.19590923,"punctuation_ratio":0.11512028,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99714744,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,9,null,8,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-30T18:44:23Z\",\"WARC-Record-ID\":\"<urn:uuid:dcf78aed-654a-4207-ab1f-ebde71cffaee>\",\"Content-Length\":\"29831\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a1a18caa-603a-4e76-be7d-662071a7f07f>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3261cb5-fa11-48ca-8e7b-4d600f344c08>\",\"WARC-IP-Address\":\"104.24.108.192\",\"WARC-Target-URI\":\"https://fistf.info/problem-solving-lesson-5-6-dilations-44/\",\"WARC-Payload-Digest\":\"sha1:Y2TJX65Y3QBKSWHHSYGVM6FFJ2TY3C3O\",\"WARC-Block-Digest\":\"sha1:XR2MWXGBPC7YZX4CR4QNYDFX77WUCBRJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347410284.51_warc_CC-MAIN-20200530165307-20200530195307-00111.warc.gz\"}"}
https://www.physicsforums.com/threads/series-of-superimposed-regular-polygons.157987/
[ "# Series of superimposed regular polygons\n\nSuperimpose concentric regular polygons of equal area with maximal symmetry, starting with the equilateral triangle and sequentually approaching the circumference of a circle. What series can you derive for the fraction of the area not occupied by any successive polygons?\n\n## Answers and Replies\n\narildno\nScience Advisor\nHomework Helper\nGold Member\nDearly Missed\nOkay:\nNow, clearly we can form a radius sequence for each n-gon, where the radius for each n-gon $R_{n}$ is given by the formula:\n$$R_{n}=\\sqrt{\\frac{2A}{n\\sin(\\frac{2\\pi}{n})}}$$\nThis value is probably needed to solve your problem in some manner.\n\nHallsofIvy\nScience Advisor\nHomework Helper\nAssuming you mean a sequence of polygon inscribed in a circle of radius R, each n-gon can be interpreted as n isosceles triangle with congruent sides of length R and angle between them of $2\\pi/n$ which can then be divided into two right angles with angle $\\pi/n$. The base of each such triangle is $2R sin(\\pi/n)$ and the height is $R cos(\\pi/n)$ so the area of each triangle is $R^2 sin(\\pi/n) cos(\\pi/n)$ and the area of the entire n-gon is $nR^2 sin(\\pi/n) cos(\\pi/n)$.\n\nSince you are asking about the area inside the circle NOT in the polygon, that would be $\\pi R^2- nR^2 sin(\\pi/n) cos(\\pi/n)$ and the fraction of the area of the circle not occupied by the n-gon would be\n$$\\frac{\\pi- n sin(\\pi/n) cos(\\pi/n)}{\\pi}= 1-\\frac{n}{\\pi} sin(\\pi/n)cos(\\pi/n)[tex]. It's easy to see that the last term of that goes to 1 in the limit and the \"fraction of the area of the circlee not occupied by the n-gon\", of course, goes to 0. I'm not sure what you mean by \"fraction of the area not occupied by successive polygons\". Last edited by a moderator: arildno Science Advisor Homework Helper Gold Member Dearly Missed I also thought that was Loren's question, HallsofIvy! However, he explicitly states that we are talking of polygons having the SAME area, that is their vertices lie on different circles! Note therefore that the sequence of radii is decreasing, if I'm not mistaken. Thus, there will be area bits left that is not covered by subsequent polygons. Sorry, by \"a circle\" I meant that a sequence of regular polygons of equal area and n sides, as n approaches infinity, approaches a circle of equal area. arildno's formula, in an infinite series, might be used to determine my sequence - the fractional areas of regular polygons that are not included within successively sided, concentric, and bilaterally symmetric regular polygons of equal area. I believe his second post captures the gist of what I am proposing. HallsofIvy, would you repost your last formula? Last edited: I also thought that was Loren's question, HallsofIvy! However, he explicitly states that we are talking of polygons having the SAME area, that is their vertices lie on different circles! Note therefore that the sequence of radii is decreasing, if I'm not mistaken. Thus, there will be area bits left that is not covered by subsequent polygons. If a coaxial N+1 sided polygon is rotated with respect to the previous N sided regular polygon of the same area, would the amount of the N sided polygon that is not covered be changed? 
arildno Science Advisor Homework Helper Gold Member Dearly Missed Adding some, subtracting some..seems to become zero change..:blush: Adding some, subtracting some..seems to become zero change..:blush: I dont think it is simple, for instance what if they were both pentagons instead of polygons with a different number of sides? I believe rotation would affect the areas in question - that's why I asked for maximal symmetry - e. g., all polygons each resting on a side. HallsofIvy Science Advisor Homework Helper It was supposed to be [tex]\\frac{\\pi- n sin(\\pi/n) cos(\\pi/n)}{\\pi}= 1-\\frac{n}{\\pi} sin(\\pi/n)cos(\\pi/n)$$.\n\nI believe rotation would affect the areas in question - that's why I asked for maximal symmetry - e. g., all polygons each resting on a side.\nYou also said concentric polygons, so I take your posts to specify that each polygon has the same center axis and that a perpendicular bisector of the bottom side the n+1 polygon coincides with a perpendicular bisector of the bottom side of the previous n sided polygon, though the distance to the bottom side is shorter with each subsequent polygon.\nStill I have difficulty comming up with a plot of the respective polygons in polar coordinates. Is it sufficient for your purpose to just add up the area of the N-sided polygons that lie outside the radius of a circle of the same area?\n\nLast edited:\nI was trying to discern whether a unique series or fundamental constant could be derived from the problem at hand. It seems now that it is merely an exercize in excruciating geometry.", null, "Some derivation of HallsofIvy's formula would probably do the trick, though. Thanks all for your patience." ]
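As a rough numerical companion to the formulas in this thread (my own sketch, not posted by the participants), the snippet below evaluates arildno's equal-area circumradius R_n = sqrt(2A / (n sin(2*pi/n))) and HallsofIvy's uncovered fraction 1 - (n/pi) sin(pi/n) cos(pi/n) for a few values of n.

```python
import math

def equal_area_radius(n, A=1.0):
    """Circumradius of a regular n-gon of area A: R_n = sqrt(2A / (n sin(2*pi/n)))."""
    return math.sqrt(2.0 * A / (n * math.sin(2.0 * math.pi / n)))

def uncovered_fraction(n):
    """Fraction of a circle's area not covered by an inscribed regular n-gon."""
    return 1.0 - (n / math.pi) * math.sin(math.pi / n) * math.cos(math.pi / n)

for n in (3, 4, 6, 12, 100):
    print(n, round(equal_area_radius(n), 4), round(uncovered_fraction(n), 4))
# The equal-area radius decreases toward 1/sqrt(pi) and the uncovered fraction tends to 0,
# consistent with the limit remarks in the posts above.
```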
[ null, "https://www.physicsforums.com/styles/physicsforums/xenforo/smilies/oldschool/redface.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88705015,"math_prob":0.9608384,"size":291,"snap":"2021-04-2021-17","text_gpt3_token_len":62,"char_repetition_ratio":0.08362369,"word_repetition_ratio":0.0,"special_character_ratio":0.209622,"punctuation_ratio":0.083333336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99797547,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-19T00:10:17Z\",\"WARC-Record-ID\":\"<urn:uuid:f8933b26-663f-433f-adbb-30c72ed22a1e>\",\"Content-Length\":\"94333\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e0b473a8-52da-4166-b084-cba042269ef3>\",\"WARC-Concurrent-To\":\"<urn:uuid:7cdbfb6d-7458-4932-8273-19046c79019b>\",\"WARC-IP-Address\":\"172.67.68.135\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/series-of-superimposed-regular-polygons.157987/\",\"WARC-Payload-Digest\":\"sha1:HFXBVDLQJIEARHXSBRFU3VBLOLUSZURE\",\"WARC-Block-Digest\":\"sha1:FH6YJI5IUITXEPSDEZHCSCVBJMDCGPHO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038862159.64_warc_CC-MAIN-20210418224306-20210419014306-00239.warc.gz\"}"}
https://www.jiskha.com/questions/1065957/Ax-b-cx-make-x-the-subject-of-the-formula
[ "# Maths\n\nAx=b-cx.make x the subject of the formula\n\n1. 👍 0\n2. 👎 0\n3. 👁 48\n1. A x = b - c x Add c x to both sides\n\nA x + c x = b - c x + c x\n\nA x + c x = b\n\nx ( A + c ) = b Divide both sides by ( A + c )\n\nx = b / ( A + c )\n\n1. 👍 0\n2. 👎 0\nposted by Bosnian\n\n## Similar Questions\n\n1. ### maths-check my answers\n\nPlease check my answers 1) make c the subject of the formula a=b+cd----------b-a/d=c 2) make t the subject of the formula u=v+2t----------u-v/2=t 3) make n the subject of the formula m=3n+5--------m-5/3=n 4) make z the subject of\n\nasked by Anonymous on January 16, 2014\n2. ### maths\n\nYou are asked to write a program for the course calculator to help a fellow student practise changing the subject of a formula. The formula will be of the type Y = AX + B, with different integer values of A and B, and the student\n\nasked by kat on May 13, 2007\n3. ### mathematics for accounting\n\nthe question is the formula use in calculating depreciation that is reducing balance method which is (1-n√s/c)×100/1 now we were asked to now make each of n,s,and c in the formula the subject of the formula\n\nasked by david on May 4, 2016\n4. ### math\n\nthe question is the formula use in calculating depreciation that is reducing balance method which is (1-n√s/c)×100/1 now we were asked to now make each of n,s,and c in the formula the subject of the formula please help me\n\nasked by Anonymous on May 4, 2016\n5. ### maths\n\nfor he formula A=P+Prt, find the value of (a). (i) A when P = 750. r=0.09 and t=8 (ii) P when A=720.r=0.12 and t=5 (b) make t the subject of the formula\n\nasked by tia on October 19, 2010\n6. ### math\n\nIn the formula s=ut+1/2at square.Make a the subject of formula (b)find the values ot t when s=42,u=2 and a=8.Am getting difficulties in that number.\n\nasked by katrina on June 20, 2007\n7. ### maths-check my answers\n\nchange the subject of the formula  make x the Subject of the formula  y = 1/2 x + 1  -1 from both sides y-1=1/2 x ÷ by 1/2 from both sides y-1 over a 1/2 = x\n\nasked by Anonymous on January 16, 2014\n8. ### maths-check my answer\n\nchange the subject of the formula make x the Subject of the formula y = 1/2 x + 1 -2 from both sides y-2 = 1/2 x ÷ by 1/2 from both sides y-2/1/2 = x\n\nasked by Anonymous on January 16, 2014\n9. ### Maths\n\nCan some one check this for me..please... The customer wishes to buy a carpet 12.5 sq metres. The cost \\$C of supply and fitting the carpet has a price of \\$p per square metre and is given by the formula C= 75 + 12.5P Rearrange this\n\nasked by Zippy on December 15, 2007\n10. ### Maths\n\nMake h subject of formula in A=πr√h²-r²\n\nasked by Luke on February 17, 2017\n11. ### math\n\nMake I the subject of the formula V = IR. a) I = VR b) I = V/R c) I = R/V\n\nasked by Renee on January 5, 2009\n\nMore Similar Questions" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9376888,"math_prob":0.9826058,"size":2460,"snap":"2019-13-2019-22","text_gpt3_token_len":807,"char_repetition_ratio":0.21376221,"word_repetition_ratio":0.27663934,"special_character_ratio":0.33333334,"punctuation_ratio":0.06525573,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998809,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-20T17:41:56Z\",\"WARC-Record-ID\":\"<urn:uuid:bc0ee638-d057-4309-a732-417df1c7f10b>\",\"Content-Length\":\"19096\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:60d7419b-f5ee-4102-bc2c-36bc35c57c72>\",\"WARC-Concurrent-To\":\"<urn:uuid:85ffb3f1-871b-4d7a-8b09-0cb448586641>\",\"WARC-IP-Address\":\"66.228.55.50\",\"WARC-Target-URI\":\"https://www.jiskha.com/questions/1065957/Ax-b-cx-make-x-the-subject-of-the-formula\",\"WARC-Payload-Digest\":\"sha1:SFVWEVCAHY53HC3TFZCGUB2XEBR2CM2G\",\"WARC-Block-Digest\":\"sha1:A2OSKDA5TCUVOFE7P6TU6DILU5PFCNZI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256082.54_warc_CC-MAIN-20190520162024-20190520184024-00517.warc.gz\"}"}
https://electricalengineeringmcq.com/the-magnitude-of-each-line-current-in-a-y-connected-circuit-is/
[ "# The magnitude of each line current in a Y-connected circuit is\n\nThe magnitude of each line current in a Y-connected circuit is\n\n1. Equal to the corresponding phase current\n2. One-third the phase current\n3. Three times the corresponding phase current\n4. Zero\n\nCorrect answer: 1. Equal to the corresponding phase current" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8289299,"math_prob":0.96268284,"size":311,"snap":"2023-14-2023-23","text_gpt3_token_len":64,"char_repetition_ratio":0.18566775,"word_repetition_ratio":0.41666666,"special_character_ratio":0.19614148,"punctuation_ratio":0.03773585,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99347055,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-04T16:31:53Z\",\"WARC-Record-ID\":\"<urn:uuid:aae655fc-337d-47ac-aeab-2c44491ba7ef>\",\"Content-Length\":\"70243\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b5979ae-8e0a-4bea-aff1-537638b60aee>\",\"WARC-Concurrent-To\":\"<urn:uuid:7aa933f3-2a17-4ad2-bb85-33191e406b8a>\",\"WARC-IP-Address\":\"172.67.213.209\",\"WARC-Target-URI\":\"https://electricalengineeringmcq.com/the-magnitude-of-each-line-current-in-a-y-connected-circuit-is/\",\"WARC-Payload-Digest\":\"sha1:VYIH733MWGBJEKSSVQZUAYRYO5ZHCYLP\",\"WARC-Block-Digest\":\"sha1:B5AATBCPTUFBEST75EUUHGTOFPW4VQI3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224650201.19_warc_CC-MAIN-20230604161111-20230604191111-00535.warc.gz\"}"}
https://pypi.org/project/pygini/
[ "Compute the Gini index.\n\n## pygini\n\nVery simple module that computes the Gini index of a numpy array.\n\n## Installation\n\n```pip install pygini\n```\n\n## Usage\n\n```import numpy as np\nfrom pygini import gini\n\nRG = np.random.default_rng(0)\nA = RG.random(100)\nGI = gini(A)\n\n# Also compute along axis\nA = RG.random((100, 80, 80))\nGI = gini(A, axis=0)\n\n# GI.shape = (80, 80)\n```\n\nSee examples directory.\n\n## Project details\n\nThis version", null, "1.0.1", null, "1.0.0", null, "0.0.1" ]
[ null, "https://pypi.org/static/images/blue-cube.e6165d35.svg", null, "https://pypi.org/static/images/white-cube.8c3a6fe9.svg", null, "https://pypi.org/static/images/white-cube.8c3a6fe9.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5232898,"math_prob":0.93103254,"size":868,"snap":"2019-51-2020-05","text_gpt3_token_len":241,"char_repetition_ratio":0.10300926,"word_repetition_ratio":0.015873017,"special_character_ratio":0.26958525,"punctuation_ratio":0.15517241,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9968373,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-28T09:36:51Z\",\"WARC-Record-ID\":\"<urn:uuid:8a88db2d-401f-4c19-ac5c-4b6eaaa9e423>\",\"Content-Length\":\"44653\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:950a81cb-901e-4bf3-90d5-8e2e2874c769>\",\"WARC-Concurrent-To\":\"<urn:uuid:f1de845e-88d1-455a-a37f-d010a6f47ce5>\",\"WARC-IP-Address\":\"151.101.128.223\",\"WARC-Target-URI\":\"https://pypi.org/project/pygini/\",\"WARC-Payload-Digest\":\"sha1:ZCA4PM7CRDEGIK3Q3O65ALRLW622X7MW\",\"WARC-Block-Digest\":\"sha1:MWPYOJBCZ7X2LRUJIZ7NQFOVFPIAC2HN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251778168.77_warc_CC-MAIN-20200128091916-20200128121916-00140.warc.gz\"}"}
https://socratic.org/questions/the-width-of-a-rectangle-is-5-less-than-twice-its-length-if-the-area-of-the-rect
[ "# The width of a rectangle is 5 less than twice its length. If the area of the rectangle is 126 cm^2, what is the length of the diagonal?\n\nSep 10, 2015\n\n$\\sqrt{277} \\text{cm\" ~~ 16.64\"cm}$\n\n#### Explanation:\n\nIf $w$ is the width of the rectangle, then we are given that:\n\n$w \\left(w + 5\\right) = 126$\n\nSo we would like to find a pair of factors with product $126$ which differ by $5$ from one another.\n\n$126 = 2 \\cdot 3 \\cdot 3 \\cdot 7 = 14 \\cdot 9$\n\nSo the width of the rectangle is $9 \\text{cm}$ and the length is $14 \\text{cm}$\n\nAlternative method\n\nInstead of factoring in this way, we could take the equation:\n\n$w \\left(w + 5\\right) = 126$\n\nrearrange it as ${w}^{2} + 5 w - 126 = 0$\n\nand solve using the quadratic formula to get:\n\n$w = \\frac{- 5 \\pm \\sqrt{{5}^{2} - \\left(4 \\times 1 \\times 126\\right)}}{2 \\times 1} = \\frac{- 5 \\pm \\sqrt{25 + 504}}{2}$\n\n$= \\frac{- 5 \\pm \\sqrt{529}}{2} = \\frac{- 5 \\pm 23}{2}$\n\nthat is $w = - 14$ or $w = 9$\n\nWe are only interested in the positive width so $w = 9$, giving us the same result as the factoring.\n\nFinding the diagnonal\n\nUsing Pythagoras theorem, the length of the diagonal in cm will be:\n\n$\\sqrt{{9}^{2} + {14}^{2}} = \\sqrt{81 + 196} = \\sqrt{277}$\n\n$277$ is prime, so this does not simplify any further.\n\nUsing a calculator find $\\sqrt{277} \\approx 16.64$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8433772,"math_prob":1.0000014,"size":611,"snap":"2022-27-2022-33","text_gpt3_token_len":142,"char_repetition_ratio":0.13673806,"word_repetition_ratio":0.0,"special_character_ratio":0.20785597,"punctuation_ratio":0.07377049,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000001,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-14T18:21:05Z\",\"WARC-Record-ID\":\"<urn:uuid:a837c30c-ca3a-48c6-a3b7-c60a6600d599>\",\"Content-Length\":\"36096\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0a760fa6-618c-408e-b99b-3fa89bddbc65>\",\"WARC-Concurrent-To\":\"<urn:uuid:0f7d300d-faa3-43fd-8c38-641421bea644>\",\"WARC-IP-Address\":\"216.239.36.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/the-width-of-a-rectangle-is-5-less-than-twice-its-length-if-the-area-of-the-rect\",\"WARC-Payload-Digest\":\"sha1:7IOTJCVTVOYVRQGKDRVMKVNSQRBI76RL\",\"WARC-Block-Digest\":\"sha1:UHXUXOKJEBJGFJGZDH5X7UU5XQB5VYAI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572063.65_warc_CC-MAIN-20220814173832-20220814203832-00750.warc.gz\"}"}
https://economics.stackexchange.com/questions/5158/for-what-demand-function-is-a-monopoly-most-harmful
[ "# For what demand function is a monopoly most harmful?\n\nConsider a firm with zero marginal cost. If it gives the product for free, then all the demand is satisfied and the social welfare increases by the maximum possible amount; call this increase $W$.\n\nBut because the firm is a monopoly, it reduces the demand and increases the price in order to optimize its revenue. Now the social welfare increases by a smaller amount, say, $V$.\n\nDefine the relative loss of welfare (deadweight loss) as: $W/V$. This ratio depends on the shape of the demand function. So my question is: is this ratio bounded, or can it be arbitrarily large? In particular:\n\n• If $W/V$ is bounded, then for what demand function is it maximized?\n• If $W/V$ is unbounded, then for what family of demand functions can it become arbitrarily large?\n\nHere is what I tried so far. Let $u(x)$ be the consumers' marginal utility function (which is also the inverse demand function). Assume that it is finite, smooth, monotonically decreasing, and scaled to the domain $x\\in[0,1]$. Let $U(x)$ be its anti-derivative. Then:", null, "• $W = U(1)-U(0)$, the total area under $u$.\n• $V = U(x_m)-U(0)$, where $x_m$ is the amount produced by the monopoly. This is the area under $u$ except the \"deadweight loss\" part.\n• $x_m = \\arg \\max (x \\cdot u(x))$ = the quantity which maximizes the producer's revenue (the marked rectangle).\n• $x_m$ can usually be calculated using the first-order condition: $u(x_m) = -x_m u'(x_m)$.\n\nTo get some feeling of how $W/V$ behaves, I tried some function families.\n\nLet $u(x)=(1-x)^{t-1}$, where $t>1$ is a parameter. Then:\n\n• $U(x)=-(1-x)^{t}/t$.\n• The first-order condition gives: $x_m=1/t$.\n• $W=U(1)-U(0) = 1/t$\n• $V=U(x_m)-U(0)=(1-(\\frac{t-1}{t})^{t})/t$\n• $W/V=1/[1-(\\frac{t-1}{t})^{t}]$\n\nWhen $t\\to\\infty$, $W/V \\to 1/(1-1/e)\\approx 1.58$, so for this family, $W/V$ is bounded.\n\nBut what happens with other families? Here is another example:\n\nLet $u(x)=e^{-t x}$, where $t>0$ is a parameter. Then:\n\n• $U(x)=-e^{-t x}/t$.\n• The first-order condition gives: $x_m=1/t$.\n• $W=U(1)-U(0) = (1-e^{-t})/t$\n• $V=U(x_m)-U(0)=(1-e^{-1})/t$\n• $W/V=(1-e^{-t})/(1-e^{-1})$\n\nWhen $t\\to\\infty$, again $W/V \\to 1/(1-1/e)\\approx 1.58$, so here again $W/V$ is bounded.\n\nAnd a third example, which I had to solve numerically:\n\nLet $u(x)=\\ln(a-x)$, where $a>2$ is a parameter. Then:\n\n• $U(x)=-(a-x)log(a-x)-x$.\n• The first-order condition gives: $x_m=(a-x_m)\\ln(a-x_m)$. Using this desmos graph, I found out that $x_m \\approx 0.55(a-1)$. Of course this solution is only valid when $0.55(a-1)\\leq 1$; otherwise we get $x_m=1$ and there is no deadweight loss.\n• Using the same graph, I found out that $W/V$ is decreasing with $a$, so its supremum value is when $a=2$, and it is approximately 1.3.\n\nIs there another family of finite functions for which $W/V$ can grow infinitely?\n\n• Zero marginal cost does not imply zero production cost. Who bears the burden of this cost if the product is given away for free, and in what sense does social welfare is maximized then? – Alecos Papadopoulos Apr 16 '15 at 18:15\n• \"Let u(x) be the consumers' utility function (which is also the inverse demand function).\" $$.$$Isn´t it the consumers $\\texttt{marginal}$ utility function ? – callculus Apr 16 '15 at 18:18\n• Without having read most of it, harmful depends on the concept of social welfare, and how we weight those two. 
If we only look at household surplus, a smaller price-elasticity allows the firms to reap more of the surpluses. Consequently, the demand function D(p) = x, is \"worst\", if we focus consumer surplus. – FooBar Apr 16 '15 at 18:38\n• @AlecosPapadopoulos By $W$ I meant increase in social welfare due only to the trade (maybe I should have called it $\\Delta W$). In this sense, the production costs are irrelevant. – Erel Segal-Halevi Apr 16 '15 at 19:28\n• @calculus You are right, I corrected this, thanks! – Erel Segal-Halevi Apr 16 '15 at 19:36\n\n$P=\\begin{cases} \\frac{1}{Q} & \\text{if } Q>1 \\\\ 2-Q & \\text{if } Q\\leq 1 \\\\ \\end{cases}$.\nThe monopolist prices at $P=1$, but the consumers' surplus if $P=0$ is infinite, because the area under the demand curve contains $\\int_1^\\infty \\frac{1}{Q}dQ=\\infty$." ]
[ null, "https://i.stack.imgur.com/uSaVw.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82832944,"math_prob":0.9995937,"size":2863,"snap":"2020-45-2020-50","text_gpt3_token_len":937,"char_repetition_ratio":0.09653725,"word_repetition_ratio":0.037777778,"special_character_ratio":0.34788683,"punctuation_ratio":0.13190185,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997985,"pos_list":[0,1,2],"im_url_duplicate_count":[null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-31T13:53:06Z\",\"WARC-Record-ID\":\"<urn:uuid:37571ac6-6a50-4c50-835a-48818b8c93d0>\",\"Content-Length\":\"156339\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a9aad9f7-788e-4b07-954c-181cfa28d722>\",\"WARC-Concurrent-To\":\"<urn:uuid:e609f8e7-f729-4c3d-8d3b-ff2602011ac9>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://economics.stackexchange.com/questions/5158/for-what-demand-function-is-a-monopoly-most-harmful\",\"WARC-Payload-Digest\":\"sha1:WCUBC6XN2MJYNTOA5C63BPRSHIAZTC7Q\",\"WARC-Block-Digest\":\"sha1:BBRTPZOADRKQ3ARVHPZILPFTSRD7SX4E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107918164.98_warc_CC-MAIN-20201031121940-20201031151940-00585.warc.gz\"}"}
https://euclideanspace.com/maths/algebra/matrix/functions/diagonalise/index.htm
[ "# Maths - Matrix Diagonalisation\n\nSome matrices can be transformed to diagonal matrices, that is, a matrix where the terms not on the leading diagonal are zero.\n\nFor a symmetrical matrix we can rotate it to get a diagonal matrix, do some operation, then rotate it back to its original coordinates. This rotation matrix is the eigen matrix or the orthonormal basis of [A], in other words:\n\n[D] = [Q]-1 [A] [Q]\n\nwhere:\n\n• [D] = Diagonal matrix, diagonal terms are eigenvectors of A\n• [A] = Symmetrical Matrix\n• [Q] = Orthogonal matrix, columns are eigenvectors of A\n• [Q]-1 = inverse of [Q]\n\n## Derivation\n\nThe length of a vector squared is given by:\n\n|V|² = Vt * V\n\nwhere\n\n• V = vector\n• Vt = transpose of vector\n• |V| = length of vector\n\nThis length will be unchanged if the coordinates are rotated by a matrix [R]. In this case the vector V is replaced by [R]V and the transposed vector Vt is replaced by Vt[R]t (transposing both operands reverses the order) and the unchanged length is therefore:\n\n|V|² = Vt * V = Vt[R]t[R]V\n\ntherefore:\n\n[R]t[R] = \n\nwhere\n\n• = identity matrix\n• [R]t = transpose of rotation matrix\n\n## Inertia Tensor\n\nAn example of diagonalisation is an inertia tensor.\n\n1. Find the eigenvalues a by solving 0 = det{[A] - a) for a. The values of a are the principal moments of inertia.\n2. Find the eigenvectors v of A by solving A v = a v for v.\n3. Normalize the eigenvectors.\n4. Form the matrix C whose whose columns consist of the normalized\n5. D = Ct A C is the diagonal matrix of principal moments of inertia.\n\nIn principle, you can write down D directly after (1), however, completing (1) to (5) gives a check on your work.\n\nNote: Ct is the transpose of C.\n\nFor this case where the only off diagonal terms are 12 and 21, you\nknow it only needs a rotation about axis 3 to diagonalize it. Use a\nsimilarity transformation:\nA'JA where A is the 3x3 rotation matrix about z.\nSolve to find\nj12 = 0 = j11cos²(a) -j22sin²(a) solve for a\nj22 = j22cos²(a) - j11sin²(a)\nj11 = j11cos²2(a) - j22sin²(a)\nI don't know if that's easier.\n\n metadata block see also: Correspondence about this page Book Shop - Further reading. Where I can, I have put links to Amazon for books that are relevant to the subject, click on the appropriate country flag to get more details of the book or to buy it from them.", null, "", null, "", null, "", null, "", null, "", null, "", null, "Mathematics for 3D game Programming - Includes introduction to Vectors, Matrices, Transforms and Trigonometry. (But no euler angles or quaternions). Also includes ray tracing and some linear & rotational physics also collision detection (but not collision response). Other Math Books Specific to this page here:\n\nThis site may have errors. Don't use for critical systems." ]
[ null, "https://euclideanspace.com/Library/1584500379.01.TZZZZZZZ.jpg", null, "https://euclideanspace.com/Library/us50.gif", null, "https://euclideanspace.com/Library/amazon-uk-flag-small.gif", null, "https://euclideanspace.com/Library/amazon-de-flag-small.gif", null, "https://euclideanspace.com/Library/amazon-jp-flag-small.gif", null, "https://euclideanspace.com/Library/amazon-fr-flag-small.gif", null, "https://euclideanspace.com/Library/ca-flag.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8695084,"math_prob":0.99193555,"size":2007,"snap":"2022-27-2022-33","text_gpt3_token_len":577,"char_repetition_ratio":0.14128807,"word_repetition_ratio":0.010610079,"special_character_ratio":0.28649727,"punctuation_ratio":0.083333336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996779,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T14:31:56Z\",\"WARC-Record-ID\":\"<urn:uuid:d7c0bd5a-3fef-4706-86b7-753b0b203411>\",\"Content-Length\":\"16975\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8e33f3e8-a69e-4a06-aa9c-4978c580de84>\",\"WARC-Concurrent-To\":\"<urn:uuid:c7bfd433-2148-4781-8eea-4966cfd9ec28>\",\"WARC-IP-Address\":\"217.160.0.191\",\"WARC-Target-URI\":\"https://euclideanspace.com/maths/algebra/matrix/functions/diagonalise/index.htm\",\"WARC-Payload-Digest\":\"sha1:QIOBCIGUQ2OJLTGGMLMHIKQQJT5OIYFA\",\"WARC-Block-Digest\":\"sha1:NDOJV7O36BOPIKOTNWHY3QTYHANRMIPW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103556871.29_warc_CC-MAIN-20220628142305-20220628172305-00233.warc.gz\"}"}
https://cs50.stackexchange.com/questions/4451/i-dont-understand-how-the-worst-case-performance-of-bubble-sort-is-on2/4454
[ "# I don't understand how the worst case performance of bubble sort is O(n^2)\n\nThe course materials say the worst case of bubble sort is O(n2) – when every item in the list is in the worst possible position. I'm having trouble wrapping my mind around this.\n\nIf I have a list of 5 numbers – 5, 4, 3, 2, 1 – and I want them to be sorted in ascending order, they would be in the worst possible position. If I go through each of the swaps it would look like this:\n\n5 4 3 2 1\n\n4 5 3 2 1\n\n4 3 5 2 1\n\n4 3 2 5 1\n\n4 3 2 1 5\n\n3 4 2 1 5\n\n3 2 4 1 5\n\n3 2 1 4 5\n\n2 3 1 4 5\n\n2 1 3 4 5\n\n1 2 3 4 5\n\nThis is not 25 swaps. So why is bubble sort said to be O(n2)?\n\nWell, the number of iterations you did was basically\n\nn + (n - 1) + (n - 2) + ... + 1\n\nMathematically, this is equal to n (n + 1) / 2 which is equal to n^2 + n / 2 and since we don't care about small numbers (n / 2) in this case, we can ignore it (if n = 1000, n^2 = 1,000,000 while n/2 = 500 which is not a big deal). So we can say it's a O(n^2) algorithm.\n\n• Actually I think the worst case takes (n-1) + (n-2) + ... + 1 = n^2/2 - n/2 times of swapping. – Charlie Lee Jan 16 '18 at 12:38\n• @CharlieLee the number of iterations and the number of swaps are not the same. We use the number of iterations here because I think it's more representing of the amount of work. For example, in the best case there are no swaps, but we still have to do n iterations to confirm. – kzidane Jan 16 '18 at 15:08\n• Thanks for the explanation! – Charlie Lee Jan 18 '18 at 9:48\n\nThe Landau-Notation Big O considers the asymptotic runtime, since that doesn´t help you, that means: You are right n2 isn´t the real runtime, it is (n*n-1) / 2. But if you consider that n is a real real big number, tending to infinity, factors or divisors of n just doesn´t matter anymore, so in the Big O notation you don´t mention them anymore.\n\nBubble sort is one of the easiest sorting technique from the point of view of implementation, but the one of the worst to get into practical use. It has best, worst (, and hence average) case all equal to `O(n^2)`.\n\nLets see how? This pseudocode is copied from Wikipedia (and I made little modifications in it). For this question I am not dealing with the optimized code mentioned there, but believe me, that too runs in `O(n^2)`. This code will arrange the elements in ascending order.\n\n``````procedure bubbleSort( A : list of sortable items )\nn = length(A)\nrepeat\nfor i = 0 to n-2 inclusive do\n/* if this pair is out of order */\nif A[i] > A[i + 1] then\n/* swap them and remember something changed */\nswap( A[i-1], A[i] )\nend if\nend for\nuntil not swapped\nend procedure\n``````\n\nIn Bubble Sort, firstly, We scan the list elements from starting to the second last element. During the scan, we compare the `i`th element with the `i+1`th element(i.e. the element next to it). If `i`th element is greater than its next element, then we swap them(swapping takes constant time, not linear). So during the first scan, we have ran into `n-1` operations(or unit jobs), And we have sorted the list, right?\n\nActually no. You took the largest element in the list to its correct position but we can't say anything about the rest of the elements. So we do the same thing again, scan it in the same way. Then after second scan, we would have taken the second largest element of the list to its correct position. 
And we keep on repeating this strategy `n-1` times, as that would make sure that we have got `n-1` elements at their desired position in the list. Now since we have ran a strategy `n-1` times (the strategy that itself takes `n-1` time units), then the total time taken should be\n\n`````` (n-1)*(n-1)\n= n*n + 1 - 2*n // considering part that effects the most\n= n*n\n``````\n\nAnd so, Bubble Sort takes `O(n^2)` time.\n\nLet us consider the list you mentioned, 5 4 3 2 1, lets sort it.\n\n``````5 4 3 2 1\n\n4 5 3 2 1\n4 3 5 2 1\n4 3 2 5 1\n4 3 2 1 5\n\n3 4 2 1 5\n3 2 4 1 5\n3 2 1 4 5\n3 2 1 4 5\n\n2 3 1 4 5\n2 1 3 4 5\n2 1 3 4 5\n2 1 3 4 5\n\n1 2 3 4 5\n1 2 3 4 5\n1 2 3 4 5\n1 2 3 4 5\n``````\n\nAnd that makes up to 16 steps which is exactly equal to `(n-1)^2`(and that's not 5^2 = 25!). The thing that you should note, is that if any algorithm runs in `O(n^2)`, that does not literally mean that you go on squaring the input size. No algorithm could have been directly written in its exact form(`n^2` or `n*log(n)` or ...) because the implementations vary from programmer to programmer, but the sole complexity remains the same(here `n*n`).\n\nFor example, consider above equation `(n-1)*(n-1)`. Now let me add that swapping itself took k units of time, then our equation would have been\n\n`(n-1)*(n-1 + (k)) = (n-1)*(n+k-1) = n*n + (k - 2)*n - k + 1`\n\nOfc, considering the part of the equation which effects it the most, we again land on `n^2`, no matter how one implements swap()!!!(unless constant time).\n\nFor any of your implementation of any algorithm (say of `n^2`), you can figure out exact number of steps in a similar way, that would result in form of\n\n`f(n) = a*n^2 + b*n + c`\n\nPutting correct values of `a`, `b`, `c`, you can get exact number of steps that your own implementation takes for a particular input. But when you tell someone the complexity of your code, then it means you need to tell the part of f(n) that has the most weight-age in that equation.\n\nGood Luck.\n\nP.S. :\n\n1. Consider ^ to be exponentiation and not XOR.\n\n2. Consider list and arrays to be same for now.\n\n• Thanks for this! – the pillow Jan 21 '20 at 5:18\n\nIf we actually count the number of iterations, then for n = 5, the total number of iterations is 10. Take a bubble sort implementation in java as for arr = [5,4,3,2,1]\n\n`````` for(int j=arr.length-1; j > 0; j--){\nfor(int i = 0; i<j;i++){\n\nif(arr[i] > arr[i+1]){\nswap(arr,i,i+1); //does swapping\n}\n}\n}\n``````\n\nThe above code iterates for j = {(0,1,2,3) + (0,1,2) + (0,1) + (0)} which is equal to 10 and which is equal to 5*(5-1)/2.\n\nHence, T(bubbleSort) = n*(n-1)/2 -> (n^2)/2 - ((n-1)^2)/2. This is the real equation.\n\nBig(O) is for representing the progression not the actual value which is obtained by the order of the equation or the first term in the equation which is for above case n^2." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90553313,"math_prob":0.99507207,"size":3456,"snap":"2021-31-2021-39","text_gpt3_token_len":1026,"char_repetition_ratio":0.11181924,"word_repetition_ratio":0.06506365,"special_character_ratio":0.307581,"punctuation_ratio":0.0931677,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99647063,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-23T15:47:40Z\",\"WARC-Record-ID\":\"<urn:uuid:982e8002-579f-41f8-ab15-6671314718bb>\",\"Content-Length\":\"169965\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:66c62457-946d-414b-8809-12562f4412c9>\",\"WARC-Concurrent-To\":\"<urn:uuid:837e831b-2a29-4b1f-92c4-d55655e3a756>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://cs50.stackexchange.com/questions/4451/i-dont-understand-how-the-worst-case-performance-of-bubble-sort-is-on2/4454\",\"WARC-Payload-Digest\":\"sha1:PKCAZJD74QSSD4WJPAFTXYDZ7FMWYAGI\",\"WARC-Block-Digest\":\"sha1:63QL4HEGGOFJZQ3C4XYM3556Y6ACSCSF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046149929.88_warc_CC-MAIN-20210723143921-20210723173921-00069.warc.gz\"}"}
http://www.talkstats.com/threads/design-of-experiments-with-existing-data.77486/
[ "Design of Experiments with existing data\n\nlorenzovonmt\n\nNew Member\nHello, I have an experiment generated by CFD (Computational Fluid Dynamics) that includes 3 input factors that are varied to obtain 1 response.\nI wish to use DOE (Design of Experiments) to determine how much the variation in the response can be explained by varying the 3 input factors.\nI realize I can use the 2 or 3K factorial design to determine this but I already have a specific set of data I want to run the experiment on.\n\nWhat method could I use to perform this analysis? I've attached the data set below.\nI'm currently using Minitab for my analysis but I'm willing to try other software.", null, "Last edited:\n\nGretaGarbo\n\nHuman\nI wish to use DOE to determine how much the variation in the response can be explained by varying the 3 input factors.\nDo you just want to use the usual R^2 - the multiple correlation coefficient?\n\nhlsmith\n\nLess is more. Stay pure. Stay poor.\nPlease define acronyms in the future! Welcome to the forum.\n\nlorenzovonmt\n\nNew Member\n@GretaGarbo @Miner The model that generated that data is non-linear so when I ran the linear regression on the data, the results didn't make much sense to me. For example, these are the coefficients calculated by the regression model:\nCoefficients", null, "Let's take the coefficient for X2 for example, it's equal to -4.02. If I'm not mistaken this means that a one-unit change in X2 will result in a 4.02% reduction in the response Y.\n\nHowever, if we look at designs 7 and 8 in the original data I posted, the only difference between the two designs is a one-unit change in X2, while X1 and X3 are kept constant. But the percent change in Y between designs 7 and 8 is 40%, not 4.02 or something closer. This is reflected throughout the data which is why the linear regression model didn't make sense to me.\n\nThese are some more results from the linear regression", null, "", null, "", null, "I've edited the original post to include the acronyms.\n\nkatxt\n\nActive Member\nLet's take the coefficient for X2 for example\nPerhaps X2 wasn't the best choice to investigate because it's not a significant predictor.\nYou could also try putting an interaction in your regression.\n\nDason\n\nLet's take the coefficient for X2 for example, it's equal to -4.02. If I'm not mistaken this means that a one-unit change in X2 will result in a 4.02% reduction in the response Y.\nYou are mistaken. It means that a one unit change in X2 will reduce the expected value by 4.02.\n\nlorenzovonmt\n\nNew Member\nYou are mistaken. It means that a one unit change in X2 will reduce the expected value by 4.02.\nThanks for the correction.\n\nPerhaps X2 wasn't the best choice to investigate because it's not a significant predictor.\nYou could also try putting an interaction in your regression.\nOk, since X3 is the only significant predictor, does the coefficient of X3 make sense if you compare it to the original data?\n\nkatxt\n\nActive Member\nSort of. A graph of Y vs X3 has a slope of about -1 which matches your table. It doesn't help that most of the points are on 34.\n\nlorenzovonmt\n\nNew Member\nI see. 
Most of the points are on 34 because I was performing a local sensitivity analysis by changing one parameter at a time to examine the effect on the response.\n\nSo the conclusion from this analysis is that the X1 and X2 are not significant predictors of the response, however, X3 is?\n\nDason\n\nJust fyi for the future if you come here (or to a statistician) before you actually conduct the experiment we can help you design something that will optimize power for a set sample size.\n\nMiner\n\nTS Contributor\nI see. Most of the points are on 34 because I was performing a local sensitivity analysis by changing one parameter at a time to examine the effect on the response.\n\nSo the conclusion from this analysis is that the X1 and X2 are not significant predictors of the response, however, X3 is?\nOne factor at a time experiments are inefficient and often unable to detect interactions.\n\nlorenzovonmt\n\nNew Member\nJust fyi for the future if you come here (or to a statistician) before you actually conduct the experiment we can help you design something that will optimize power for a set sample size.\nAlright so maybe I should explain the experiment from scratch. I performed an optimization experiment that included modifying 3 input parameters (X1,X2,X3) to achieve a response (Y). I generated 50 of such experiments (attached below). My goal is to figure out which of the 3 input parameters has the biggest effect on the response. So I took one of the 50 experiments and performed local sensitivity analysis by varying one factor at a time, which is the data I posted in the original post.\n\nOne factor at a time experiments are inefficient and often unable to detect interactions.\n\nAttachments\n\n• 937 bytes Views: 2\n\nkatxt\n\nActive Member\nThe Y vs X1 has an interesting hook. Try multiple regression as before on your 50 experiments but with an X1squared term included.\n\nlorenzovonmt\n\nNew Member\nThe Y vs X1 has an interesting hook. Try multiple regression as before on your 50 experiments but with an X1squared term included.\nTo get an X1 squared, should I square the column before performing the regression?\n\nkatxt\n\nActive Member\nIt doesn't look as if you put the squared term in. What did you call it? The x1 graph has an obvious minimum. Did you draw it?\n\nkatxt\n\nActive Member\nOK. You really need both the X1 and the X1squared terms in the model.\nSomething like this. Note the minimum about 100. Both the X1 and the X1squared terms are significant.\n\nAttachments\n\n• 202.8 KB Views: 1" ]
[ null, "http://www.talkstats.com/data/attachments/3/3563-3910339967f353eba2be427216c0f21a.jpg", null, "http://www.talkstats.com/data/attachments/3/3565-1af8ab4754b4000f594ab1378126d2f1.jpg", null, "http://www.talkstats.com/data/attachments/3/3566-77fdab7a15e0f3ef6fb46c57627a80a5.jpg", null, "http://www.talkstats.com/data/attachments/3/3567-46287c924c878dd7d9f865ce1dac9a1b.jpg", null, "http://www.talkstats.com/data/attachments/3/3568-ef5d02281ed14fc1bc6f5a590f6dba0f.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9327913,"math_prob":0.8303401,"size":1084,"snap":"2022-05-2022-21","text_gpt3_token_len":248,"char_repetition_ratio":0.10185185,"word_repetition_ratio":0.83505154,"special_character_ratio":0.21217713,"punctuation_ratio":0.061611373,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9756016,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-26T23:58:08Z\",\"WARC-Record-ID\":\"<urn:uuid:3c1e680d-95bb-4570-aa54-4d8b7f627fee>\",\"Content-Length\":\"109120\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe839c3c-ecbd-4b0a-9920-c09f783b50a2>\",\"WARC-Concurrent-To\":\"<urn:uuid:ba1b2e43-d8ca-43a3-bda4-f189c7ddd721>\",\"WARC-IP-Address\":\"199.167.200.62\",\"WARC-Target-URI\":\"http://www.talkstats.com/threads/design-of-experiments-with-existing-data.77486/\",\"WARC-Payload-Digest\":\"sha1:KHRQLRYTKN7F7XRYAXO6NXT6BBTJMJH7\",\"WARC-Block-Digest\":\"sha1:6QNV7CNJQP7K5UVVQSD7CRDM5UVG6NYV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305006.68_warc_CC-MAIN-20220126222652-20220127012652-00449.warc.gz\"}"}
https://rdrr.io/cran/DLMtool/man/CSRA.html
[ "# CSRA: Catch at size reduction analysis In DLMtool: Data-Limited Methods Toolkit\n\n## Description\n\nWhat depletion level and corresponding equlibrium F arise from data regarding mean length of current catches, natural mortality rate, steepness of the stock recruitment curve, maximum length, maximum growth rate, age at maturity, age based vulnerability, maturity at age, maximum age and number of historical years of fishing.\n\n## Usage\n\n `1` ```CSRA(M,h,Linf,K,t0,AM,a,b,vuln,mat,ML,CAL,CAA,maxage,nyears) ```\n\n## Arguments\n\n `M` A vector of natural mortality rate estimates `h` A vector of sampled steepness (Beverton-Holt stock recruitment) `Linf` A vector of maximum length (von Bertalanffy growth) `K` A vector of maximum growth rate (von Bertalanffy growth) `t0` A vector of theoretical age at length zero (von Bertalanffy growth) `AM` A vector of age at maturity `a` Length-weight conversion parameter a (W=aL^b) `b` Length-weight conversion parameter b (W=aL^b) `vuln` A matrix nsim x nage of the vulnerabilty at age (max 1) to fishing. `mat` A matrix nsim x nage of the maturity at age (max 1) `ML` A vector of current mean length estimates `CAL` A catch-at-length matrix nyears x (1 Linf unit) length bins `CAA` A catch-at-age matrix nyears x maximum age `maxage` Maximum age `nyears` Number of historical years of fishing\n\n## Author(s)\n\nT. Carruthers\n\nDLMtool documentation built on Dec. 6, 2019, 9:06 a.m." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.70905215,"math_prob":0.9492981,"size":1425,"snap":"2020-10-2020-16","text_gpt3_token_len":387,"char_repetition_ratio":0.13581985,"word_repetition_ratio":0.04255319,"special_character_ratio":0.23789474,"punctuation_ratio":0.1119403,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9766292,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-26T00:47:01Z\",\"WARC-Record-ID\":\"<urn:uuid:ca2e8ef2-3bd5-4dc1-ac85-cf62819de159>\",\"Content-Length\":\"61735\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:376b59fd-7a37-4b97-aae7-7159a6826069>\",\"WARC-Concurrent-To\":\"<urn:uuid:35eefe84-01cf-48d4-aafd-8b4d7d60cc8d>\",\"WARC-IP-Address\":\"104.28.6.171\",\"WARC-Target-URI\":\"https://rdrr.io/cran/DLMtool/man/CSRA.html\",\"WARC-Payload-Digest\":\"sha1:ZFKD7D62SE2ECJEIA32OEUD2F5VRZCRS\",\"WARC-Block-Digest\":\"sha1:NNZN3YE3WZ7INEVJAJ2DDYGYCCO5KXIQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146176.73_warc_CC-MAIN-20200225233214-20200226023214-00413.warc.gz\"}"}
https://iffgit.fz-juelich.de/fleur/fleur/commit/6645cae4e13e72f06131351a67a3379cad118694?view=inline&w=1
[ "### Merge branch 'kerker' into 'develop'\n\n```Merge branch kerker into develop\n\nSee merge request fleur/fleur!6```\nparents dbcba7f2 107ac463\n ... ... @@ -24,6 +24,7 @@ MODULE m_constants INTEGER, PARAMETER :: POTDEN_TYPE_POTTOT = 1 ! 0 < POTDEN_TYPE <= 1000 ==> potential INTEGER, PARAMETER :: POTDEN_TYPE_POTCOUL = 2 INTEGER, PARAMETER :: POTDEN_TYPE_POTX = 3 INTEGER, PARAMETER :: POTDEN_TYPE_POTYUK = 4 INTEGER, PARAMETER :: POTDEN_TYPE_DEN = 1001 ! 1000 < POTDEN_TYPE ==> density CHARACTER(2),DIMENSION(0:103),PARAMETER :: namat_const=(/& ... ...\n ... ... @@ -146,7 +146,8 @@ input%pallst = .false. ; obsolete%lwb = .false. ; vacuum%starcoeff = .false. input%strho = .false. ; input%l_f = .false. ; atoms%l_geo(:) = .true. noco%l_noco = noco%l_ss ; input%jspins = 1 input%itmax = 9 ; input%maxiter = 99 ; input%imix = 7 ; input%alpha = 0.05 ; input%minDistance = 0.0 input%itmax = 9 ; input%maxiter = 99 ; input%imix = 7 ; input%alpha = 0.05 input%preconditioning_param = 0.0 ; input%minDistance = 0.0 input%spinf = 2.0 ; obsolete%lepr = 0 ; input%coretail_lmax = 0 sliceplot%kk = 0 ; sliceplot%nnne = 0 ; vacuum%nstars = 0 ; vacuum%nstm = 0 input%isec1 = 99 ; nu = 5 ; vacuum%layerd = 1 ; iofile = 6 ... ...\n ... ... @@ -330,6 +330,7 @@ SUBROUTINE r_inpXML(& END SELECT input%alpha = evaluateFirstOnly(xmlGetAttributeValue('/fleurInput/calculationSetup/scfLoop/@alpha')) input%preconditioning_param = evaluateFirstOnly(xmlGetAttributeValue('/fleurInput/calculationSetup/scfLoop/@preconditioning_param')) input%spinf = evaluateFirstOnly(xmlGetAttributeValue('/fleurInput/calculationSetup/scfLoop/@spinf')) ! Get parameters for core electrons ... ...\n ... ... @@ -573,7 +573,7 @@ 8061 FORMAT (6x,i3,9x,i3,6x,i2,7x,f6.2,7x,f6.2) END IF input%preconditioning_param = 0.0 chform = '(5x,l1,'//chntype//'f6.2)' ! chform = '(5x,l1,23f6.2)' ... ...\n ... ... @@ -168,8 +168,8 @@ SUBROUTINE w_inpXML(& 110 FORMAT(' ') WRITE (fileNum,110) input%rkmax,stars%gmaxInit,xcpot%gmaxxc,input%gw_neigd ! 120 FORMAT(' ') ! 120 FORMAT(' ') SELECT CASE (input%imix) CASE (1) mixingScheme='straight' ... ... @@ -182,7 +182,7 @@ SUBROUTINE w_inpXML(& CASE DEFAULT mixingScheme='errorUnknownMixing' END SELECT WRITE (fileNum,120) input%itmax,input%minDistance,input%maxiter,TRIM(mixingScheme),input%alpha,input%spinf WRITE (fileNum,120) input%itmax,input%minDistance,input%maxiter,TRIM(mixingScheme),input%alpha,input%preconditioning_param,input%spinf ! 130 FORMAT(' ') ... ...\n ... ... @@ -537,6 +537,7 @@ ... ...\n ... ... @@ -799,10 +799,11 @@ ... ...\nThis source diff could not be displayed because it is too large. You can view the blob instead.\n ... ... @@ -76,7 +76,7 @@ CONTAINS ! Types, these variables contain a lot of data! TYPE(t_input) :: input TYPE(t_field) :: field TYPE(t_field) :: field, field2 TYPE(t_dimension):: DIMENSION TYPE(t_atoms) :: atoms TYPE(t_sphhar) :: sphhar ... ... @@ -119,6 +119,8 @@ CONTAINS oneD,coreSpecInput,wann,l_opti) CALL timestop(\"Initialization\") if( input%preconditioning_param /= 0 .and. input%film ) call juDFT_error('Currently no preconditioner for films', calledby = 'fleur' ) IF (l_opti) CALL optional(mpi,atoms,sphhar,vacuum,dimension,& stars,input,sym,cell,sliceplot,obsolete,xcpot,noco,oneD) ... ... 
@@ -236,13 +238,13 @@ CONTAINS !---< gwf CALL timestart(\"generation of potential\") CALL vgen(hybrid,field,input,xcpot,DIMENSION, atoms,sphhar,stars,vacuum,& sym,obsolete,cell, oneD,sliceplot,mpi ,results,noco,inDen,vTot,vx,vCoul) CALL vgen( hybrid, field, input, xcpot, DIMENSION, atoms, sphhar, stars, vacuum, & sym, obsolete, cell, oneD, sliceplot, mpi, results, noco, inDen, vTot, vx, & vCoul ) CALL timestop(\"generation of potential\") #ifdef CPP_MPI CALL MPI_BARRIER(mpi%mpi_comm,ierr) #endif ... ... @@ -251,7 +253,6 @@ CONTAINS forcetheoloop:DO WHILE(forcetheo%next_job(it==input%itmax,noco)) CALL timestart(\"generation of hamiltonian and diagonalization (total)\") CALL timestart(\"eigen\") vTemp = vTot ... ... @@ -418,19 +419,21 @@ CONTAINS CALL forcetheo%postprocess() CALL enpara%mix(mpi,atoms,vacuum,input,vTot%mt(:,0,:,:),vtot%vacz) IF (mpi%irank.EQ.0) THEN field2 = field ! ----> mix input and output densities CALL timestart(\"mixing\") CALL mix(stars,atoms,sphhar,vacuum,input,sym,cell,noco,oneD,hybrid,archiveType,inDen,outDen,results) CALL mix( field2, xcpot, dimension, obsolete, sliceplot, mpi, & stars, atoms, sphhar, vacuum, input, sym, cell, noco, & oneD, hybrid, archiveType, inDen, outDen, results ) CALL timestop(\"mixing\") if( mpi%irank == 0 ) then WRITE (6,FMT=8130) it WRITE (16,FMT=8130) it 8130 FORMAT (/,5x,'******* it=',i3,' is completed********',/,/) WRITE(*,*) \"Iteration:\",it,\" Distance:\",results%last_distance CALL timestop(\"Iteration\") !+t3e ENDIF ! mpi%irank.EQ.0 end if ! mpi%irank.EQ.0 #ifdef CPP_MPI ... ...\n ... ... @@ -250,6 +250,7 @@ CALL MPI_BCAST(input%jspins,1,MPI_INTEGER,0,mpi%mpi_comm,ierr) CALL MPI_BCAST(atoms%n_u,1,MPI_INTEGER,0,mpi%mpi_comm,ierr) CALL MPI_BCAST(atoms%lmaxd,1,MPI_INTEGER,0,mpi%mpi_comm,ierr) call MPI_BCAST( input%preconditioning_param, 1, MPI_DOUBLE, 0, mpi%mpi_comm, ierr ) #endif CALL ylmnorm_init(atoms%lmaxd) ! ... ...\nThis diff is collapsed.\n ... ... @@ -4,7 +4,9 @@ ! of the MIT license as expressed in the LICENSE file in more detail. !-------------------------------------------------------------------------------- MODULE m_vgen USE m_juDFT CONTAINS !> FLAPW potential generator !! The full potential is generated by the following main steps: ... ... @@ -16,8 +18,11 @@ CONTAINS !! TE_VCOUL : charge density-coulomb potential integral !! TE_VEFF: charge density-effective potential integral !! TE_EXC : charge density-ex-corr.energy density integral SUBROUTINE vgen(hybrid,field,input,xcpot,DIMENSION, atoms,sphhar,stars,& vacuum,sym,obsolete,cell,oneD,sliceplot,mpi, results,noco,den,vTot,vx,vCoul) SUBROUTINE vgen( hybrid, field, input, xcpot, DIMENSION, atoms, sphhar, stars, & vacuum, sym, obsolete, cell, oneD, sliceplot, mpi, results, noco, & den, vTot, vx, vCoul ) USE m_rotate_int_den_to_local USE m_bfield USE m_vgen_coulomb ... ... 
@@ -28,71 +33,72 @@ CONTAINS USE m_mpi_bc_potden #endif IMPLICIT NONE TYPE(t_results),INTENT(INOUT) :: results CLASS(t_xcpot),INTENT(IN) :: xcpot TYPE(t_hybrid),INTENT(IN) :: hybrid TYPE(t_mpi),INTENT(IN) :: mpi TYPE(t_dimension),INTENT(IN) :: dimension TYPE(t_oneD),INTENT(IN) :: oneD TYPE(t_obsolete),INTENT(IN) :: obsolete TYPE(t_sliceplot),INTENT(IN) :: sliceplot TYPE(t_input),INTENT(IN) :: input TYPE(t_field),INTENT(INOUT) :: field !efield can be modified TYPE(t_vacuum),INTENT(IN) :: vacuum TYPE(t_noco),INTENT(IN) :: noco TYPE(t_sym),INTENT(IN) :: sym TYPE(t_stars),INTENT(IN) :: stars TYPE(t_cell),INTENT(IN) :: cell TYPE(t_sphhar),INTENT(IN) :: sphhar TYPE(t_atoms),INTENT(IN) :: atoms TYPE(t_results), INTENT(INOUT) :: results CLASS(t_xcpot), INTENT(IN) :: xcpot TYPE(t_hybrid), INTENT(IN) :: hybrid TYPE(t_mpi), INTENT(IN) :: mpi TYPE(t_dimension), INTENT(IN) :: dimension TYPE(t_oneD), INTENT(IN) :: oneD TYPE(t_obsolete), INTENT(IN) :: obsolete TYPE(t_sliceplot), INTENT(IN) :: sliceplot TYPE(t_input), INTENT(IN) :: input TYPE(t_field), INTENT(INOUT) :: field !efield can be modified TYPE(t_vacuum), INTENT(IN) :: vacuum TYPE(t_noco), INTENT(IN) :: noco TYPE(t_sym), INTENT(IN) :: sym TYPE(t_stars), INTENT(IN) :: stars TYPE(t_cell), INTENT(IN) :: cell TYPE(t_sphhar), INTENT(IN) :: sphhar TYPE(t_atoms), INTENT(IN) :: atoms TYPE(t_potden), INTENT(INOUT) :: den TYPE(t_potden),INTENT(INOUT) :: vTot,vx,vCoul ! .. TYPE(t_potden), INTENT(INOUT) :: vTot,vx,vCoul TYPE(t_potden) :: workden,denRot if (mpi%irank==0) WRITE (6,FMT=8000) 8000 FORMAT (/,/,t10,' p o t e n t i a l g e n e r a t o r',/) CALL vTot%resetPotDen() CALL vCoul%resetPotDen() CALL vx%resetPotDen() ALLOCATE(vx%pw_w,vTot%pw_w,mold=vTot%pw) ALLOCATE(vCoul%pw_w(SIZE(den%pw,1),1)) ALLOCATE( vx%pw_w, vTot%pw_w, mold=vTot%pw ) ALLOCATE( vCoul%pw_w(SIZE(den%pw,1),1) ) CALL workDen%init(stars,atoms,sphhar,vacuum,input%jspins,noco%l_noco,0) CALL workDen%init( stars, atoms, sphhar, vacuum, input%jspins, noco%l_noco, 0 ) !sum up both spins in den into workden CALL den%sum_both_spin(workden) CALL den%sum_both_spin( workden ) CALL vgen_coulomb(1,mpi,DIMENSION,oneD,input,field,vacuum,sym,stars,cell,sphhar,atoms,workden,vCoul,results) CALL vgen_coulomb( 1, mpi, DIMENSION, oneD, input, field, vacuum, sym, stars, cell, & sphhar, atoms, workden, vCoul, results ) CALL vCoul%copy_both_spin(vTot) CALL vCoul%copy_both_spin( vTot ) IF (noco%l_noco) THEN CALL denRot%init(stars,atoms,sphhar,vacuum,input%jspins,noco%l_noco,0) CALL denRot%init( stars, atoms, sphhar, vacuum, input%jspins, noco%l_noco, 0 ) denRot=den CALL rotate_int_den_to_local(DIMENSION,sym,stars,atoms,sphhar,vacuum,cell,input,& noco,oneD,denRot) CALL rotate_int_den_to_local( DIMENSION, sym, stars, atoms, sphhar, vacuum, cell, input, & noco, oneD, denRot ) ENDIF call vgen_xcpot(hybrid,input,xcpot,DIMENSION, atoms,sphhar,stars,& vacuum,sym, obsolete,cell,oneD,sliceplot,mpi,noco,den,denRot,vTot,vx,results) call vgen_xcpot( hybrid, input, xcpot, DIMENSION, atoms, sphhar, stars, & vacuum, sym, obsolete, cell, oneD, sliceplot, mpi, noco, den, denRot, vTot, vx, results ) !ToDo, check if this is needed for more potentials as well... CALL vgen_finalize(atoms,stars,vacuum,sym,noco,input,vTot,denRot) DEALLOCATE(vcoul%pw_w,vx%pw_w) CALL vgen_finalize( atoms, stars, vacuum, sym, noco, input, vTot, denRot ) DEALLOCATE( vcoul%pw_w, vx%pw_w ) CALL bfield(input,noco,atoms,field,vTot) CALL bfield( input, noco, atoms, field, vTot ) ! 
broadcast potentials #ifdef CPP_MPI CALL mpi_bc_potden(mpi,stars,sphhar,atoms,input,vacuum,oneD,noco,vTot) CALL mpi_bc_potden(mpi,stars,sphhar,atoms,input,vacuum,oneD,noco,vCoul) CALL mpi_bc_potden(mpi,stars,sphhar,atoms,input,vacuum,oneD,noco,vx) CALL mpi_bc_potden( mpi, stars, sphhar, atoms, input, vacuum, oneD, noco, vTot ) CALL mpi_bc_potden( mpi, stars, sphhar, atoms, input, vacuum, oneD, noco, vCoul ) CALL mpi_bc_potden( mpi, stars, sphhar, atoms, input, vacuum, oneD, noco, vx ) #endif END SUBROUTINE vgen END MODULE m_vgen\n ... ... @@ -29,6 +29,8 @@ math/differentiate.f90 math/fft2d.F90 math/fft3d.f90 math/fft_interface.F90 math/SphBessel.f90 math/DoubleFactorial.f90 ) if (FLEUR_USE_FFTMKL) set(fleur_F90 \\${fleur_F90} math/mkl_dfti.f90) ... ...\n module m_DoubleFactorial implicit none contains real(kind=8) function DoubleFactorial( n_upper, n_lower ) ! calculates ( 2 * n_upper + 1 ) !! / ( 2 * n_lower + 1 ) !! or just ( 2 * n_upper + 1 ) !!, if n_lower is not present integer :: n_upper integer, optional :: n_lower integer :: i, i_lower i_lower = 1 if( present(n_lower) ) i_lower = n_lower + 1 DoubleFactorial = 1. do i = i_lower, n_upper DoubleFactorial = DoubleFactorial * ( 2 * i + 1 ) end do end function DoubleFactorial end module m_DoubleFactorial\n !-------------------------------------------------------------------------------- ! Copyright (c) 2016 Peter Grünberg Institut, Forschungszentrum Jülich, Germany ! This file is part of FLEUR and available as free software under the conditions ! of the MIT license as expressed in the LICENSE file in more detail. !-------------------------------------------------------------------------------- module m_SphBessel !------------------------------------------------------------------------- ! SphBessel calculates spherical Bessel functions of the first, ! second and third kind (Bessel, Neumann and Hankel functions). ! ModSphBessel calculates modified spherical Bessel functions ! of the first and second kind. ! ! jl : spherical Bessel function of the first kind (Bessel) ! nl : spherical Bessel function of the second kind (Neumann) ! hl : spherical Bessel function of the third kind (Hankel) ! il : modified spherical Bessel function of the first kind ! kl : modified spherical Bessel function of the second kind ! ! z : Bessel functions are calculated for this value ! lmax: Bessel functions are calculated for all the indices l ! from 0 to lmax ! ! intent(in): ! z : complex or real scalar ! lmax: integer ! ! intent(out): ! * SphBessel( jl, nl, hl, z, lmax ) ! jl: complex or real, dimension(0:lmax) ! nl: complex or real, dimension(0:lmax) ! hl: complex, dimension(0:lmax) ! * ModSphBessel( il, kl, z, lmax ) ! il: complex or real, dimension(0:lmax) ! kl: complex or real, dimension(0:lmax) ! ! All subroutines are pure and therefore can be called for a range of ! z-values concurrently, f.e. this way: ! allocate( il(0:lmax, size(z)), kl(0:lmax, size(z)) ) ! do concurrent (i = 1: size(z)) ! call ModSphBessel( il(:,i), kl(:,i), z(i), lmax ) ! end do ! ! details on implementation: ! For |z| <= 1 the taylor expansions of jl and nl are used. ! For |z| > 1 the explicit expressions for hl(+), hl(-) are used. ! For modified spherical Bessel functions il and kl the relations ! il(z) = I^{-l} * jl(I*z) ! kl(z) = -I^{l} * hl(I*z) ! are used. ! ! authors: ! originally written by R. Zeller (1990) ! modernised and extended by M. 
Hinzen (2016) !------------------------------------------------------------------------- implicit none complex, parameter :: CI = (0.0, 1.0) interface SphBessel module procedure :: SphBesselComplex, SphBesselReal end interface interface ModSphBessel ! variant Complex2 takes workspace as an argument. ! this is not possible for the subroutine working on reals. module procedure :: ModSphBesselComplex, ModSphBesselReal, ModSphBesselComplex2 end interface contains pure subroutine SphBesselComplex ( jl, nl, hl, z, lmax ) complex, intent(in) :: z integer, intent(in) :: lmax complex, dimension(0:lmax), intent(out) :: jl, nl, hl complex :: termj, termn, z2, zj, zn real :: rl, rn real, dimension(0:lmax) :: rnm integer :: l, m, n zj = 1.0 zn = 1.0 / z z2 = z * z jl(:) = 1.0 nl(:) = 1.0 if ( abs( z ) < lmax + 1.0 ) then SERIAL_L_LOOP: do l = 0, lmax rl = l + l termj = 1.0 termn = 1.0 EXPANSION: do n = 1, 25 rn = n + n termj = -termj / ( rl + rn + 1.0 ) / rn * z2 termn = termn / ( rl - rn + 1.0 ) / rn * z2 jl(l) = jl(l) + termj nl(l) = nl(l) + termn end do EXPANSION jl(l) = jl(l) * zj nl(l) = -nl(l) * zn hl(l) = jl(l) + nl(l) * CI zj = zj * z / ( rl + 3.0 ) zn = zn / z * ( rl + 1.0 ) end do SERIAL_L_LOOP end if rnm(:) = 1.0 PARALLEL_L_LOOP: do concurrent (l = 0: lmax) if ( abs( z ) >= l + 1.0 ) then hl(l) = 0.0 nl(l) = 0.0 SERIAL_M_LOOP: do m = 0, l hl(l) = hl(l) + (-1) ** m * rnm(l) nl(l) = nl(l) + rnm(l) rnm(l) = rnm(l) / ( m + 1.0 ) * ( l * ( l + 1 ) - m * ( m + 1 ) ) / ( CI * ( z + z ) ) end do SERIAL_M_LOOP hl(l) = hl(l) * (-CI) ** l * exp( CI * z ) / ( CI * z ) nl(l) = nl(l) * CI ** l * exp( -CI * z ) / ( -CI * z ) jl(l) = ( hl(l) + nl(l) ) / 2.0 nl(l) = ( hl(l) - jl(l) ) * (-CI) end if end do PARALLEL_L_LOOP end subroutine SphBesselComplex pure subroutine SphBesselReal ( jl, nl, hl, x, lmax ) real, intent(in) :: x integer, intent(in) :: lmax real, dimension(0:lmax), intent(out) :: jl, nl complex, dimension(0:lmax), intent(out) :: hl complex, dimension(0:lmax) :: jl_complex, nl_complex complex :: z z = x ! internal conversion from real to complex call SphBesselComplex( jl_complex, nl_complex, hl, z, lmax ) jl = jl_complex ! internal conversion from complex to real nl = nl_complex ! internal conversion from complex to real end subroutine SphBesselReal pure subroutine ModSphBesselComplex ( il, kl, z, lmax ) complex, intent(in) :: z integer, intent(in) :: lmax complex, dimension(0:lmax), intent(out) :: il, kl complex, dimension(0:lmax) :: nl integer :: l call SphBesselComplex( il, nl, kl, CI * z, lmax ) do l = 0, lmax il(l) = (-CI) ** l * il(l) kl(l) = - CI ** l * kl(l) end do end subroutine ModSphBesselComplex !another implementation of ModSphBesselComplex, where nl is allocated outside for performance reasons pure subroutine ModSphBesselComplex2 ( il, kl, nl, z, lmax ) complex, intent(in) :: z integer, intent(in) :: lmax complex, dimension(0:lmax), intent(out) :: il, kl, nl integer :: l call SphBesselComplex( il, nl, kl, CI * z, lmax ) do l = 0, lmax il(l) = (-CI) ** l * il(l) kl(l) = - CI ** l * kl(l) end do end subroutine ModSphBesselComplex2 pure subroutine ModSphBesselReal ( il, kl, x, lmax ) real, intent(in) :: x integer, intent(in) :: lmax real, dimension(0:lmax), intent(out) :: il, kl complex, dimension(0:lmax) :: jl, nl, hl integer :: l complex :: z z = CI * x call SphBesselComplex( jl, nl, hl, z, lmax ) do l = 0, lmax il(l) = (-CI) ** l * jl(l) kl(l) = - CI ** l * hl(l) end do end subroutine ModSphBesselReal end module m_SphBessel\n ... ... 
@@ -11,6 +11,7 @@ if (\\${FLEUR_USE_MPI}) mpi/mpi_bc_st.F90 mpi/mpi_bc_pot.F90 mpi/mpi_col_den.F90 mpi/mpi_reduce_potden.F90 mpi/mpi_make_groups.F90 mpi/mpi_dist_forcetheorem.F90 ) ... ...\n !-------------------------------------------------------------------------------- ! Copyright (c) 2016 Peter Grünberg Institut, Forschungszentrum Jülich, Germany ! This file is part of FLEUR and available as free software under the conditions ! of the MIT license as expressed in the LICENSE file in more detail. !-------------------------------------------------------------------------------- MODULE m_mpi_reduce_potden CONTAINS SUBROUTINE mpi_reduce_potden( mpi, stars, sphhar, atoms, input, vacuum, oneD, noco, potden ) ! It is assumed that, if some quantity is allocated for some mpi rank, that it is also allocated on mpi rank 0. #include\"cpp_double.h\" USE m_types USE m_constants USE m_juDFT IMPLICIT NONE TYPE(t_mpi), INTENT(IN) :: mpi TYPE(t_oneD), INTENT(IN) :: oneD TYPE(t_input), INTENT(IN) :: input TYPE(t_vacuum), INTENT(IN) :: vacuum TYPE(t_noco), INTENT(IN) :: noco TYPE(t_stars), INTENT(IN) :: stars TYPE(t_sphhar), INTENT(IN) :: sphhar TYPE(t_atoms), INTENT(IN) :: atoms TYPE(t_potden), INTENT(INOUT) :: potden INCLUDE 'mpif.h' INTEGER :: n INTEGER :: ierr(3) REAL, ALLOCATABLE :: r_b(:) EXTERNAL CPP_BLAS_scopy,CPP_BLAS_ccopy,MPI_REDUCE ! reduce pw n = stars%ng3 * size( potden%pw, 2 ) allocate( r_b(n) ) call MPI_REDUCE( potden%pw, r_b, n, MPI_DOUBLE_COMPLEX, MPI_SUM, 0, mpi%mpi_comm, ierr ) if( mpi%irank == 0 ) call CPP_BLAS_ccopy( n, r_b, 1, potden%pw, 1 ) deallocate( r_b ) ! reduce mt n = atoms%jmtd * ( sphhar%nlhd + 1 ) * atoms%ntype * input%jspins allocate( r_b(n) ) call MPI_REDUCE( potden%mt, r_b, n, MPI_DOUBLE, MPI_SUM, 0, mpi%mpi_comm, ierr ) if( mpi%irank == 0 ) call CPP_BLAS_scopy( n, r_b, 1, potden%mt, 1 ) deallocate( r_b ) ! reduce pw_w if( allocated( potden%pw_w ) ) then n = stars%ng3 * size( potden%pw_w, 2 ) allocate( r_b(n) ) call MPI_REDUCE( potden%pw_w, r_b, n, MPI_DOUBLE_COMPLEX, MPI_SUM, 0, mpi%mpi_comm, ierr ) if( mpi%irank == 0 ) call CPP_BLAS_ccopy( n, r_b, 1, potden%pw_w, 1 ) deallocate( r_b ) end if" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5819853,"math_prob":0.98531574,"size":281,"snap":"2020-24-2020-29","text_gpt3_token_len":91,"char_repetition_ratio":0.15884477,"word_repetition_ratio":0.0,"special_character_ratio":0.34163702,"punctuation_ratio":0.125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9814533,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-08T07:41:38Z\",\"WARC-Record-ID\":\"<urn:uuid:52797416-9ba2-4774-b324-244e686953f5>\",\"Content-Length\":\"1049922\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e110d6b1-cbf6-4298-8d39-56d58e4d360a>\",\"WARC-Concurrent-To\":\"<urn:uuid:70609269-ff62-4cd7-952a-8f0d89d4d4c4>\",\"WARC-IP-Address\":\"134.94.161.83\",\"WARC-Target-URI\":\"https://iffgit.fz-juelich.de/fleur/fleur/commit/6645cae4e13e72f06131351a67a3379cad118694?view=inline&w=1\",\"WARC-Payload-Digest\":\"sha1:E5EHMBDIQDIZPLQBPB3HRTACZ756DC4V\",\"WARC-Block-Digest\":\"sha1:2437HK32SVV5PRXATRSQLTH3SBG7TD43\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655896905.46_warc_CC-MAIN-20200708062424-20200708092424-00058.warc.gz\"}"}
http://forums.wolfram.com/mathgroup/archive/2007/Oct/msg00635.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Re: Integrate question\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg82307] Re: Integrate question\n• From: Jean-Marc Gulliet <jeanmarc.gulliet at gmail.com>\n• Date: Wed, 17 Oct 2007 04:04:44 -0400 (EDT)\n• Organization: The Open University, Milton Keynes, UK\n• References: <ff1pru\\$924\\[email protected]>\n\n```Oskar Itzinger wrote:\n\n> Mathematica 5.2 under IRIX complains that\n>\n> Integrate[x/(3 x^2 - 1)^3,{x,0,1}]\n>\n> doesn't converge on [0,1].\n>\n> However, Mathematica 2.1 under Windows gives the corrrect answer, (1/16).\n>\n> When did Mathematica lose the ability to do said integral?\n\nFWIW, Mathematica for Windows 5.2 as well as 6.0.1 cannot do it either,\nalthough it does not seem that hard to get the correct answer (the\nindefinite integral and the limits are evaluated in a breeze).\n\nIn:= int = Integrate[x/(3 x^2 - 1)^3, x]\n\nOut= -(1/(12 (-1 + 3 x^2)^2))\n\nIn:= Limit[int, x -> 1] - int /. x -> 0\n\nOut= 1/16\n\nRegards,\n--\nJean-Marc\n\n```\n\n• Prev by Date: Re: Is this normal for Limit?\n• Next by Date: Re: Logical evaluation\n• Previous by thread: Re: Integrate question\n• Next by thread: Re: Integrate question" ]
[ null, "http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif", null, "http://forums.wolfram.com/mathgroup/images/head_archive.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/2.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/7.gif", null, "http://forums.wolfram.com/mathgroup/images/search_archive.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80128133,"math_prob":0.8493576,"size":933,"snap":"2019-35-2019-39","text_gpt3_token_len":321,"char_repetition_ratio":0.10226049,"word_repetition_ratio":0.0,"special_character_ratio":0.3772776,"punctuation_ratio":0.2028302,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9678872,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-17T03:40:19Z\",\"WARC-Record-ID\":\"<urn:uuid:42ee2e7b-9e87-4fd4-bbaf-14154541ce3d>\",\"Content-Length\":\"42619\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:23f5f6d0-ad22-462f-a741-b91e94649563>\",\"WARC-Concurrent-To\":\"<urn:uuid:a72a7d26-597d-431d-b2cb-1f4b528ade22>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2007/Oct/msg00635.html\",\"WARC-Payload-Digest\":\"sha1:TXVLZJLO3AGUJ3VPBTE763ICW64Y62CR\",\"WARC-Block-Digest\":\"sha1:XWKTBLKCKZH4LLRPJPXOHDWJD4LWD4IK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573011.59_warc_CC-MAIN-20190917020816-20190917042816-00153.warc.gz\"}"}
https://deepai.org/machine-learning-glossary-and-terms/ohms-law
[ "", null, "", null, "# Ohm's Law\n\n## What is Ohm’s Law?\n\nOhm's Law states that current through a conductor across two points is directly proportional to voltage, given a constant resistance. The law is named after the German physicist Georg Ohm whose experiments inspired its framework. Ohm's Law is represented by the equation I = V/R where I is the current in amperes, V is the voltage measured between the two points of the conductor, and R is the resistance defined in Ohms. The law states that the resistance remains a constant, independent of the current. Ohm's Law is used as a general principle for understanding conductivity of materials over a varying range of electrical currents. Materials can be defined as either ohmic, or non-ohmic, depending on whether they follow the rules of the law.\n\n## Applications of Ohm's Law\n\nOhm's Law is sometimes exemplified and denoted in a few variations. For example, Ohm's Law can be defined as either:\nI = V/R\nV = IR\nR = V/I\nThe interchangeability of the definitions is sometimes displayed as a trifurcated triangle, with V on the top, and I and R denoted below. The interchangeable definition is displayed as:\n\nIt is common to see these various definitions in the process of circuit analysis. Circuit analysis is the voltages and currents through every component in a network. Each component can be defined as either ohmic, or non-ohmic.\n\n### Ohm's Law and Linear Approximations\n\nOhm's Law can be visualized using linear functions. If a component is truly ohmic, its resistance will not increase, regardless of any increase or decrease in voltage. In short, the ratio of V to I is constant, resulting in a straight line across a graph. If a component is non-ohmic, then the plotted line may curve, representing a non-constant ratio between the current and the voltage. The graphs below display the differences between ohmic, and non-ohmic components.\n\nBy Sbyrnes321 - Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=17718257" ]
[ null, "https://deepai.org/static/images/logo.png", null, "https://deepai.org/static/images/glossary-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93040156,"math_prob":0.97335416,"size":1938,"snap":"2021-04-2021-17","text_gpt3_token_len":432,"char_repetition_ratio":0.11995863,"word_repetition_ratio":0.01923077,"special_character_ratio":0.20485036,"punctuation_ratio":0.11590297,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9974499,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-23T08:42:12Z\",\"WARC-Record-ID\":\"<urn:uuid:37383583-3192-4ad0-98c8-7023abe00d32>\",\"Content-Length\":\"90653\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1f61de62-a8aa-48cc-ba90-1c5027fb69dd>\",\"WARC-Concurrent-To\":\"<urn:uuid:ea1b7c8a-818f-4032-b95f-8809fbe6fe13>\",\"WARC-IP-Address\":\"54.148.107.65\",\"WARC-Target-URI\":\"https://deepai.org/machine-learning-glossary-and-terms/ohms-law\",\"WARC-Payload-Digest\":\"sha1:UJB3CH4JRKY6GIY2LFGS5CF55C3Q4WCD\",\"WARC-Block-Digest\":\"sha1:YUS3XTCU7PYGLMQVOF56IADOFQ5NJ77K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703536556.58_warc_CC-MAIN-20210123063713-20210123093713-00717.warc.gz\"}"}
https://www.meritnation.com/ask-answer/question/prove-that-tan-1-1-4-tan-1-2-9-1-2-cos-1-3-5/inverse-trigonometric-functions/1754260
[ "# Prove that :tan-1(1/4) + tan-1(2/9) = 1/2 Cos-1 (3/5)\n\nTo prove:", null, "LHS:", null, "This can be written as:", null, "= RHS\n\nHence proved.\n\n• 67\n\nL.H.S = tan-1(1/4)+tan-1(2/9) = tan-1(1/4+2/9)/(1-1/4*2/9) = tan-1(1/2) = 1/2*2tan-1(1/2)\n\n= 1/2cos-1(1-1/4)/(1+1/4) = 1/2cos-1(3/5)\n\n• -12\nWhat are you looking for?" ]
[ null, "https://s3mn.mnimgs.com/img/shared/discuss_editlive/4080826/2013_01_17_17_38_10/mathmlequation4436461692688713025.png", null, "https://s3mn.mnimgs.com/img/shared/discuss_editlive/4080826/2013_01_17_17_38_10/mathmlequation4436461692688713025.png", null, "https://s3mn.mnimgs.com/img/shared/discuss_editlive/4080826/2013_01_17_17_38_10/mathmlequation7845104935358036474.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81535965,"math_prob":0.99964607,"size":352,"snap":"2023-40-2023-50","text_gpt3_token_len":179,"char_repetition_ratio":0.18390805,"word_repetition_ratio":0.0,"special_character_ratio":0.5255682,"punctuation_ratio":0.071428575,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9715679,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,10,null,10,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T01:34:07Z\",\"WARC-Record-ID\":\"<urn:uuid:83a05385-0303-4fc8-8895-27f4c8ababf2>\",\"Content-Length\":\"102275\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f4474506-803c-4e33-848b-3cb04c16222d>\",\"WARC-Concurrent-To\":\"<urn:uuid:b322f322-aaeb-4e8b-8c79-7657dd758109>\",\"WARC-IP-Address\":\"18.67.76.95\",\"WARC-Target-URI\":\"https://www.meritnation.com/ask-answer/question/prove-that-tan-1-1-4-tan-1-2-9-1-2-cos-1-3-5/inverse-trigonometric-functions/1754260\",\"WARC-Payload-Digest\":\"sha1:7JPENGYN6EA5VDM7I7I74G4RFNP26GN7\",\"WARC-Block-Digest\":\"sha1:IEKU3PE62SLDICOMMIELTNIH32RDDZX4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506539.13_warc_CC-MAIN-20230923231031-20230924021031-00414.warc.gz\"}"}
https://studysoup.com/tsg/143646/chemistry-the-central-science-12-edition-chapter-4-problem-3pe
[ "×\n×\n\n# (a) What compound precipitates when aqueous solutions of", null, "ISBN: 9780321696724 27\n\n## Solution for problem 3PE Chapter 4\n\nChemistry: The Central Science | 12th Edition\n\n• Textbook Solutions\n• 2901 Step-by-step solutions solved by professors and subject experts\n• Get 24/7 help from StudySoup virtual teaching assistants", null, "Chemistry: The Central Science | 12th Edition\n\n4 5 0 325 Reviews\n16\n2\nProblem 3PE\n\n(a) What compound precipitates when aqueous solutions of Fe2(SO4)3 and LiOH are mixed?\n\n(b) Write a balanced equation for the reaction.\n\n(c) Will a precipitate form when solutions of Ba(NO3)2 and KOH are mixed?\n\nStep-by-Step Solution:\nStep 1 of 3\nStep 2 of 3\n\nStep 3 of 3\n\n##### ISBN: 9780321696724\n\nThis textbook survival guide was created for the textbook: Chemistry: The Central Science, edition: 12. This full solution covers the following key subjects: Mixed, Solutions, lioh, equation, form. This expansive textbook survival guide covers 49 chapters, and 5471 solutions. The full step-by-step solution to problem: 3PE from chapter: 4 was answered by , our top Chemistry solution expert on 04/03/17, 07:58AM. The answer to “(a) What compound precipitates when aqueous solutions of Fe2(SO4)3 and LiOH are mixed? (b) Write a balanced equation for the reaction. (c) Will a precipitate form when solutions of Ba(NO3)2 and KOH are mixed?” is broken down into a number of easy to follow steps, and 34 words. Since the solution to 3PE from 4 chapter was answered, more than 377 students have viewed the full step-by-step answer. Chemistry: The Central Science was written by and is associated to the ISBN: 9780321696724.\n\nUnlock Textbook Solution" ]
[ null, "https://studysoup.com/cdn/48cover_2421674", null, "https://studysoup.com/cdn/48cover_2421674", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93144894,"math_prob":0.64233935,"size":1499,"snap":"2020-10-2020-16","text_gpt3_token_len":371,"char_repetition_ratio":0.15317726,"word_repetition_ratio":0.39148936,"special_character_ratio":0.24883255,"punctuation_ratio":0.12627986,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97189766,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-29T00:56:01Z\",\"WARC-Record-ID\":\"<urn:uuid:e009239b-1b3d-47cc-91ce-63868e993f8d>\",\"Content-Length\":\"78798\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e3a61a1c-d668-4faf-b803-73c39d3c288b>\",\"WARC-Concurrent-To\":\"<urn:uuid:73753d83-8f3b-4b58-a67c-ec646ace159d>\",\"WARC-IP-Address\":\"54.189.254.180\",\"WARC-Target-URI\":\"https://studysoup.com/tsg/143646/chemistry-the-central-science-12-edition-chapter-4-problem-3pe\",\"WARC-Payload-Digest\":\"sha1:U3VZOCFHTCHBRI3IHDGAS67N2U3KCR5A\",\"WARC-Block-Digest\":\"sha1:7CAPSIWKSYMVTBZ37SKHIBR5MKAFVFJV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875148163.71_warc_CC-MAIN-20200228231614-20200229021614-00256.warc.gz\"}"}
https://stats.stackexchange.com/questions/10407/probability-for-finding-a-double-as-likely-event
[ "# Probability for finding a double-as-likely event\n\nRepeating an experiment with $n$ possible outcomes $t$ times independently, where all but one outcomes have probability $\\frac{1}{n+1}$ and the other outcome has the double probability $\\frac{2}{n+1}$, is there a good approximate formula for the probability that the outcome with the higher probability happens more often than any other one?\n\nFor me, $n$ is typically some hundreds, and $t$ is chosen depending on $n$ such that the probability that the most likely outcome occurs most often is between 10% and 99.999%.\n\nIn the moment I use a small program that calculates a crude approximation by assuming that the counts for how often each outcome shows up in $t$ trials are independent and approximate the counts using the Poisson distribution. How can I improve on this?\n\nEDIT: I'd strongly appreciate comments/votes on the two (maybe soon more) answers given.\n\nEDIT 2: As none of the two answers is convincing me, but as I don't want to let the 100 points bounty to vanish (and as nobody voted for/against one of the two answers), I'll just pick one of the answers. I'd still appreciate other answers.\n\n• With large n the independent Poisson approximation is probably fine. Did you try simulation studies of how well the formula is working? – Aniko May 6 '11 at 15:33\n• This question is closely related to the Generalized Birthday Problem. en.wikipedia.org/wiki/Birthday_problem – charles.y.zheng May 7 '11 at 20:40\n• @Aniko: I haven't run extensive simulations yet. But the examples I tried are roughly correct. – j.p. May 8 '11 at 16:51\n• The central difficulty in your problem (and in the birthday problem) is the difficulty of determining the distribution of the maximum (supremum norm) of a multinomial random variable, which involves summing over partitions. – charles.y.zheng May 8 '11 at 20:57\n• A hard bound on the Poisson model is as follows. Let $Z_1 \\sim \\mathrm{Poi}(2t/(n+1))$ and $Z_i \\sim \\mathrm{Poi}(t/(n+1))$ for $2 \\leq i \\leq n$. All the $Z_i$ are mutually independent. Then $\\mathbb{P}(Z_1 > \\max_{i\\geq 2} Z_i) \\geq 1 - (n-1) \\exp(-c t / (n+1))$ where $c = (\\sqrt{2}-1)^2$. As you can see, it only works well for $t \\geq 6 n \\log n$ or so. – cardinal May 15 '11 at 15:22\n\n## 3 Answers\n\nPartition the outcomes by the frequency of occurrences $x$ of the \"double outcome\", $0 \\le x \\le t$. Conditional on this number, the distribution of the remaining $t-x$ outcomes is multinomial across $n-1$ equiprobable bins. Let $p(t-x, n-1, x)$ be the chance that no bin out of $n-1$ equally likely ones receives more than $x$ outcomes. The sought-for probability therefore equals\n\n$$\\sum_{x=0}^{t} \\binom{t}{x}\\left(\\frac{2}{n+1}\\right)^x \\left(\\frac{n-1}{n+1}\\right)^{t-x} p(t-x,n-1,x).$$\n\nIn Exact Tail Probabilities and Percentiles of the Multinomial Maximum, Anirban DasGupta points out (after correcting typographical errors) that $p(n,K,x)K^n/n!$ equals the coefficient of $\\lambda^n$ in the expansion of $\\left(\\sum_{j=0}^{x}\\lambda^j/j!\\right)^K$ (using his notation). For the values of $t$ and $n$ involved here, this coefficient can be computed in at most a few seconds (making sure to discard all $O(\\lambda^{n+1})$ terms while performing the successive convolutions needed to obtain the $K^{\\text{th}}$ power). 
(I checked the timing and corrected the typos by reproducing DasGupta's Table 4, which displays the complementary probabilities $1 - p(n,K,x)$, and extending it to values where $n$ and $K$ are both in the hundreds.)\n\nQuoting a theorem of Kolchin et al., DasGupta provides an approximation for the computationally intensive case where $t$ is substantially larger than $n$. Between the exact computation and the approximation, it looks like all possibilities are covered.\n\n• Thanks for the answer! Looks very good, but I have to check the details. What do you mean with \"I ... corrected the typos by reproducing DasGupta's Table 4, ...\"? (By the way, if you had answered 2-3 hours earlier, you'd save me some headaches about what to do with my bounty.) – j.p. May 16 '11 at 14:15\n• @pul His inequalities are in the wrong direction: what he claims are $p(n,K,x)$ are really $1 - p(n,K,x-1)$. Sorry about the bounty problem: I knew how to answer this one when it first appeared but needed to check the results first and had no time to do anything about it until the weekend. – whuber May 16 '11 at 15:41\n\nI agree with some comments, in that the Poisson approximation sounds nice here (not a 'crude' approximation). It should be asymptotically exact, and it seems the most reasonable thing to do, as an exact analytic solution seems difficult.\n\nAs an intermediate alternative (if you really need it), I suggest a first order correction to the Poisson approximation, in the following way (I've done something similar some time ago, and it worked).\n\nAs suggested by a comment, your model is (not approximately but exactly) Poisson if we condition on the sum. That is:\n\nLet $X_t$ ($t$ is a parameter here) be a vector of $n$ independent Poisson variables, the first one with $\lambda = 2t/(n+1)$, the others with $\lambda = t/(n+1)$. Let $s=\sum x$, so $E(s)=t$. It is clear that $X_t$ is not equivalent to the other model (because our model is restricted to $s=t$), but it is a good approximation. Further, the distribution of $X_t | s$ is equivalent to our model. Indeed, we can write\n\n$\displaystyle P(X_t) = \sum_s P(X_t | s) P(s)$\n\nThis can also be written for the event in consideration (that $x_1$ is the maximum).\n\nWe know how to compute the LHS, and $P(s)$, but we are interested in the other term. Our first order Poisson approximation comes from assuming that $P(s)$ concentrates about the mean so that it can be assimilated to a delta, and then $P(X_t) \approx P(X_t | s=t)$\n\nTo refine the approximation, we can see the above as a convolution of two functions: our unknown $P(X_t | s)$, which we assume smooth around $s=t$, and a quasi delta function, say a Gaussian with small variance. Now, we have our first order approximation (for continuous variables):\n\n$h(x) = g(x) * N(x_0,\sigma^2)$ (convolution)\n\n$h(x_0) \approx g(x_0) + g''(x_0)\sigma^2/2$\n\n$g(x_0) \approx h(x_0) - h''(x_0)\sigma^2/2$\n\nApplying this to the previous equation can lead to a refined approximation to our desired probability.\n\n• Could you please tell me how to find $\sigma^2$? – j.p. May 15 '11 at 15:10\n• $\sigma^2$ is the variance of $s$, which is the sum of $n$ independent Poisson variables – leonbloy May 15 '11 at 15:39\n• @leonbloy: OK, in our case we have therefore $\sigma^2 = t$ (thanks!). And how do I get h\"? – j.p. 
May 15 '11 at 15:56\n• I'd approximate the second derivative by the second difference: $A_{t+1}-2A_t+A_{t-1}$, the probability of your 'success' event evaluated at different values of $t$ – leonbloy May 15 '11 at 16:30\n• @leonbloy: I'm not really convinced of your answer (yet???), but before letting the bounty points vanish into nowhere, I'll accept your answer. – j.p. May 15 '11 at 17:00\n\nJust a word of explanation: Part out of curiosity, part for lack of a better, more theoretical method, I approached the problem in a completely empirical/inductive way. I'm aware that there is the risk of getting stuck in a dead end without gaining much insight, but I thought I'll just present what I got so far anyway, in case it is useful to someone.\n\nStarting by computing the exact probabilities for $n,t\in\{1,...,8\}$ we get", null, "Due to the underlying multinomial distribution, multiplying the entries in the table by $(n+1)^t$ leaves us with a purely integer table:", null, "Now we find that there is a polynomial in $n$ for every column which acts as the sequence function for that column:", null, "Dividing the sequence functions by $(n+1)^t$ gives us sequence functions for the original probabilities for the first $t$'s. These rational polynomials can be simplified by decomposing them into partial fractions and substituting $x$ for $1/(n+1)$, leaving us with:", null, "or as a coefficient table", null, "Starting with the $x^2$ column there are sequence functions for these coefficients again:", null, "That's how far I got. There are definitely exploitable patterns here that allow sequence functions to occur, but I'm not sure if there is a nice closed form solution for these sequence functions.\n\n• Thanks for the effort! I'm not sure what your results imply for $n$ and $t$ in the hundreds. – j.p. May 15 '11 at 15:14\n• I actually tried to approximate the probability for bigger $n,t$ by just taking the $x^2$ part of the series into account, but that only works for small probabilities; for the probabilities that you're interested in, the approximation is way off. – Thies Heidecke May 15 '11 at 15:55\n• I'm convinced of neither your answer (as small $n$ don't seem to help for the $n$'s I need) nor the other (as I don't understand (yet??) its correctness/helpfulness). As I have more hope to get something out of the other answer and as I don't want to let the 100 points vanish, I'll probably accept the other answer. Sorry for not picking yours! – j.p. May 15 '11 at 16:55" ]
[ null, "https://i.stack.imgur.com/FczWE.png", null, "https://i.stack.imgur.com/Nn20U.png", null, "https://i.stack.imgur.com/RMLVI.png", null, "https://i.stack.imgur.com/RxbR5.png", null, "https://i.stack.imgur.com/epKgZ.png", null, "https://i.stack.imgur.com/FtVpS.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.96620363,"math_prob":0.9957738,"size":1265,"snap":"2019-35-2019-39","text_gpt3_token_len":289,"char_repetition_ratio":0.13164155,"word_repetition_ratio":0.0,"special_character_ratio":0.2347826,"punctuation_ratio":0.079166666,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99984884,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-26T00:47:34Z\",\"WARC-Record-ID\":\"<urn:uuid:48034577-be57-433d-a5f3-aa5dd1b5295e>\",\"Content-Length\":\"166992\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d57d44c4-6558-4f33-a03b-40fefae4d213>\",\"WARC-Concurrent-To\":\"<urn:uuid:d0f1e264-d4b7-4dcb-8041-3dedf3eb1aae>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/10407/probability-for-finding-a-double-as-likely-event\",\"WARC-Payload-Digest\":\"sha1:C6D7VVTAOTPWCDVGNJROARCZFVMA3CIL\",\"WARC-Block-Digest\":\"sha1:4UXMN5FAKZJYZPUEFXUH3O6V2JXQUN4V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027330913.72_warc_CC-MAIN-20190826000512-20190826022512-00545.warc.gz\"}"}
https://byjus.com/rd-sharma-solutions/class-8-maths-chapter-18-practical-geometry/
[ "", null, "# RD Sharma Solutions for Class 8 Maths Chapter 18 Practical Geometry (Constructions)\n\nAs the chapter is about Constructions, in Chapter 18 of RD Sharma Class 8 Maths, we shall learn how to construct a quadrilateral with given elements. Students are provided with exercise-wise solutions to help understand the concepts clearly from the exam point of view. The solutions are prepared by the experienced faculty team at BYJU’S, who has explained the concepts in detail which is very helpful for preparing for their board exams. Students can download RD Sharma Class 8 PDF from the links given below.\n\nChapter 18- Practical Geometry (Constructions) contains five exercises and the RD Sharma Class 8 Solutions present in this page provide solutions to the questions present in each exercise. Now, let us have a look at the concepts discussed in this chapter.\n\n• Constructing a quadrilateral when four sides and one diagonal are given.\n• Constructing a quadrilateral when its three sides and the two diagonals are given.\n• Constructing a quadrilateral when its four sides and one angle are given.\n• Constructing a quadrilateral when its three sides and their included angles are given.\n• Constructing a quadrilateral when its three angles and their two included sides are given.\n\n## Download the Pdf of RD Sharma Solutions for Class 8 Maths Chapter 18 Practical Geometry (Constructions)", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "### Access answers to Maths RD Sharma Solutions For Class 8 Chapter 18 Practical Geometry (Constructions)\n\nEXERCISE 18.1 PAGE NO: 18.4\n\n1. Construct a quadrilateral ABCD in which AB = 4.4 cm, BC = 4 cm, CD = 6.4 cm, DA = 3.8 cm and BD = 6.6 cm.\n\nSolution:\n\nThe given details are AB = 4.4 cm, BC = 4 cm, CD = 6.4 cm, DA = 3.8 cm and BD = 6.6 cm.\n\nDivide the quadrilateral into two triangles i.e., ΔABD and ΔBCD\n\nStep 1- By using SSS congruency rule, Draw line BD of length 6.6 cm.\n\nStep 2- Cut an arc with B as the centre and radius BC = 4cm. Do the same by taking D as centre and radius CD = 6.4 cm.\n\nStep 3- Now join the intersection point from B and D and label it as C.\n\nStep 4- Now for vertex A, cut an arc by taking B as the center and radius BA = 4.4cm. Do the same by taking D as center and radius DA = 3.8cm.\n\nStep 5- Join the intersection point from B and D and label it as A.", null, "2. Construct a quadrilateral ABCD in which AB = BC = 5.5 cm, CD = 4 cm, DA = 6.3 cm, AC = 9.4 cm Measure BD.\n\nSolution:\n\nThe given details are AB = BC = 5.5 cm, CD = 4 cm, DA = 6.3 cm, AC = 9.4 cm Measure BD.\n\nStep 1- Draw a line segment AB = 5.5cm\n\nStep 2- With B as center and radius BC = 5.5cm cut an arc. Mark that point as C.\n\nStep 3- With A as center and radius AC = 9.4cm cut an arc to intersect at point C.\n\nStep 4- With C as center and radius CD = 4cm cut an arc. Mark that point as D.\n\nStep 5- With A as center and radius AD = 6.3cm cut an arc to intersect at point D.\n\nStep 6- Now join BC, CD and AD\n\nMeasure of BD is 5.1cm.", null, "3. Construct a quadrilateral XYZW in which XY = 5 cm, YZ = 6 cm, ZW = 7 cm, WX = 3 cm and XZ = 9 cm.\n\nSolution:\n\nThe given details are XY = 5cm, YZ = 6cm, ZW = 7cm, WX = 3cm and XZ = 9cm.\n\nStep 1- Draw line XZ of length 9cm.\n\nStep 2- Cut an arc by taking X as the centre radius XY = 5cm. 
Do the same by taking Z as centre and radius ZY = 6cm.\n\nStep 3- Now join the intersection point from X and Z and label it as Y.\n\nStep 4- For vertex W, cut an arc by taking X as the center and radius XW = 3cm. Similarly, taking Z as the center and radius ZW = 7cm.\n\nStep 5- Join the intersection point from X and Z and label it as W.", null, "4. Construct a parallelogram PQRS such that PQ = 5.2 cm, PR = 6.8 cm, and QS = 8.2 cm.\n\nSolution:\n\nThe given details are PQ = 5.2 cm, PR = 6.8 cm, and QS = 8.2 cm.\n\nSteps to construct a parallelogram:\n\nStep 1- Draw line QS of length 8.2 cm.\n\nStep 2- Divide the line segment QS into half i.e 4.1 cm and mark that point as O. Now by taking O as center cut an arc on both the sides of O with a radius of 3.4cm each. And mark that points as P and R.\n\nStep 3- cut an arc by taking Q as a center and radius QR = 5.2cm to intersect with point R.\n\nStep 4- cut an arc by taking Q as a center and radius QP = 5.2cm to intersect with point P.\n\nStep 5- Join sides PQ, PS, QR and RS.", null, "5. Construct a rhombus with side 6 cm and one diagonal 8 cm. Measure the other diagonal.\n\nSolution:\n\nThe given details are side 6 cm and one diagonal 8 cm.\n\nWe know all the sides of a rhombus are equal and diagonals bisect each other.\n\nSteps to construct a rhombus:\n\nStep 1- Draw a line XZ of length 8 cm.\n\nStep 2- By taking a radius of 6 cm, cut an arc by taking X as the center. Do the same by taking Z as centre with radius of 6 cm.\n\nStep 3- Now join the intersection point from X and Z and label it as Y.\n\nStep 4- Now for vertex W, by taking radius of 6 cm and cut an arc by taking X as the center. Do the same by taking Z as center and radius of 6 cm.\n\nStep 5- Join the intersection point from X and Z and label it as W.\n\nStep 6- Now join XY, XW, XZ and ZY", null, "6. Construct a kite ABCD in which AB = 4 cm, BC = 4.9 cm, AC = 7.2 cm.\n\nSolution:\n\nThe given details are AB = 4 cm, BC = 4.9 cm, AC = 7.2 cm.\n\nSteps to construct a kite:\n\nStep 1- Draw line AC of length 7.2 cm.\n\nStep 2- By taking a radius of 4 cm and cut an arc by taking A as the center. Do the same by taking C as centre with radius of 4.9 cm.\n\nStep 3- Now join the intersection point from A and C and label it as B.\n\nStep 4- Now for vertex D, cut an arc by taking A as the center. Do the same by taking C as center with radius of 4.9 cm.\n\nStep 5- Join the intersection point from A and C and label it as D.", null, "7. Construct, if possible, a quadrilateral ABCD given AB = 6 cm, BC = 3.7 cm, CD = 5.7 cm, AD = 5.5 cm and BD = 6.1 cm. Give reasons for not being able to construct it, if you cannot.\n\nSolution:\n\nThe given details are AB = 6 cm, BC = 3.7 cm, CD = 5.7 cm, AD = 5.5 cm and BD = 6.1 cm.\n\nStep 1- Draw a line AB of length 6cm.\n\nStep 2- With A as a center cut an arc of radius 5.5cm and mark that point as D.\n\nStep 3- With B as a center cut an arc of radius 6.1cm to intersect with point D.\n\nStep 4- With B as a center cut an arc of radius 3.7cm and mark that point as C.\n\nStep 5- With D as a center cut an arc of radius 5.7cm to intersect with point C.\n\nStep 6- Now join AD, BD, BC and DC", null, "8. Construct, if possible, a quadrilateral ABCD in which AB = 6 cm, BC = 7 cm, CD = 3 cm, AD = 5.5 cm and AC = 11 cm. Give reasons for not being able to construct, if you cannot. 
(Not possible, because in triangle ACD, AD + CD<AC).\n\nSolution:\n\nThe given details are AB = 6 cm, BC = 7 cm, CD = 3 cm, AD = 5.5 cm and AC = 11 cm.\n\nSuch a Quadrilateral cannot be constructed because, in a triangle, the sum of the length of its two sides must be greater than that of the third side.\n\nIn triangle ACD,\n\nAD + CD = 5.5 + 3 = 8.5 cm\n\nGiven, AC = 11 cm\n\nSo, AD + CD < AC which is not possible.\n\n∴ The construction is not possible\n\nEXERCISE 18.2 PAGE NO: 18.6\n\n1. Construct a quadrilateral ABCD in which AB = 3.8 cm, BC = 3.0 cm, AD = 2.3 cm, AC = 4.5 cm and BD = 3.8 cm.\n\nSolution:\n\nThe given details are AB = 3.8 cm, BC = 3.0 cm, AD = 2.3 cm, AC = 4.5 cm and BD = 3.8 cm.\n\nStep 1- Draw a line AC = 6cm.\n\nStep 2- Cut an arc of radius 3.8cm with A as the center to mark that point as B.\n\nStep 3- Cut an arc of radius 3cm with C as the center to intersect with point B.\n\nStep 4- Cut an arc of radius 3.8cm with B as the center to mark that point as D.\n\nStep 5- Cut an arc of radius 2.3cm with A as the center to intersect with point D.\n\nStep 6- Now join AB, BD, AD and DC", null, "2. Construct a quadrilateral ABCD in which BC = 7.5 cm, AC = AD = 6 cm, CD = 5 cm and BD = 10 cm.\n\nSolution:\n\nThe given details are BC = 7.5 cm, AC = AD = 6 cm, CD = 5 cm and BD = 10 cm.\n\nStep 1- Draw a line AC = 6cm.\n\nStep 2- Cut an arc of radius 6cm with A as the center to mark that point as D.\n\nStep 3- Cut an arc of radius 5cm with C as the center to intersect at point D.\n\nStep 4- Cut an arc of radius 10cm with D as the center to mark that point as B.\n\nStep 5- Cut an arc of radius 7.5cm with C as the center to intersect at point B.\n\nStep 6- Now join AD, CD, DB and AB", null, "3. Construct a quadrilateral ABCD when AB = 3 cm, CD = 3 cm, DA = 7.5 cm, AC = 8 cm and BD = 4 cm.\n\nSolution:\n\nThe given details are AB = 3 cm, CD = 3 cm, DA = 7.5 cm, AC = 8 cm and BD = 4 cm.\n\nConsider a triangle ABD from the given data,\n\nSo, AB + BD = 3+4 = 7cm\n\nWe know that sum of lengths of two sides of a triangle is always greater than the third side.\n\n∴ The construction is not possible.\n\n4. Construct a quadrilateral ABCD given AD = 3.5 cm, BC = 2.5 cm, CD = 4.1 cm, AC = 7.3 cm and BD = 3.2 cm.\n\nSolution:\n\nThe given details are AD = 3.5 cm, BC = 2.5 cm, CD = 4.1 cm, AC = 7.3 cm and BD = 3.2 cm.\n\nStep 1- Draw a line CD = 4.1cm\n\nStep 2- Cut an arc of radius 7.3cm with C as the center to mark that point as A.\n\nStep 3- Cut an arc of radius 3.5cm with D as the center to intersect at point A.\n\nStep 4- Cut an arc of radius 3.2cm with D as the center to mark that point as B.\n\nStep 5- Cut an arc of radius 2.5cm with C as the center to intersect at point B.\n\nStep 6- Now join CA, DA, DB, CB and AB", null, "5. Construct a quadrilateral ABCD given AD = 5 cm, AB = 5.5 cm, BC = 2.5 cm, AC = 7.1 cm and BD = 8 cm.\n\nSolution:\n\nThe given details are AD = 5 cm, AB = 5.5 cm, BC = 2.5 cm, AC = 7.1 cm and BD = 8 cm.\n\nStep 1- Draw a line AB = 5.5cm\n\nStep 2- Cut an arc of radius 2.5cm with B as the center to mark that point as C.\n\nStep 3- Cut an arc of radius 7.1cm with A as the center to intersect at point C.\n\nStep 4- Cut an arc of radius 8cm with B as the center to mark that point as D.\n\nStep 5- Cut an arc of radius 5cm with A as the center to intersect at point D.\n\nStep 6- Now join BC, AC, BD, AD and CD", null, "6. 
Construct a quadrilateral ABCD in which BC = 4 cm, CA = 5.6 cm, AD = 4.5 cm, CD = 5 cm and BD = 6.5 cm.\n\nSolution:\n\nThe given details are BC = 4 cm, CA = 5.6 cm, AD = 4.5 cm, CD = 5 cm and BD = 6.5 cm.\n\nStep 1- Draw a line BC = 4cm\n\nStep 2- Cut an arc of radius 6.5cm with B as the center to mark that point as D.\n\nStep 3- Cut an arc of radius 5cm with C as the center to intersect at point D.\n\nStep 4- Cut an arc of radius 5.6cm with C as the center to mark that point as A.\n\nStep 5- Cut an arc of radius 4.5cm with D as the center to intersect at point A.\n\nStep 6- Now join BD, CD, CA, DA and AB", null, "EXERCISE 18.3 PAGE NO: 18.8\n\n1. Construct a quadrilateral ABCD in which AB = 3.8 cm, BC = 3.4 cm, CD = 4.5 cm, AD = 5 cm and ∠B = 80°.\n\nSolution:\n\nThe given details are AB = 3.8 cm, BC = 3.4 cm, CD = 4.5 cm, AD = 5 cm and ∠B = 80°.\n\nStep 1- Draw a line AB = 3.8cm\n\nStep 2- Construct and angle of 80o at B.\n\nStep 3- Cut an arc of radius 3.4cm with B as the center to mark that point as C.\n\nStep 4- Cut an arc of radius 5cm with A as the center to mark that point as D.\n\nStep 5- Cut an arc of radius 4.5cm with C as the center to intersect at point D.\n\nStep 6- Now join BC, AD and CD", null, "2. Construct a quadrilateral ABCD given that AB = 8 cm, BC = 8 cm, CD = 10 cm, AD = 10 cm and ∠A = 45°.\n\nSolution:\n\nThe given details are AB = 8 cm, BC = 8 cm, CD = 10 cm, AD = 10 cm and ∠A = 45°.\n\nStep 1- Draw a line AB = 8cm\n\nStep 2- Construct and angle of 45o at A.\n\nStep 3- Cut an arc of radius 10cm with A as the center to mark that point as D.\n\nStep 4- Cut an arc of radius 10cm with D as the center to mark that point as C.\n\nStep 5- Cut an arc of radius 8cm with B as the center to intersect at point C.\n\nStep 6- Now join AD, DC and BC", null, "3. Construct a quadrilateral ABCD in which AB = 7.7 cm, BC = 6.8 cm, CD = 5.1 cm, AS = 3.6 cm and ∠C = 120°.\n\nSolution:\n\nThe given details are AB = 7.7 cm, BC = 6.8 cm, CD = 5.1 cm, AS = 3.6 cm and ∠C = 120°.\n\nStep 1- Draw a line DC = 5.1cm\n\nStep 2- Construct and angle of 120o at C.\n\nStep 3- Cut an arc of radius 6.8cm with C as the center to mark that point as B.\n\nStep 4- Cut an arc of radius 7.7cm with B as the center to mark that point as A.\n\nStep 5- Cut an arc of radius 3.6cm with D as the center to intersect at point A.\n\nStep 6- Now join CB, BA and DA", null, "4. Construct a quadrilateral ABCD in which AB = BC = 3 cm, AD = CD = 5 cm and ∠B = 120°.\n\nSolution:\n\nThe given details are AB = BC = 3 cm, AD = CD = 5 cm and ∠B = 120°.\n\nStep 1- Draw a line AB = 3cm\n\nStep 2- Construct and angle of 120o at B.\n\nStep 3- Cut an arc of radius 3cm with B as the center to mark that point as C.\n\nStep 4- Cut an arc of radius 5cm with C as the center to mark that point as D.\n\nStep 5- Cut an arc of radius 5cm with A as the center to intersect at point D.\n\nStep 6- Now join BC, CD and DA", null, "5. Construct a quadrilateral ABCD in which AB = 2.8 cm, BC = 3.1 cm, CD = 2.6 cm and DA = 3.3 cm and ∠A = 60°.\n\nSolution:\n\nThe given details are AB = 2.8 cm, BC = 3.1 cm, CD = 2.6 cm and DA = 3.3 cm and ∠A = 60°.\n\nStep 1- Draw a line AB = 2.8cm\n\nStep 2- Construct and angle of 60o at A.\n\nStep 3- Cut an arc of radius 3.3cm with A as the center to mark that point as D.\n\nStep 4- Cut an arc of radius 2.6cm with D as the center to mark that point as C.\n\nStep 5- Cut an arc of radius 3.1cm with B as the center to intersect at point C.\n\nStep 6- Now join AD, DC and CB", null, "6. 
Construct a quadrilateral ABCD in which AB = BC = 6 cm, AD = DC = 4.5 cm and ∠B = 120°.\n\nSolution:\n\nThe given details are AB = BC = 6 cm, AD = DC = 4.5 cm and ∠B = 120°.\n\nStep 1- Draw a line AB = 6cm\n\nStep 2- Construct and angle of 120o at B.\n\nStep 3- Cut an arc of radius 6cm with B as the center to mark that point as C.\n\nHere, AC is about 10.3cm in length which is greater than AD + CD = 4.5+4.5=9cm\n\nWe know that sum of the two sides of a triangle is always greater than the third side.\n\n∴ Construction is not possible.", null, "EXERCISE 18.4 PAGE NO: 18.10\n\n1. Construct a quadrilateral ABCD in which AB = 6 cm, BC = 4 cm, CD = 4 cm, ∠B = 95° and ∠C = 90°.\n\nSolution:\n\nThe given details are AB = 6 cm, BC = 4 cm, CD = 4 cm, ∠B = 95° and ∠C = 90°.\n\nStep 1- Draw a line BC = 4cm\n\nStep 2- Construct and angle of 95o at B.\n\nStep 3- Cut an arc of radius 6cm with B as the center to mark that point as A.\n\nStep 4- Construct and angle of 90o at C.\n\nStep 5- Cut an arc of radius 4cm with C as the center to mark that point as D.\n\nStep 6- Now join BA, CD and AD", null, "2. Construct a quadrilateral ABCD where AB = 4.2cm, BC = 3.6 cm, CD = 4.8 cm, ∠B = 30° and ∠C = 150°.\n\nSolution:\n\nThe given details are AB = 4.2cm, BC = 3.6 cm, CD = 4.8 cm, ∠B = 30° and ∠C = 150°.\n\nStep 1- Draw a line BC = 3.6cm\n\nStep 2- Construct and angle of 30o at B.\n\nStep 3- Cut an arc of radius 4.2cm with B as the center to mark that point as A.\n\nStep 4- Construct and angle of 150o at C.\n\nStep 5- Cut an arc of radius 4.8cm with C as the center to mark that point as D.\n\nStep 6- Now join BA, CD and AD", null, "3. Construct a quadrilateral PQRS in which PQ = 3.5 cm, QR = 2.5 cm, RS = 4.1 cm, ∠Q = 75° and ∠R = 120°.\n\nSolution:\n\nThe given details are PQ = 3.5 cm, QR = 2.5 cm, RS = 4.1 cm, ∠Q = 75° and ∠R = 120°.\n\nStep 1- Draw a line QR = 2.5cm\n\nStep 2- Construct and angle of 75o at Q.\n\nStep 3- Cut an arc of radius 3.5cm with Q as the center to mark that point as P.\n\nStep 4- Construct and angle of 120o at R.\n\nStep 5- Cut an arc of radius 4.1cm with R as the center to mark that point as S.\n\nStep 6- Now join QP, RS and PS", null, "4. Construct a quadrilateral ABCD given BC = 6.6 cm, CD = 4.4 cm, AD = 5.6 cm ∠D = 100° and ∠C = 95\n\nSolution:\n\nThe given details are BC = 6.6 cm, CD = 4.4 cm, AD = 5.6 cm ∠D = 100° and ∠C = 95\n\nStep 1- Draw a line DC = 4.4cm\n\nStep 2- Construct and angle of 100o at D.\n\nStep 3- Cut an arc of radius 5.6cm with D as the center to mark that point as A.\n\nStep 4- Construct and angle of 95o at C.\n\nStep 5- Cut an arc of radius 6.6cm with C as the center to mark that point as B.\n\nStep 6- Now join DA, CB and AB", null, "5. Construct a quadrilateral ABCD in which AD = 3.5 cm, AB = 4.4 cm, BC = 4.7 cm, ∠A = 125° and ∠B = 120°.\n\nSolution:\n\nThe given details are AD = 3.5 cm, AB = 4.4 cm, BC = 4.7 cm, ∠A = 125° and ∠B = 120°.\n\nStep 1- Draw a line AB = 4.4cm\n\nStep 2- Construct and angle of 125o at A.\n\nStep 3- Cut an arc of radius 3.5cm with A as the center to mark that point as D.\n\nStep 4- Construct and angle of 120o at B.\n\nStep 5- Cut an arc of radius 4.7cm with B as the center to mark that point as C.\n\nStep 6- Now join AD, BC and CD", null, "6. 
Construct a quadrilateral PQRS in which ∠Q = 45° and ∠R = 90°, QR = 5 cm, PQ = 9 cm and RS = 7 cm.\n\nSolution:\n\nThe given details are ∠Q = 45° and ∠R = 90°, QR = 5 cm, PQ = 9 cm and RS = 7 cm.\n\nStep 1- Draw a line QR = 5cm\n\nStep 2- Construct and angle of 45o at Q.\n\nStep 3- Cut an arc of radius 9cm with Q as the center to mark that point as P.\n\nStep 4- Construct and angle of 90o at R.\n\nStep 5- Cut an arc of radius 7cm with R as the center to mark that point as S.\n\nStep 6- Now join QP, RS\n\nSince the line segment QP and RS are not intersecting at each other, quadrilateral cannot be formed.", null, "7. Construct a quadrilateral ABCD in which AB = BC = 3 cm, AD = 5 cm, ∠A = 90° and ∠B = 105°.\n\nSolution:\n\nThe given details are AB = BC = 3 cm, AD = 5 cm, ∠A = 90° and ∠B = 105°.\n\nStep 1- Draw a line AB = 3cm\n\nStep 2- Construct and angle of 90o at A.\n\nStep 3- Cut an arc of radius 5cm with A as the center to mark that point as D.\n\nStep 4- Construct and angle of 105o at B.\n\nStep 5- Cut an arc of radius 3cm with B as the center to mark that point as C.\n\nStep 6- Now join AD, BC and CD", null, "8. Construct a quadrilateral BDEF, where DE = 4.5 cm, EF = 3.5 cm, FB = 6.5 cm, ∠F = 50° and ∠E = 100°.\n\nSolution:\n\nThe given details are DE = 4.5 cm, EF = 3.5 cm, FB = 6.5 cm, ∠F = 50° and ∠E = 100°.\n\nStep 1- Draw a line EF = 3.5cm\n\nStep 2- Construct and angle of 100o at E.\n\nStep 3- Cut an arc of radius 4.5cm with E as the center to mark that point as D.\n\nStep 4- Construct and angle of 50o at F.\n\nStep 5- Cut an arc of radius 6.5cm with F as the center to mark that point as B.\n\nStep 6- Now join DE, FB and DB", null, "EXERCISE 18.5 PAGE NO: 18.13\n\n1. Construct a quadrilateral ABCD given that AB = 4 cm, BC = 3 cm, ∠A = 75°, ∠B = 80° and ∠C = 120°.\n\nSolution:\n\nThe given details are AB = 4 cm, BC = 3 cm, ∠A = 75°, ∠B = 80° and ∠C = 120°.\n\nStep 1- Draw a line AB = 4cm\n\nStep 2- Construct and angle of 75o at A.\n\nStep 3- Construct and angle of 80o at B.\n\nStep 4- Cut an arc of radius 3cm with B as the center to mark that point as C.\n\nStep 5- Construct and angle of 120o at C such that it meets the line segment AX, mark that point as D.\n\nStep 6- Now join BC, CD and DA", null, "2. Construct a quadrilateral ABCD where AB = 5.5 cm, BC = 3.7 cm, ∠A = 60°, ∠B = 105° and ∠D = 90°.\n\nSolution:\n\nThe given details are AB = 5.5 cm, BC = 3.7 cm, ∠A = 60°, ∠B = 105° and ∠D = 90°.\n\nWe know that ∠A + ∠B + ∠C + ∠D = 360o\n\n∴ ∠C = 105o\n\nStep 1- Draw a line AB = 5.5cm\n\nStep 2- Construct and angle of 60o at A.\n\nStep 3- Construct and angle of 105o at B.\n\nStep 4- Cut an arc of radius 3.7cm with B as the center to mark that point as C.\n\nStep 5- Construct and angle of 105o at C such that it meets the line segment AX, mark that point as D.\n\nStep 6- Now join BC, CD and DA", null, "3. Construct a quadrilateral PQRS where PQ = 3.5 cm, QR = 6.5 cm, ∠P = ∠R = 105° and ∠S = 75°.\n\nSolution:\n\nThe given details are PQ = 3.5 cm, QR = 6.5 cm, ∠P = ∠R = 105° and ∠S = 75°.\n\nWe know that ∠P + ∠Q + ∠R + ∠S = 360o\n\n∴ ∠Q = 75o\n\nStep 1- Draw a line PQ = 3.5cm\n\nStep 2- Construct and angle of 105o at P.\n\nStep 3- Construct and angle of 75o at Q.\n\nStep 4- Cut an arc of radius 6.5cm with Q as the center to mark that point as R.\n\nStep 5- Construct and angle of 105o at R such that it meets the line segment PX, mark that point as S.\n\nStep 6- Now join QR, RS and PS", null, "4. 
Construct a quadrilateral ABCD when BC = 5.5 cm, CD = 4.1 cm, ∠A = 70°, ∠B = 110° and ∠D = 85°.\n\nSolution:\n\nThe given details are BC = 5.5 cm, CD = 4.1 cm, ∠A = 70°, ∠B = 110° and ∠D = 85°.\n\nWe know that ∠A + ∠B + ∠C + ∠D = 360o\n\n∴ ∠C = 95o\n\nStep 1- Draw a line BC = 5.5cm\n\nStep 2- Construct and angle of 110o at B.\n\nStep 3- Construct and angle of 95o at C.\n\nStep 4- Cut an arc of radius 4.1cm with C as the center to mark that point as D.\n\nStep 5- Construct and angle of 85o at D such that it meets the line segment BX, mark that point as A.\n\nStep 6- Now join CD, DA and BA", null, "5. Construct a quadrilateral ABCD ∠A = 65°, ∠B = 105°, ∠C = 75°, BC = 5.7 cm and CD = 6.8 cm.\n\nSolution:\n\nThe given details are ∠A = 65°, ∠B = 105°, ∠C = 75°, BC = 5.7 cm and CD = 6.8 cm.\n\nWe know that ∠A + ∠B + ∠C + ∠D = 360o\n\n∴ ∠D = 115o\n\nStep 1- Draw a line BC = 5.7cm\n\nStep 2- Construct and angle of 105o at B.\n\nStep 3- Construct and angle of 75o at C.\n\nStep 4- Cut an arc of radius 6.8cm with C as the center to mark that point as D.\n\nStep 5- Construct and angle of 115o at D such that it meets the line segment BX, mark that point as A.\n\nStep 6- Now join CD, DA and BA", null, "6. Construct a quadrilateral PQRS in which PQ = 4 cm, QR = 5 cm ∠P = 50°, ∠Q = 110° and ∠R = 70°.\n\nSolution:\n\nThe given details are PQ = 4 cm, QR = 5 cm ∠P = 50°, ∠Q = 110° and ∠R = 70°.\n\nStep 1- Draw a line PQ = 4cm\n\nStep 2- Construct and angle of 50o at P.\n\nStep 3- Construct and angle of 110o at Q.\n\nStep 4- Cut an arc of radius 5cm with Q as the center to mark that point as R.\n\nStep 5- Construct and angle of 70o at R such that it meets the line segment PX, mark that point as S.\n\nStep 6- Now join QR, RS and PS", null, "" ]
[ null, "https://www.facebook.com/tr", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-1.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-1-1.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-1-2.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-1-3.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2020/10/rd-sharma-class-8-maths-chapter-18-Ex-1-4.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-1-5.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-2.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-2-1.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-2-2.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-2-3.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-3.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2020/10/rd-sharma-class-8-maths-chapter-18-Ex-3-1.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-3-2.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-3-3.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2020/10/rd-sharma-class-8-maths-chapter-18-Ex-4.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-4-1.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2020/10/rd-sharma-class-8-maths-chapter-18-Ex-4-2.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2020/10/rd-sharma-class-8-maths-chapter-18-Ex-4-3.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2020/10/rd-sharma-class-8-maths-chapter-18-Ex-4-4.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-4-5.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-5.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-5-1.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-5-2.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2020/10/rd-sharma-class-8-maths-chapter-18-Ex-5-3.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-class-8-maths-chapter-18-Ex-5-4.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-1.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-2.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-3.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-4.png", null, "https://cdn1.byjus.com/wp-content/uploads/2020/10/rd-sharma-solutions-for-class-8-maths-chapter-18-5.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-6.png", null, 
"https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-7.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-8.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-9.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-10.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-11.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-12.png", null, "https://cdn1.byjus.com/wp-content/uploads/2020/10/rd-sharma-solutions-for-class-8-maths-chapter-18-13.png", null, "https://cdn1.byjus.com/wp-content/uploads/2020/10/rd-sharma-solutions-for-class-8-maths-chapter-18-14.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-15.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-16.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-17.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-18.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-19.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-20.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-21.png", null, "https://cdn1.byjus.com/wp-content/uploads/2020/10/rd-sharma-solutions-for-class-8-maths-chapter-18-22.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-23.png", null, "https://cdn1.byjus.com/wp-content/uploads/2020/10/rd-sharma-solutions-for-class-8-maths-chapter-18-24.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-25.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-26.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-27.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-28.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-29.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-30.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/12/rd-sharma-solutions-for-class-8-maths-chapter-18-31.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9101859,"math_prob":0.99621516,"size":21840,"snap":"2021-43-2021-49","text_gpt3_token_len":7616,"char_repetition_ratio":0.23603225,"word_repetition_ratio":0.6451742,"special_character_ratio":0.35050365,"punctuation_ratio":0.14490955,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9988084,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116],"im_url_duplicate_count":[null,null,null,7,null,7,null,7,null,7,null,4,null,7,null,7,null,7,null,7,null,7,null,7,null,3,null,7,null,7,null,3,null,7,null,3,null,3,null,3,null,7,null,7,null,7,null,7,null,3,null,7,null,7,null,7,null,7,null,7,null,7,null,4,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,3,null,3,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,3,null,7,null,3,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T06:14:04Z\",\"WARC-Record-ID\":\"<urn:uuid:6b94cc54-ef62-4766-8c10-1d5694ab32fa>\",\"Content-Length\":\"760422\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f19f0805-9a15-4044-8207-fcd34def124b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d5276cba-8299-44b1-9f1b-8dc4b8f9feb2>\",\"WARC-IP-Address\":\"162.159.130.41\",\"WARC-Target-URI\":\"https://byjus.com/rd-sharma-solutions/class-8-maths-chapter-18-practical-geometry/\",\"WARC-Payload-Digest\":\"sha1:IQEMWEWAAH2NZZWV2FTM2YOE5RF4ZAG3\",\"WARC-Block-Digest\":\"sha1:N26RPLHTR7CUTTNFIWCBIFPTXIWSXRFV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585561.4_warc_CC-MAIN-20211023033857-20211023063857-00307.warc.gz\"}"}
https://scholar.archive.org/search?q=On+the+Minimum+Volume+of+a+Perturbed+Unit+Cube.
[ "Filters\n\n17,302 Hits in 4.9 sec\n\n### On the Minimum Volume of a Perturbed Unit Cube [chapter]\n\nJin-Yi Cai\n2002 Lecture Notes in Computer Science\nWe give exact bounds to the minimum volume of a parallelepiped whose spanning vectors are perturbations of the n unit vectors by vectors of length at most .  ...  This extends Micciancio's recent sharp bounds to all possible values of . We also completely determine all possible perturbations with length at most that achieve this minimum volume.  ...  Suppose Q is the unit cube, spanned by the n unit vectors e 1 = (1, 0, . . . , 0), e 2 = (0, 1, . . . , 0), . . . , e n = (0, 0, . . . , 1), Q = { n i=1 a i e i | 0 ≤ a i ≤ 1, 1 ≤ i ≤ n}. (1) Suppose now  ...\n\n### Instability of the wet cube cone soap film\n\nKenneth Brakke\n2005 Colloids and Surfaces A: Physicochemical and Engineering Aspects\nA \"dry\" conical soap film on a cubical frame is not stable.  ...  This paper presents numerical simulation evidence that the the wet cone is unstable for low enough liquid fraction, with the critical liquid fraction being about 0.000278.  ...  Given the restricted nature of the perturbations considered, these values only give a lower bound on the true critical volume.  ...\n\n### Stearic acid solubility and cubic phase volume\n\nWalter F. Schmidt, Justin R. Barone, Barry Francis, James B. Reeves\n2006 Chemistry and Physics of Lipids\nBelow 4% SA/volume (e.g. in acetonitrile), the head and foot of each SA molecules on average is more than one solvent molecule away from the head and foot of a neighboring SA molecule.  ...  At 50% SA/cubic volume, -CH 2groups on SA molecules are separated from neighboring -CH 2 -groups on SA molecules by a monolayer of solvent molecules.  ...  The net dipole moment for the cube is essentially the same as for only one molecule of SA. On average for every 13 head groups of SA on 1 surface of the cube, 12 will be on the opposite cube.  ...\n\n### Effect of gravity on the orientation and detachment of cubic particles adsorbed at soap film or liquid interfaces\n\nIoan Tudur Davies, Christophe Raufaste\n2021 Soft Matter\nWe investigate the interaction that occurs between a light solid cube falling under gravity and a horizontal soap film that is pinned to a circular ring. We observe in both...  ...  CR acknowledges support by the French government, through the National Research Agency (ANR-20-CE30-0019) and through the UCA JEDI Investments in the Future project of the National Research Agency (ANR  ...  Acknowledgements We thank Simon Cox for stimulating discussions and Ken Brakke for developing and maintaining the Surface Evolver. TD acknowledges Supercomputing Wales facilities.  ...\n\n### Worst-case bounds for subadditive geometric graphs\n\nMarshall Bern, David Eppstein\n1993 Proceedings of the ninth annual symposium on Computational geometry - SCG '93\nWe consider graphs such as the minimum spanning tree, minimum Steiner tree, minimum matching, and traveling salesman tour for n points in the d-dimensional unit cube.  ...  This is a consequence of a general \"gap theorem\": for any subadditive geometric graph, either the worst-case sum of edge lengths is O(n (d−1)/d ) and the sum of dth powers is O(log n), or the sum of edge  ...  For each of the graphs, a regular grid of points in the unit cube shows that L(G(X)) is Ω(n (d−1)/d ).  
...\n\n### On the ratio of the string tension and the glueball mass squared in the continuum\n\nPierre van Baal\n1986 Nuclear Physics B\nWe present the weak coupling non-perturbative expression for the energy of 't Hooft type electric flux.  ...  Combining this with Liischer's perturbative scheme for the glueball mass M(0+), we propose a method to estimate the ratio o'/M(0+) 2, with o-the string tension.  ...  SU(2) Yang-Mills on a torus The space we work on is the 3-dimensional torus, specified by a cube L x L x L.  ...\n\n### Calculation of energy relaxation rates of fast particles by phonons in crystals\n\nM. P. Prange, L. W. Campbell, D. Wu, F. Gao, S. Kerisit\n2015 Physical Review B\nWe present ab initio calculations of the temperature-dependent exchange of energy between a classical charged point-particle and the phonons of a crystalline material.  ...  We discuss the influence of the form assumed for quasiparticle dispersion on theoretical estimates of electron cooling rates.  ...  The subscript s refers to the unit cell and κ refers to the sublattice which hosts nuclei of charge Z κ . The crystal has N repeated unit cells which occupy a volume V = N Ω.  ...\n\n### The optimal centroidal Voronoi tessellations and the gersho's conjecture in the three-dimensional space\n\nQiang Du, Desheng Wang\n2005 Computers and Mathematics with Applications\nWe provide abundant evidence to substantiate the claim of the conjecture: the body-centered-cubic lattice (or Par6) based centroidal Voronoi tessellation has the lowest cost (or energy) per unit volume  ...  In this paper, we conduct extensive numerical simulations to investigate the asymptotic structures of optimal centroidal Voronoi tessellations for a given domain.  ...  From the statistics of the energy per unit volume of the final CVTs, it can be seen that there is a tendency to approach the optimal BCC structure.  ...\n\n### Description of multi-particle systems using Voronoi polyhedra\n\nY.C Liao, D.J Lee, Bing-Hung Chen\n2001 Powder Technology\nThe Ž . Ž . methodology employed herein initiates with a fixed number of particles ranging from 64 to 256 on various lattice sites fcc, bcc and sc .  ...  The position of each particle is then perturbed using three different methods, with the magnitude being controlled by the variable p.  ...  Acknowledgements The authors appreciate Prof. C.Y. Mou of the Department of Chemistry, National Taiwan University, for providing the VP program.  ...\n\n### Shapes and Textures for Rendering Coral [chapter]\n\nNelson L. Max, Geoff Wyvill\n1991 Scientific Visualization of Physical Phenomena\nThe resulting contour surfaces are rendered by ray tracing, using a generalized volume textare to produce shading and \"bump mapped\" normal perturbations.  ...  Abstract A growth algorithm has been developed to build coral shapes out of a tree of spheres. A volume density defined by the spheres is contoured to give a \"soft object\".  ...  Department of Energy under contract number W-7405-Eng-48to the Lawrence Livermore NationalLaboratory.We are alsoindebtedto the Universityof Otago WilliamEvans Fund forfinancial supportand to Television  ...\n\n### Simulating the interaction between a descending super-quadric solid object and a soap film\n\nI. T. Davies\n2018 Proceedings of the Royal Society A\nWe vary the shape of the falling object from a sphere to a cube by changing a single shape parameter as well as varying the initial orientation and position of the object.  ...  
We show that a cubic particle in a particular orientation experiences the largest drag force, and that this orientation is also the most likely outcome of dropping a cube from an arbitrary orientation  ...  Brakke for providing and supporting the Surface Evolver.  ...\n\n### Page 4071 of Mathematical Reviews Vol. , Issue 91G [page]\n\n1991 Mathematical Reviews\nIn this paper, the authors consider graphs G = (V,,E), whose vertices {x;,---,Xn} form a set of n points in the unit d-cube, i.e. V_ € 0, 1]?  ...  a circuit and a pair of dual variational principles, one of which we interpret as a principle of the minimum of the La- grange potential energy for statics, and the other, as a principle 90C Mathematical  ...\n\n### Interface stability in a slowly rotating low-gravity tank\n\nROGER F. GANS, FRED W. LESLIE\n1987 Journal of Spacecraft and Rockets\nA summary of the results is shown in Fig. 2, which gives L*, the length of the bubble normalized by the cube root of the bubble volume, as a function of 0*, the cube root of pQ? V/(8T)( =[V/R3yJe).  ...  f, + Uf,/A°), +fy4/(R?A) —f/(R7A)} (16) where A? = 1 + R’? = 1/g? on the free surface. Multiply the momentum equation by the complex conjugate of u and integrate over the liquid volume.  ...\n\n### A dissipative particle dynamics method for modeling the geometrical packing of filler particles in polymer composites\n\nJ. A. Elliott, A. H. Windle\n2000 Journal of Chemical Physics\nIn one case, entropically driven demixing was observed in a cube-sphere mixture.  ...  The technique is based on the calculation of the dissipative dynamics of an ensemble of fused soft spheres at constant temperature and pressure.  ...  They also wish to acknowledge the University of Cambridge High Performance Computing Facility for the use of their SGI Origin 2000 computer and associated staff support during the course of this project  ...\n\n### Lattice Delone simplices with super-exponential volume\n\nFrancisco Santos, Achill Schürmann, Frank Vallentin\n2007 European journal of combinatorics (Print)\nIn this short note we give a construction of an infinite series of Delone simplices whose relative volume grows super-exponentially with their dimension.  ...  This dramatically improves the previous best lower bound, which was linear.  ...  The first author was partially supported by the Spanish Ministry of Science and Education under grant MTM2005-08618-C02-02.  ...\n« Previous Showing results 1 — 15 out of 17,302 results" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8007186,"math_prob":0.93364024,"size":9752,"snap":"2022-27-2022-33","text_gpt3_token_len":2566,"char_repetition_ratio":0.109663524,"word_repetition_ratio":0.0040241447,"special_character_ratio":0.25656277,"punctuation_ratio":0.17708333,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9526917,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-10T17:51:35Z\",\"WARC-Record-ID\":\"<urn:uuid:f470c096-1035-4815-820d-0639b4cfa59e>\",\"Content-Length\":\"120433\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a1b8d12c-e2bc-42f0-9117-791d0a854d79>\",\"WARC-Concurrent-To\":\"<urn:uuid:0019e073-2470-459a-8e33-ef9aba5781a3>\",\"WARC-IP-Address\":\"207.241.225.9\",\"WARC-Target-URI\":\"https://scholar.archive.org/search?q=On+the+Minimum+Volume+of+a+Perturbed+Unit+Cube.\",\"WARC-Payload-Digest\":\"sha1:T6OXSXIVZABXO4PUFAYIZNNUEAPP2LEH\",\"WARC-Block-Digest\":\"sha1:XSMK7U4LHXLPQOHGP75UGAIEYPHISAQ6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571198.57_warc_CC-MAIN-20220810161541-20220810191541-00556.warc.gz\"}"}
https://www.arxiv-vanity.com/papers/cond-mat/0610264/
[ "# Spin-transfer in an open ferromagnetic layer: from negative damping to effective temperature\n\nJ.-E. Wegrowe, M. C. Ciornei, H.-J. Drouhin Laboratoire des Solides Irradiés, Ecole Polytechnique, CNRS-UMR 7642 & CEA/DSM/DRECAM, 91128 Palaiseau Cedex, France.\nJuly 3, 2022\n###### Abstract\n\nSpin-transfer is a typical spintronics effect that allows a ferromagnetic layer to be switched by spin-injection. Most of the experimental results about spin transfer (quasi-static hysteresis loops or AC resonance measurements) are described on the basis of the Landau-Lifshitz-Gilbert equation of the magnetization, in which additional current-dependent damping factors are added, and can be positive or negative. The origin of the damping can be investigated further by performing stochastic experiments, like one shot relaxation experiments under spin-injection in the activation regime of the magnetization. In this regime, the Néel-Brown activation law is observed which leads to the introduction of a current-dependent effective temperature. In order to justify the introduction of these counterintuitive parameters (effective temperature and negative damping), a detailed thermokinetic analysis of the different sub-systems involved is performed. We propose a thermokinetic description of the different forms of energy exchanged between the electric and the ferromagnetic sub-systems at a Normal/Ferromagnetic junction.\n\nThe derivation of the Fokker-Planck equation in the framework of the thermokinetic theory allows the damping parameters to be defined from the entropy variation and refined with the Onsager reciprocity relations and symmetry properties of the magnetic system. The contribution of the spin-polarized current is introduced as an external source term in the conservation laws of the ferromagnetic layer. Due to the relaxation time separation, this contribution can be reduced to an effective damping. The flux of energy transferred between the ferromagnet and the spin-polarized current can be positive or negative, depending on spin accumulation configuration. The effective temperature is deduced in the activation (stationary) regime, providing that the relaxation time that couples the magnetization to the spin-polarized current is shorter than the relaxation to the lattice.\n\n###### pacs:\n72.25.Hg, 75.47.De, 75.40.Gb\n\nIn the context of spintronics, the electrical resistance of magnetic nanostructures are tuned with the magnetization states. Giant magnetoresistance (GMR), or anisotropic magnetoresistance (AMR) allow the magnetization states of nano-layers to be measured with great precision. Such magnetoresistances are easily scalable reading processes and are used for magnetic sensors and random access memorie (MRAM) technology. The possibility of controlling the magnetic configuration of a magnetic nanostructure by injecting spins emerged only in recent studies, opening the way to a readily scalable writing process for MRAMs application. This approach is also extended to thermally assisted switching, in which the heat fluxes are also exploited in order to help the magnetization reversal. In order to control the magnetic configurations and their stabilities (for reading and writing processes), in such magnetic nanopillars, it is necessary to understand on one hand the processes responsible for the magnetization reversal (in the presence of a magnetic field and heat), and on the other hand, the processes governing spin-dependent electronic transport at normal/Ferromagnetic interfaces. 
Taken separately, both effects are rather well understood today. However, coupling the two processes leads one to consider a large variety of possible mechanisms, called spin-transfer, that involve an ensemble of non-equilibrium sub-systems in interaction, with different populations of electrons and different populations of spins. The present work tries to clarify this picture with a phenomenological analysis based on non-equilibrium thermodynamics of open systems.\n\nMagnetization reversal provoked by spin injection has been observed in magnetic nanostructures of various morphologies, from spin-valve multilayers Albert ; Myers ; Julie ; Sun ; APL ; Kent ; Deac to nanowires EPL ; Derek ; Marcel or point contacts Tsoi0 ; Tsoi ; PRLStiles ; Rippard , and different types of magnetic domain walls BergerDW ; JulieDW ; Vernier ; Klaui ; SShape ; Luc . In order to describe and interpret these observations, physicists were forced to add one or two current-dependent terms into the well-known dynamical equations that describe a ferromagnetic layer coupled to a heat bath (Fokker-Planck or corresponding Landau-Lifshitz-Gilbert equations). However, the question remains open about the deterministic (e.g. spin-torque) or stochastic (e.g. irreversible) nature of the terms to be added.\n\nIt has been observed that for a time window larger than the nanosecond time scale, and in the framework of one-shot measurements (i.e. non-averaged, or irreversible measurements), the magnetization reversal induced by spin-injection is an activated process, with two-level fluctuations MSU ; SPIE ; Fabian ; Pufall or simple irreversible jumps Guittienne ; SPIE . In these experiments, governed by stochastic fluctuations and noise, the observed effect is accounted for by a current-dependent effective temperature in the Néel-Brown activation law SPIE . In contrast, for quasi-static measurements (e.g. magnetoresistance measured as a function of the magnetic field or current with DC systems or lock-in detection systems) and for high-frequency measurements, oscillations and resonances indicate, in the frequency domain, the manifestation of quasi-ballistic precession effects Rippard ; Pufall ; Kiselev ; Covington ; Krivorotov . In these last experiments, the stochastic nature of the signal is averaged out, and the behavior is described in terms of current-dependent negative damping within a generalized Landau-Lifshitz-Gilbert (LLG) equation. This negative damping formulation is motivated by the pioneering works of Berger Berger and Slonczewski Sloncz about the deterministic spin transfer torque theory. However, the deterministic approach cannot directly account for the magnetic relaxation measurements performed in the activation regime (as discussed in Sec. IV-A below). The hypothesis of Slonczewski's spin-torque (presented as a current-dependent deterministic term in the microscopic Landau-Lifshitz-Gilbert equation) is not useful as such in the description proposed here, i.e. in the context of open systems.\n\nIn order to justify the introduction of the counterintuitive phenomenological parameters (effective temperature and negative damping), a detailed analysis of the different sub-systems is performed on the basis of thermokinetic theory Prigogine53 ; Prigogine ; Guggenheim ; Stuck ; DeGroot ; Kuiken ; Parrott ; Mazur ; Gruber ; Vilar ; Rubi2 ; PRBThermo ; FourChan ; MTEPW .
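To make the activation picture above concrete, the short sketch below evaluates the Néel-Brown waiting time with a current-dependent effective temperature. It is only an illustration of the scaling: the attempt time, barrier height, critical current and the assumed form T_eff(I) = T/(1 − I/I_c) are placeholder choices, not parameters extracted from the experiments cited above.

```python
import math

# Néel-Brown activation law with a current-dependent effective temperature.
# Every numerical value and the form of T_eff(I) below are illustrative
# assumptions, not parameters taken from the experiments cited above.

k_B  = 1.380649e-23        # Boltzmann constant (J/K)
tau0 = 1e-9                # attempt time, assumed to be of order 1 ns
T    = 300.0               # lattice temperature (K)
E_b  = 40.0 * k_B * T      # barrier chosen so that E_b / (k_B T) = 40 at zero current
I_c  = 5.0e-3              # assumed critical current (A)

def effective_temperature(I):
    """Toy current dependence: T_eff grows as the current approaches I_c."""
    return T / (1.0 - I / I_c)

def waiting_time(T_eff):
    """Mean Néel-Brown waiting time: tau = tau0 * exp(E_b / (k_B * T_eff))."""
    return tau0 * math.exp(E_b / (k_B * T_eff))

for I in (0.0, 2.0e-3, 4.0e-3, 4.8e-3):
    T_eff = effective_temperature(I)
    print(f"I = {I * 1e3:4.1f} mA  ->  T_eff = {T_eff:7.1f} K,  tau = {waiting_time(T_eff):.3e} s")
```

At zero current the waiting time is set by the bare lattice temperature; as the current approaches the assumed critical value, the effective temperature grows and the waiting time collapses by many orders of magnitude, which is the qualitative trend that the effective-temperature description is meant to capture. The thermokinetic analysis announced above is what justifies such an effective temperature from the underlying fluxes.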
The first step (first section below) is to identify the relevant sub-systems of interest ( pointing out the difference between the spin-accumulation due to the diffusion of spin-dependent conduction electrons at an interface, and the magnetization of a ferromagnetic layer), the coupling between them, and the role of microscopic degree of freedom that will be reduced to the action of the environment. In section two, spin-injection and spin-dependent transport are described in the framework of the two spin-channel approximation (a conduction channel that carries spin up and a conduction channel that carries spin down, defined by the conductivities). Giant magnetoresistance, spin-accumulation, and corresponding entropy production, or heat transfer, are deduced. Beyond the two spin channel approximation, the analysis is extended to four channels with the introduction of two other electronic populations (typically -like for conduction electrons, and -like for the ferromagnetic order parameter) and the relaxation between them. In the same manner as spin-flip scattering coupled the spin-up and spin-down channels, this relaxation defines a dissipative coupling between the ferromagnet and the spin-dependent electric sub-systems. The third section is devoted to the detailed description of the ferromagnetic order parameter coupled to a heat bath (without spin-injection). Both the rotational Fokker-Planck equation and the corresponding LLG equation are derived in the framework of the thermokinetic theory, i.e. with the help of the first two laws of thermodynamics and the Onsager reciprocity relations only. The coupling of the ferromagnetic order parameter to the heat bath is introduced via the chemical potential with a typical Maxwell-Boltzmann diffusion term including the temperature Prigogine53 ; Mazur . The Néel-Brown law is deduced in the activation regime.\n\nThe last section is devoted to the ferromagnetic Brownian motion activated by spin-injection. The contribution of the spin-polarized current is introduced by the like relaxation, as a source term into the conservation laws of the magnetization. Explicitly, it is shown that if is the density of magnetic moments oriented in a given direction of the unit sphere, and is the corresponding flux of magnetic moments (this flux is not a displacement in the usual space), the conservation of writes: , where the divergence is defined on the sphere and is the relaxation rate, integrated through the Normal-Ferromagnetic interfaces. This equation defines the irreversible spin-transfer occurring in the ferromagnetic layer, taken as an open system. The relaxation rate is related to the spin-accumulation through an Onsager transport coefficient , (where is proportional to the current). is linked to the relaxation times through the charge conservation laws (or electric screening properties).\n\nDue to the large relaxation time separation, the contribution of the source term can be reduced to the effect of an environment that is responsible for an effective damping and effective fluctuations (or effective temperature). The energy transferred between ferromagnetic layer and the sub-system defined by the spin-accumulation conduction electrons can be positive or negative, depending on the sign of the spin accumulation at the different interfaces. 
The effective temperature is deduced in the activation (stationary) regime, because the relaxation time that couples the magnetization to the spin-polarized current short cuts the relaxation to the lattice.\n\n## I Thermokinetic approach\n\n### i.1 Interacting sub-systems\n\nThe general scheme of the thermokinetic approach is described in the references Prigogine ; Stuck ; DeGroot ; Kuiken ; Mazur . The method consists in defining the state of the system with a set of the relevant extensive variables, say , where is, e.g. the densities of particles in the sub-system , or equivalently, the density of component of a multicomponent fluid, and is the total entropy density. The conservation equations should then be written, and the two laws of thermodynamics applied. The conservation equation for the component writes:\n\n ∂ni∂t=−div(→Ji)+Σjνij˙Ψj (1)\n\nThe divergence of the current describes the conservative part of the process, and the term is a source term that describes the relaxation of components into the component (), or inversely () RquChim . It is proportional to the inverse of the relaxation time (see Appendix A). Physically, the term describes the relaxation process that changes the internal degree of freedom (e.g. spins, electric charges, internal configuration). In terms of chemical reactions, is the velocity of the reaction, i.e. the generalized flux thermodynamically conjugated to the chemical affinity (defined below). The summation over all sub-systems, or all components of the fluid is that of a conserved variable: . The same holds, of course, for the energy : , where is the flux of energy. In contrast, the entropy production of the total system is not conservative in general, due to the irreversible processes (in other terms, information is lost). The equation for the entropy production of the whole system takes the canonical form , where is the flux of entropy, and is the internal entropy production, or irreversibility, which is a consequence of the second law of thermodynamics: (assuming ). According to the first law of thermodynamics, the energy , is a state function that is also scalar, extensive and conserved, so that\n\n ∂E(s,{xi})∂t=∂E∂s∂s∂t+Σi∂E∂xi.∂xi∂t (2)\n\nwhere is the temperature, is the generalized force associated with the flux . In the following we will deal exclusively with the chemical potentials , unless specified otherwise (i.e. there is no need to introduce other extensive variables). The following Gibbs relation is obtained as a direct consequence of the first law:\n\n T∂s∂t=−div(→JE)+Σiμidiv(→Ji)−Σijμiνij˙Ψj (3)\n\nUsing the development , Eq. (3) can be re-written in the canonical form:\n\n ∂s∂t=−div(→Js)+I (4)\n\nwhere:\n\nwhere the last term on the right hand side defines the dissipative coupling between the sub-systems. As will be shown in the last section, this term is responsible for the irreversible spin-transfer effect described in this work. What is unusual in dealing with the second law, is to manipulate an inequality instead of an equality, and consequently to deal with sufficient conditions instead of equivalences. Here, the condition leads to a positive matrix of Onsager-Casimir transport coefficients that are state functions of the variables , in order to build a positive quadratic form. The condition is fulfilled if the flux and the relaxation velocity have the form\n\n (6)\n\nwhere\n\n Aj≡−Σkνikμk (7)\n\nis the chemical affinity of the corresponding reaction (and we have ) DeDonder . 
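A minimal numerical sketch may help fix ideas about the relaxation term introduced above. The snippet below integrates a zero-dimensional version of the conservation law (diffusion dropped), dn_α/dt = −Ψ̇ and dn_γ/dt = +Ψ̇ with Ψ̇ = L(μ_α − μ_γ), using an ideal-gas-like chemical potential μ ∝ ln n; the value of the Onsager coefficient L, the time step and the initial densities are arbitrary illustrative choices, not quantities from the paper.

```python
import math

# Zero-dimensional sketch of the relaxation (source) term of Eq. (1):
# dn_alpha/dt = -Psi_dot, dn_gamma/dt = +Psi_dot, with Psi_dot = L * (mu_alpha - mu_gamma).
# All numbers are illustrative, not material parameters.

kBT = 1.0                    # work in units where k_B * T = 1
L   = 0.5                    # Onsager relaxation coefficient (assumed)
dt  = 0.01                   # integration time step
n_a, n_g = 0.9, 0.1          # initial densities of the two populations

def mu(n):
    """Ideal-gas-like chemical potential, up to an irrelevant constant."""
    return kBT * math.log(n)

for step in range(401):
    A = mu(n_a) - mu(n_g)    # chemical affinity of the alpha -> gamma reaction
    Psi_dot = L * A          # reaction velocity given by the Onsager relation
    if step % 100 == 0:
        print(f"t={step * dt:5.2f}  n_alpha={n_a:.4f}  n_gamma={n_g:.4f}  "
              f"entropy production L*A^2={L * A * A:.3e}")
    n_a -= Psi_dot * dt
    n_g += Psi_dot * dt
```

The total density stays constant while the affinity relaxes to zero, and the quadratic entropy production L A² remains positive throughout, which is the content of the inequality used above to constrain the Onsager coefficients.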
Furthermore, due to the time reversal symmetry of the microscopic equations, the transport coefficients follow the Onsager-Casimir reciprocity relations Onsager0 . The cross-coefficients that couple the flux to the relaxation process are assumed to be zero, because, according to the Curie principle, only processes of identical tensorial nature are coupled. Inserting Eq. (6) into the continuity equation Eq. (1), we obtain an equation of the time variation of the density in terms of derivatives of the chemical potentials :\n\n ∂ni∂t=ΣjLij∇2μj+ΣjkνijLikAk (8)\n\nIt is then sufficient to know the form of the chemical potential as a function of the density (for pur fluids : ) in order to derive the corresponding differential equation, or Fokker-Planck equation, with diffusion and relaxation terms (see sections II, III and VI below).\n\nWhat we gain in performing this analysis is to identify clearly the conservative and dissipative flux (through the internal entropy production), and to be able to define a dissipative process that couples the sub-systems beyond the usual deterministic coupling (electric field, magnetic field, pressure, etc…). This dissipative coupling appears with an additional transport coefficient , defined univocally via the transport equations. In the case studied below, the matrix is composed by theßconductivities associated to each channel (i.e. associated to a given electronic population), the thermal conductivity, or the corresponding Seebeck (thermoelectric power) and Peltier coefficients MTEPW ; Shi ; Gravier and the ferromagnetic transport coefficients: gyromagnetic ratio and the Gilbert damping coefficient . Beyond, the flux of entropy or heat allows the spin transfer to be understood in an open system in terms of relaxation with a supplementary Onsager coefficient . As shown in the last section, this term is responsible for an effective temperature and effective (negative) damping .\n\n### i.2 The model\n\nThe model is based on the hypothesis that the ferromagnetic order parameter is well differentiated from the sub-system composed by spin-polarized conduction electrons, although both systems exchange charges, spins, and heat through a relaxation mechanism that will be described in terms of internal variables Prigogine53 ; DeGroot ; Mazur . As shown above, the relaxation of an internal variable (or internal degree of freedom) defines a transport coefficient related to the corresponding relaxation time (, see appendix A for the relation to the relaxation time).\n\nWe hence start with the two sub-systems: the ferromagnet described by the magnetization and the two conducting spin-channel system of the conduction electrons. Both sub-systems are dynamically coupled through the relaxation time . This relaxation is qualified as interband relaxation, to be opposed to the intraband spin-flip relaxation introduced in the usual two spin-channel approximation. The conducting channels are usually described by the density of conduction electrons with spin up and the density of conduction electrons with spin down. The intraband coupling (accounted for by or ) is responsible for the spin-accumulation mechanism at stationary regime. For convenience, we redefine the two channels with the density of spin-polarized electrons (”spin conduction channel”) and the total density of electrons .\n\nFurthermore, the conduction channels are contacted to a power supply (current generator here). Strictly speaking, the magnetic system is also contacted to the power supply, e.g. 
through the electron of character Stearn . The conduction electrons are thermalized each-other through a well-known mechanism of elastic scattering (that defines the conduction electron reservoir), at the femto-second time scales (or below), and are also contacted to the lattice through the Fermi-Dirac distribution, and inelastic scattering . On the other hand, the ferromagnetic order parameter is contacted to the lattice with a well-known relaxation time that is measured in ferromagnetic resonance (FMR) experiments, and is typically of the order of the nanosecond (or few hundreds of picoseconds). This description leads to the model depicted in Fig 1(b).", null, "Figure 1: Thermokinetic picture of irreversible spin-transfer. Ferromagnetic system (with magnetization M), and electric system with spin accumulation density Δn and electronic density at the Fermi level n0. The chemical potential μ is defined for each spin channel. The three sub-systems are coupled together through the relaxation times τsd (interband s−d like relaxation) and τsf.(intraband spin-flip relaxation). The sub-systems are also coupled to the current generator I, and to the heat reservoirs, through the corresponding well known relaxation times τ0 (Néel-Brown waiting time), τe and τph: elastic and inelastic electronic relaxation times.\n\nThe basic idea developed below lays on the fact that the typical time scales of the dynamics of the two sub-systems are largely separated. There is a slow variable, the magnetization, and fast variables, the degree of freedom related to the spin of the conduction electrons. It is then possible to reduce the action of the fast variable to the role of an environment with regard to the magnetization, like for spin-bath relaxation. The effect of the coupling to the spin-dependent electronic sub-system will then be reduced to specific damping and fluctuation terms added to the usual stochastic equations for the magnetization. This will be our line of reasoning followed in the last section, after describing the two sub-systems.\n\n## Ii spin-dependent transport\n\nIn order to explain the high resistance and the high thermoelectric power observed in transition metals, Mott introduced the concept of spin-polarized current and suggested that s-d interband scattering plays an essential role in the conduction properties Mott . This approach in terms of two conduction bands Stearn , explained the existence of a spin-polarized current in the 3d ferromagnetic materials TwoChan , and was used for the description of anisotropic magnetoresistance (AMR) Potter0 ; Potter , the description of spin-polarizer Drouhin , and thermoelectric power Handbook . With the discovery of giant magnetoresistance (GMR) GMR and related effects Awschalom (like domain wall scattering Viret ; LevyDW ; Ulrich ; DWS discussed below ), the development of spintronics focused the discussion on spin-flip scattering occurring between spin-polarized conducting channels Gijs ; Buttler ; Levy0 ; Zutic ; Schmidt ; Marrows . The two-channel model, which describes the conduction electrons with majority and minority spins, is applied with great efficiency to GMR and spin injection effects Johnson ; Wyder ; Valet ; Levy ; FertDuvail ; Heide ; PRBThermo , including metal/semiconductor Molenkamp and metal/superconductor interfaces JedemaSupra . 
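Before the diffusive treatment, the intuition behind the two-channel picture can be summarized by the elementary two-resistor network below. This is the standard textbook illustration of GMR rather than the thermokinetic calculation developed in this paper, and the two channel resistances are arbitrary illustrative values.

```python
# Toy two-spin-channel (two-resistor) picture of giant magnetoresistance.
# Illustrative only: r_maj and r_min are arbitrary, not fitted to any material.

def series(*r):
    """Resistors in series."""
    return sum(r)

def parallel(a, b):
    """Two resistors in parallel."""
    return a * b / (a + b)

r_maj, r_min = 1.0, 4.0   # assumed resistances seen by majority / minority spins in one layer

# Parallel alignment: one channel is majority in both layers, the other minority in both.
R_P = parallel(series(r_maj, r_maj), series(r_min, r_min))
# Antiparallel alignment: each channel is majority in one layer and minority in the other.
R_AP = parallel(series(r_maj, r_min), series(r_min, r_maj))

print(f"R_parallel     = {R_P:.3f}")
print(f"R_antiparallel = {R_AP:.3f}")
print(f"GMR ratio (R_AP - R_P) / R_P = {(R_AP - R_P) / R_P:.1%}")
```

The antiparallel configuration is always the more resistive one in this lumped picture; the diffusive description that follows replaces these fixed resistances by spin-accumulation profiles governed by the Onsager coefficients defined above.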
In this context, it is sufficient to describe the diffusion process in terms of spin-flip scattering without the need to invoke interband s-d scattering.\n\nIt is convenient to generalize the two spin channel approach to any relevant transport channels, i.e. to any distinguishable electron populations and (defined by an internal degree of freedom). The local out-of-equilibrium state near the junction is then described by a non-vanishing chemical-potential difference between these two populations: . In other words, assuming that the presence of a junction induces a deviation from the local equilibrium, the and populations can be defined by the relaxation mechanism itself, that allows the local equilibrium to be recovered in the bulk material () PRBThermo . Such considerations have been presented in some important spintronics studies on the basis of microscopic calculations FertDuvail ; Heide ; Mott ; Potter0 ; Potter ; Gijs ; Levy0 ; Suzuki ; Tsymbal ; Baxter . The thermokinetic approach cond-mat allows us to deal with interband relaxation on an equal footing with spin-flip relaxation, with the help of the transport coefficients only. For this purpose, the two spin-channel model is generalized, with the introduction of the corresponding transport coefficients: the conductivities and of each channel define the total conductivity and the conductivity asymmetry ; the relaxation between both channels is described by the parameter (or equivalently, the relevant relaxation times ).\n\n### ii.1 The generalized two channel model", null, "Figure 2: Two channel model, including relaxation that couples the two electronic populations.\n\nIn the framework of the two conducting-channel model, which includes relaxation from one channel to the other, it easy to follow step by step the method described in the first section. The conservation laws write (assuming a 1D space variable ):\n\n ⎧⎪ ⎪ ⎪⎨⎪ ⎪ ⎪⎩∂nα∂t=−∂Jα∂z−˙Ψαγ∂nγ∂t=−∂Jγ∂z+˙Ψαγ (9)\n\nwhere and are the densities of particles in the channels .\n\nThe entropy variation writes:\n\n TI=−Jα∂μα∂z−Jγ∂μγ∂z−˙Ψαγ(μα−μγ) (10)\n\nthe application of the second law of thermodynamics leads to introduce the Onsager coefficients , , and PRBThermo ; cond-mat , such that:\n\n ⎧⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎩Jα=−σαe∂μα∂zJγ=−σγe∂μγ∂z˙Ψαγ=L(μα−μγ) (11)\n\nwhere describes the relaxation from the channel to the other channel in terms of velocity of the reaction . It is not necessary, in what follows, to distinguish between the electric part and the pure chemical part of the electro-chemical potentials (see FertLee ). The effects of the electric charge distribution are described in Appendix A, with the introduction of the screening length and the relation to the relaxation times. As shown in Appendix A, the Onsager coefficient is inversely proportional to the electronic relaxation times :\n\n L∝(gτα→γ+fτγ→α) (12)\n\nwhere and are two functions close to unity, and related to the electric charge distributions (see Appendix A). Note that due to our definition of and , there is no direct coupling between the two channels : there is no transport coefficients that couples the two first equations in Eq. (11). This is a consequence of the definition of the electronic populations, through the relaxation process itself (the populations are stable if ). Indeed, the out-of-equilibrium configuration at the interface is quantified by the chemical affinity , i.e. 
the chemical potential difference of the reaction.\n\nThe total current is constant:\n\n Jt=Jα+Jγ=−1e∂∂z(σαμα+σγμγ) (13)\n\nHowever, it is not possible to measure separately the different conduction channels, since any realistic electric contact short-cuts the two channels. What is measured is necessarily the usual Ohm’s law, , that imposes the reference electric potential to be introduced, together with the total conductivity Constantes . The potential is hence:\n\n eΦ=1σt(σαμα+σγμγ) (14)", null, "Figure 3: Junction between to layers I and II. Chemical potential profile over the interval [A,B] in the α and γ channels. The A and B points verify μα(A)=μγ(A) and μα(B)=μγ(B). The two straight lines represent the Φ variation in each region (ΦI, ΦII). It can be directly seen that the out-of-equilibrium resistance Rne is determined by the Φ discontinuity at the interface.\n\nLet us assume that the two channels collapse to a unique conduction channel for a specific configuration, the reference, which is a local equilibrium situation: . The out-of-equilibrium contribution to the resistance, , is calculated through the relation:\n\n JteRne=∫BA∂∂z(μα−eΦ(z))dz=∫BA∂∂z(μγ−eΦ(z))dz (15)\n\nso that\n\n Rne=−1Jte∫BAσα−σγ2σt∂Δμ∂zdz (16)\n\nwhere the measurement points and are located far enough from the interface (inside the bulk) so that (see Fig. 3). The integral in Eqs. (15) is performed over the regular part of the function only ( and are discontinuous) Integral . Eq. 16 allows the out-of-equilibrium resistance at a simple junction between two layers (composed by the layers and ) to be easily calculated. If the junction is set at and the conductivities are respectively and (), we have:\n\n JTeRne=∫0AσIα−σIγ2σt∂ΔμI∂zdz+∫B0σIIα−σIIγ2σt∂ΔμII∂zdz (17)\n\nThe equilibrium is recovered in the bulk, so that:\n\n Rne=(σIα−σIγσIt−σIIα−σIIγσIIt)Δμ(0)2Jte (18)\n\nThe chemical potential difference , which accounts for the pumping force opposed to the relaxation , is obtained by solving the diffusion equation deduced from Eqs. (11) and (9), and assuming a stationary regime for each channels,\n\n ∂2Δμ(z)∂z2=Δμ(z)l2diff (19)\n\nwhere\n\n l−2diff=eL(σ−1α+σ−1γ) (20)\n\nis the diffusion length related to the relaxation.\n\nAt the interface (), the continuity of the currents for each channel writes , were\n\n Jα(0)=−σασγeσt∂Δμ∂z+σασtJt (21)\n\nwhich leads to the general relation:\n\n Δμ(0)=(σIασIt−σIIασIIt)⎛⎝σIασIγσItlIdiff+σIIασIIγσIItlIIdiff⎞⎠−1eJt (22)\n\nInserting Eq. (22) into Eq. (18), we obtain the general expression for the out-of-equilibrium resistance (per unit area) produced by the relaxation mechanism at a junction:\n\n Rne=(σIα−σIγ2σIt−σIIα−σIIγ2σIIt)(σIασIt−σIIασIIt)⎛⎜⎝ ⎷σIασIγeLIσIt+ ⎷σIIασIIγeLIIσIIt⎞⎟⎠−1 (23)\n\nwhere we have used the relation :\n\n l−1diff=2√eLσt(1−β2) (24)\n\nIt is convenient to describe the conductivity asymmetry by a parameter such that and . The out-of-equilibrium contribution to the resistance then takes the following form:\n\n Rne=12(βI−βII)2√eLIσIt(1−β2I)+√eLIIσIIt(1−β2II) (25)\n\nIn the case of the subsystem described by two spin-channel, the relaxation leads to a spin-accumulation effect at the interface of a two identical ferromagnet with antiparallel configuration. 
The corresponding resistance contribution is:\n\n R↑↓sa=β2sσt(1−β2s)lsf=β2s√eLσt(1−β2s) (26)\n\nThis expression is the well-known giant magnetoresistance contribution Johnson ; Wyder ; Valet ; Levy ; PRBThermo ; Jedema2 ; George .\n\n### ii.2 The four channel approximation\n\nIn the previous subsections, two different electronic relaxation mechanisms have been invoked separately in order to describe giant magnetoresistance or anisotropic magnetoresistance. It is clear however that the two relaxations would take place in parallel, leading to a more complex redistribution of spins within the different channels. In the present subsection, we consider a system in which the two mechanisms coexist, leading to a four channel model FourChan .\n\nThe generic band structure (energy as a function of wave vector for a given direction) of a 3d ferromagnet is schematized in Fig. 4. The band s is parabolic and the exchange splitting is very small. In contrast, the d bands are strongly shifted between up and down spin carriers. The hybridized zone is schematized by the dotted lines at the intersection.", null, "Figure 4: Generic band structure for a 3d ferromagnet with s and d bands schematized for an arbitrary direction of the wave vector k. The shift between the two d bands for the two spin carriers up and down is exemplified. The hybridized zone is schematized with dotted lines at the junction between s and d bands. At the Fermi level four different electronic populations can be identified.\n\nThe system is composed by the reservoirs of the injected electrons and the ferromagnetic layer composed by the electrons. At the interface, current injection leads to a redistribution of the different electronic populations that are governed by spin polarization and charge conservation laws. Let us assume that the current injected is spin polarized in the down polarization (). The conservation laws should be written by taking into account the reaction mechanisms between the different populations. At short time scales (electronic scattering) the relaxation channels are assumed to be the following four\n\n(I) (spin-conserved - scattering)\n(II) (spin-flip scattering for the population)\n(III) (spin-flip - scattering)\n(IV) (spin-flip scattering for the population)\n\nProcess (I) is assumed to be the main mechanism responsible for anisotropic magnetoresistance (AMR). Process (II) leads to the well-known spin-accumulation effect and was also described in detail in the first subsections. According to the fact that the majority-spin band is full and lies at a sizable energy below the Fermi level, the current is negligible and the channel is frozen. Processes (III) and (IV) are hence negligible Drouhin . Consequently, we are dealing with a three-channel model .\n\nThe total current is composed by the three currents for each channel : . In order to write the conservation laws, the relaxation rate , is introduced to account for spin-conserved scattering, and the relaxation rate , is introduced in order to account for spin-flip scattering. Assuming that all channels are in a steady state (this condition will relax in the last section, where the magnetic system is coupled to the channels ) :\n\n ⎧⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎩∂nt∂t=−∂Jt∂z=0∂ns↑∂t=−∂Js↑∂z−˙Ψs=0∂ns↓∂t=−∂Js↓∂z−˙Ψsd+˙Ψs=0∂nd↓∂t=−∂Jd↓∂z+˙Ψsd=0 (27)\n\nwhere are respectively the total densities of particles and the density of particles in the in the , , channels. 
The system is described by the number of electrons present in each channel at a given time, that defines the four currents, plus the entropy of the system. The conjugate (intensive) variables are the chemical potentials . As described in Appendix B, the application of the first and second laws of thermodynamics allows us to deduce the Onsager relations of the system :\n\n ⎧⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎩Js↓=−σs↓e∂μs↓∂zJs↑=−σs↑e∂μs↑∂zJd↓=−σd↓e∂μd↓∂z˙Ψsd=Lsd(μs↓−μd↓)˙Ψs=Ls(μs↑−μs↓) (28)\n\nwhere the conductivity of each channel has been introduced. The first four equations are nothing but Ohm’s law applied to each channel, and the two last equations introduce new Onsager transport coefficients (see Appendix B), and , that respectively describe the relaxation (I) for minority spins under the action of the chemical potential difference and the spin-flip relaxation (II) under spin pumping . According to Appendix A, the Onsager coefficients are proportional to the corresponding relaxation times.\n\nFor convenience, we define the usual charge current , the minority-spin current , and the two polarized currents and . We introduce the and conductivities and . The conductivity imbalance and between respectively the and channels and the and channels are:\n\n ⎧⎪ ⎪ ⎪⎨⎪ ⎪ ⎪⎩β↓=σs↓−σd↓σ↓βs=σs↑−σs↓σs (29)\n\nEqs. (27) becomes :\n\n ⎧⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎩∂Jt∂z=∂Jd↓∂z+∂Js∂z=0∂J0↓∂z=˙ψs∂δJ↓∂z=−2˙ψsd−˙ψs∂J0s∂z=−˙ψsd∂δJs∂z=˙ψsd−2˙ψs (30)\n\nand, defining the quasi-chemical potentials and , Eqs. (28) becomes :\n\n ⎧⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎩J0↓=−σ↓2e(∂μ↓∂z+β↓∂Δμ↓∂z)δJ↓=−σ↓2e(β↓∂μ↓∂z+∂Δμ↓∂z)J0s=−σs2e(∂μs∂z+βs∂Δμs∂z)δJs=−σs2e(βs∂μs∂z+∂Δμs∂z)˙Ψsd=LsdΔμ↓˙Ψs=LsΔμs (31)\n\nThe equations of conservation [Eqs. (30)] and the above Onsager equations lead to the two coupled diffusion equations :\n\n ⎧⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎩∂2Δμ↓∂z2=1l2sdΔμ↓−1λ2sΔμs∂2Δμs∂z2=1λ2sdΔμ↓−1l2sfΔμs (32)\n\nwhere\n\n ⎧⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎩lsd≡√σ↓(1−β2↓)4eLsdλs≡√σ↓(1+β↓)2eLslsf≡√σs(1−β2s)4eLsλsd≡√σs(1−βs)2eLsd (33)\n\nA solution of Eqs. (32) is\n\n ⎧⎪ ⎪⎨⎪ ⎪⎩Δμ↓=Δμ1+Δμ2Δμs=λ2s((1l2sd−1Λ2+)Δμ1+(1l2sd−1Λ2−)Δμ2) (34)\n\nwith\n\n ⎧⎨⎩Δμ1=a1ezΛ++a2e−zΛ+Δμ2=b1ezΛ−+b2e−zΛ− (35)\n\nwhere\n\n Λ−2±=12(l−2sd+l−2sf)⎛⎜ ⎜⎝1±   ⎷1−4l−2sdl−2sf−λ−2sλ−2sd(l−2sd+l−2sf)2⎞⎟ ⎟⎠\n\nThe constants , , , are defined by the boundary conditions. It can then be seen that the usual spin accumulation corresponding to also depends on the spin-conserved electronic diffusion which is known to be efficient Drouhin and, conversely, that spin-conserved diffusion is able to lead to a spin accumulation, or spin-accumulation effects. Accordingly, we expect to measure some typical effects related to spin-accumulation in single magnetic layers, or if : this point will be illustrated in the new expression of the magnetoresistance (Eq. (39) below), and in Section IV through the effect of current induced magnetization switching (CIMS). relaxation adds a new contribution to the resistance, which plays the role of an interface resistance arising from the diffusive treatment of the band mismatch Gijs ; Buttler ; Levy0 .\n\nThe resistance produced by the usual spin-accumulation contribution, plus the contribution of relaxation, are defined (see Eq.   (16)) by\n\n Rsa=−1eJt∫AB∂∂z(μi−Φ(z))dz (36)\n\nwhere is the total electric field and is one of the chemical potentials. 
Providing that the total current is , or\n\n Jt=−σte∂∂z(σd↓σtμd↓+σs↓σtμs↓+σs↑σtμs↑) (37)\n\nThe total electric field can also be written (from Eqs. (28)) as\n\n Φ(z)=Jtσt=−1e(σd↓σt∂μd↓∂z+σs↓σtΔμ↓∂z+σs↑σtΔμs∂z) (38)\n\nwhere . The resistance is given by :\n\n Rsa=−1eJt∫BA(σs↓σt∂Δμ↓∂z+σs↑σt∂Δμs∂z)dz (39)\n\nThis three-channel model brings to light the interplay between band mismatch effects and spin accumulation, in a diffusive approach. It is interesting to note that the local neutrality charge condition which is often used (see for instance Eq. (4) in Rashba ) was not included, as described in Appendix A. On the contrary, we have imposed the conservation of the current at any point of the conductor. Indeed, electron transfer from a channel to another where the electron mobility is different, induces a local variation of the total current.\n\nThe resolution of the coupled diffusion equations is discussed elsewhere FourChan .\n\n### ii.3 Domain wall scattering\n\nIn the description performed until now, the spin quantification axis that defines up and down spin states was fixed through the whole structure (i.e. through the layers and the interfaces). Providing that the spin quantification axis follows the direction of the magnetization, it could be non-uniform throughout a ferromagnetic layer, or crossing an interface. This is especially the case in the presence of a magnetic domain wall. In a thin enough magnetic domain wall the spin would not follow adiabatically the quantification axis, leading to spin-dependent domain wall scattering (DWS) Viret ; LevyDW ; Ulrich ; DWS . This effect has been investigated intensively in the last decades in various structures Marrows . The underlaying idea is however rather simple, and can be formulated easily with a generalization of the two-spin channel approach. For the sake of simplicity, this generalization will be performed only for the two electronic populations .\n\nAs performed in reference PRBThermo (and appendix B), we start with the conservation of the particles for the two channels, in a discreet model. The system is described by a layer in contact with a left layer and a right layer . The spin-flip scattering introduced in the previous sections is described be the reaction rate . A probability of spin-flip alignment along the quantification axis is introduced. In the case of ballistic alignment where is the angle between the magnetization of two adjacent layers and . The conservation of the particles is now describes by:\n\n ⎧⎪ ⎪ ⎪⎨⎪ ⎪ ⎪⎩dNαdt=(1−Δϵ(k))Ik−1→kα−Ik→k+1α+Δϵ(k)Ik−1→kγ−˙ΨkdNγdt=(1−Δϵ(k))Ik−1→kγ−Ik→k+1γ+Δϵ(k)Ik−1→kα+˙Ψk (40)\n\nWith the notation introduced in the previous sections, the entropy variation can be written in the following way (Appendix B):\n\n TdSdt = PRl→1Φ−PΩ→RrΦ (41) +Ω∑k=212(Δμk−1−Δμk+2(1−Δϵ(k))Δμk)δIk−1→ks +Ω∑k=212(μk−1−μk)Ik−1→k0+Ω∑k=1Δμk˙Ψk\n\nwhere we have introduced , , , and . The terms and stand for heat and chemical transfer from the reservoirs to the system .\n\nAfter performing the continuum limit, the internal entropy production (or irreversibility) reads:\n\n T.I=−12∂μ0∂zJ0+12(−∂Δμ∂z+2ϵΔμ)δJ+Δμ˙Ψ (42)\n\nThe first term is the Joule effect, the second is the dissipation terms related to the spin-accumulation process that occurs at the interface, or for magnetic domain wall, and the third term is the dissipation due to spin-flip (or s-d) electronic relaxation. The expression of the entropy production Eq. (42) allows the Onsager relations generalizing Eq. (11) or Eq. 
(28) to be deduced:\n\n ⎧⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎩J0=−σ02e(∂μ0∂z+β(∂Δμ∂z−2ϵΔμ))δJ=−σ02e(β∂μ0∂z+∂Δμ∂z−2ϵΔμ)˙Ψ=LαγΔμ (43)\n\nwhere , and, as already introduced: and .\n\nThe diffusion equation for , obtained in the stationary regime, is modified accordingly:\n\n ∂2Δμ∂z2=⎛⎝1l2diff+1l2DW⎞⎠Δμ+1κ∂Δμ∂z (44)\n\nwhere the length as been defined in the first section Eq. (20) :\n\n ldiff=√σ0(1−β2)2eL (45)\n\nthe domain wall diffusion length is defined as:\n\n lDW=√(1−β2)4ϵ (46)\n\nwhile the length is given by:\n\n κ−1=ϵ2β21−β2 (47)\n\nThe magnetoresistance is modified with respect to Eq. (19), due to the new term in the diffusion equation. It is worth pointing out that a spin accumulation should be expected in case of spin polarized current () even without the usual spin-flip contribution, i.e. in the ballistic limit.\n\n## Iii Ferromagnetic brownian motion and magnetization switching\n\n### iii.1 Thermokinetic derivation of the Fokker-Planck equation\n\nThe description performed in the previous sections is related to the transport properties of charge carriers in case of spin polarized current. In spintronics experiments, the electric current is spin-polarized through a ferromagnetic layer, but it is not necessary to describe the ferromagnetic order parameter as such. This is of course no longer the case for current induced magnetization switching experiments, where the magnetization is the measured variable.\n\nThe magnetization is a fascinating degree of freedom, that has to be described in length in terms of rotational Brownian motion. The description of the dynamics of ferromagnetic particles coupled to a heat bath is a very active field of investigation Neel ; Brown ; Brown2 ; Coffey ; Palacios , and the resulting predictions are rather well known and validated experimentally at large WW ; WW2 ; Fruch and short (Bertram ; Smith ; Chappert ; ChappertBal ; Russek ) time scales. The magnetization relaxation described here is limited to the so-called Néel relaxation that involves only the magnetic moment, in contrast to the Debye inertial relaxation occurring in ferrofluids (in which the ferromagnetic particles rotates in a viscous environment, leading to surprising inertial effects like negative viscosity Rubi2 ).\n\nThe aim of this subsection is first to show that the rotational Fokker-Planck equation governing the dynamics of the magnetization of one monodomain particle coupled to the heat bath can also be obtained applying step by step the approach used in the previous sections. The resulting Fokker-Planck equation with the corresponding Onsager transport coefficients, and the hypothesis performed, can then be compared term by term to the previous study of spin-dependent charge transport.\n\n#### iii.1.1 Geometrical representation of the statistical ensemble\n\nLet be a statistical ensemble of identical monodomain particles of volume , having the same energy per unit volume , magnetization and thermostat temperature . The vector is defined by the angles and . The ensemble can be represented by a distribution of representative points over the unit sphere (fig. 5) with a density .", null, "Figure 5: a) The figure from the left illustrates the flow of representative points over the unit sphere: Jθ and Jϕ. 
b) The figure from the right illustrates a particular case of distribution of points on the sphere: the points are concentrated at two attractors, one with more particles than the other (asymmetric double well potential).\n\nWe divide the ensemble of representative points in sub-ensembles such that the magnetization is confined within the solid angle . (i.e. the representative points lie between two consecutive parallels and meridians over the sphere).\n\nAs the particles undergo changes of magnetization orientation, the representative points move on the sphere, and there is a net surface flux of representative points ; the representative points move from one sub-ensemble to another sub-ensemble . The probability of finding a particle with the magnetization orientation within the solid angle at a given time is .\n\n#### iii.1.2 Conservation laws\n\nThe sub-ensembles of representative points are described by the following extensive parameters: the entropy , the number of points and the energy , where and are the entropy and energy densities. The flow (of points, energy, and entropy) is described by the flux ( and ):\n\n →J=Jθ→uθ+Jϕ→uϕ (48)\n\nand accounts for the flow of the corresponding magnetic moments relaxing or precessing along the coordinates , where , , are the unit vectors in the spherical coordinate system.\n\nThe conservation laws of the number, energy and entropy of the particles contained in the sub-ensemble write :\n\n ⎧⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎩∂n∂t=−div→Jn∂e∂t=−div→" ]
[ null, "https://media.arxiv-vanity.com/render-output/6373179/x1.png", null, "https://media.arxiv-vanity.com/render-output/6373179/x2.png", null, "https://media.arxiv-vanity.com/render-output/6373179/x3.png", null, "https://media.arxiv-vanity.com/render-output/6373179/x4.png", null, "https://media.arxiv-vanity.com/render-output/6373179/x5.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89943457,"math_prob":0.9503093,"size":37341,"snap":"2022-40-2023-06","text_gpt3_token_len":7540,"char_repetition_ratio":0.174465,"word_repetition_ratio":0.02916012,"special_character_ratio":0.19230872,"punctuation_ratio":0.10918192,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97895455,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-04T02:54:34Z\",\"WARC-Record-ID\":\"<urn:uuid:fe1e64bc-8e38-4d2b-bbb2-e086abd96507>\",\"Content-Length\":\"1049416\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9a1002c3-793d-4bbb-9d0d-f3d079950319>\",\"WARC-Concurrent-To\":\"<urn:uuid:5e665d97-6243-4869-9bcb-5dbd75af1c03>\",\"WARC-IP-Address\":\"104.21.14.110\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/cond-mat/0610264/\",\"WARC-Payload-Digest\":\"sha1:WN2F3I2LMO2GHGIPOF5SUSTOUJDGWEUO\",\"WARC-Block-Digest\":\"sha1:EBWZGHGEYDEFWJKYYSYTBZH6JCUM6YPP\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500080.82_warc_CC-MAIN-20230204012622-20230204042622-00119.warc.gz\"}"}
https://openstax.org/books/calculus-volume-3/pages/2-3-the-dot-product
[ "Calculus Volume 3\n\n# 2.3The Dot Product\n\nCalculus Volume 32.3 The Dot Product\n\n### Learning Objectives\n\n• 2.3.1 Calculate the dot product of two given vectors.\n• 2.3.2 Determine whether two given vectors are perpendicular.\n• 2.3.3 Find the direction cosines of a given vector.\n• 2.3.4 Explain what is meant by the vector projection of one vector onto another vector, and describe how to compute it.\n• 2.3.5 Calculate the work done by a given force.\n\nIf we apply a force to an object so that the object moves, we say that work is done by the force. In Introduction to Applications of Integration on integration applications, we looked at a constant force and we assumed the force was applied in the direction of motion of the object. Under those conditions, work can be expressed as the product of the force acting on an object and the distance the object moves. In this chapter, however, we have seen that both force and the motion of an object can be represented by vectors.\n\nIn this section, we develop an operation called the dot product, which allows us to calculate work in the case when the force vector and the motion vector have different directions. The dot product essentially tells us how much of the force vector is applied in the direction of the motion vector. The dot product can also help us measure the angle formed by a pair of vectors and the position of a vector relative to the coordinate axes. It even provides a simple test to determine whether two vectors meet at a right angle.\n\n### The Dot Product and Its Properties\n\nWe have already learned how to add and subtract vectors. In this chapter, we investigate two types of vector multiplication. The first type of vector multiplication is called the dot product, based on the notation we use for it, and it is defined as follows:\n\n### Definition\n\nThe dot product of vectors $u=〈u1,u2,u3〉u=〈u1,u2,u3〉$ and $v=〈v1,v2,v3〉v=〈v1,v2,v3〉$ is given by the sum of the products of the components\n\n$u·v=u1v1+u2v2+u3v3.u·v=u1v1+u2v2+u3v3.$\n(2.3)\n\nNote that if $uu$ and $vv$ are two-dimensional vectors, we calculate the dot product in a similar fashion. Thus, if $u=〈u1,u2〉u=〈u1,u2〉$ and $v=〈v1,v2〉,v=〈v1,v2〉,$ then\n\n$u·v=u1v1+u2v2.u·v=u1v1+u2v2.$\n\nWhen two vectors are combined under addition or subtraction, the result is a vector. When two vectors are combined using the dot product, the result is a scalar. For this reason, the dot product is often called the scalar product. It may also be called the inner product.\n\n### Example 2.21\n\n#### Calculating Dot Products\n\n1. Find the dot product of $u=〈3,5,2〉u=〈3,5,2〉$ and $v=〈−1,3,0〉.v=〈−1,3,0〉.$\n2. Find the scalar product of $p=10i−4j+7kp=10i−4j+7k$ and $q=−2i+j+6k.q=−2i+j+6k.$\n\n### Checkpoint2.21\n\nFind $u·v,u·v,$ where $u=〈2,9,−1〉u=〈2,9,−1〉$ and $v=〈−3,1,−4〉.v=〈−3,1,−4〉.$\n\nLike vector addition and subtraction, the dot product has several algebraic properties. 
We prove three of these properties and leave the rest as exercises.\n\n### Theorem2.3\n\n#### Properties of the Dot Product\n\nLet $u,u,$ $v,v,$ and $ww$ be vectors, and let c be a scalar.\n\n$i.u·v=v·uCommutative propertyii.u·(v+w)=u·v+u·wDistributive propertyiii.c(u·v)=(cu)·v=u·(cv)Associative propertyiv.v·v=‖v‖2Property of magnitudei.u·v=v·uCommutative propertyii.u·(v+w)=u·v+u·wDistributive propertyiii.c(u·v)=(cu)·v=u·(cv)Associative propertyiv.v·v=‖v‖2Property of magnitude$\n\n#### Proof\n\nLet $u=〈u1,u2,u3〉u=〈u1,u2,u3〉$ and $v=〈v1,v2,v3〉.v=〈v1,v2,v3〉.$ Then\n\n$u·v=〈u1,u2,u3〉·〈v1,v2,v3〉=u1v1+u2v2+u3v3=v1u1+v2u2+v3u3=〈v1,v2,v3〉·〈u1,u2,u3〉=v·u.u·v=〈u1,u2,u3〉·〈v1,v2,v3〉=u1v1+u2v2+u3v3=v1u1+v2u2+v3u3=〈v1,v2,v3〉·〈u1,u2,u3〉=v·u.$\n\nThe associative property looks like the associative property for real-number multiplication, but pay close attention to the difference between scalar and vector objects:\n\n$c(u·v)=c(u1v1+u2v2+u3v3)=c(u1v1)+c(u2v2)+c(u3v3)=(cu1)v1+(cu2)v2+(cu3)v3=〈cu1,cu2,cu3〉·〈v1,v2,v3〉=c〈u1,u2,u3〉·〈v1,v2,v3〉=(cu)·v.c(u·v)=c(u1v1+u2v2+u3v3)=c(u1v1)+c(u2v2)+c(u3v3)=(cu1)v1+(cu2)v2+(cu3)v3=〈cu1,cu2,cu3〉·〈v1,v2,v3〉=c〈u1,u2,u3〉·〈v1,v2,v3〉=(cu)·v.$\n\nThe proof that $c(u·v)=u·(cv)c(u·v)=u·(cv)$ is similar.\n\nThe fourth property shows the relationship between the magnitude of a vector and its dot product with itself:\n\n$v·v=〈v1,v2,v3〉·〈v1,v2,v3〉=(v1)2+(v2)2+(v3)2=[(v1)2+(v2)2+(v3)2]2=‖v‖2.v·v=〈v1,v2,v3〉·〈v1,v2,v3〉=(v1)2+(v2)2+(v3)2=[(v1)2+(v2)2+(v3)2]2=‖v‖2.$\n\nNote that the definition of the dot product yields $0·v=0.0·v=0.$ By property iv., if $v·v=0,v·v=0,$ then $v=0.v=0.$\n\n### Example 2.22\n\n#### Using Properties of the Dot Product\n\nLet $a=〈1,2,−3〉,a=〈1,2,−3〉,$ $b=〈0,2,4〉,b=〈0,2,4〉,$ and $c=〈5,−1,3〉.c=〈5,−1,3〉.$ Find each of the following products.\n\n1. $(a·b)c(a·b)c$\n2. $a·(2c)a·(2c)$\n3. $‖b‖2‖b‖2$\n\n### Checkpoint2.22\n\nFind the following products for $p=〈7,0,2〉,p=〈7,0,2〉,$ $q=〈−2,2,−2〉,q=〈−2,2,−2〉,$ and $r=〈0,2,−3〉.r=〈0,2,−3〉.$\n\n1. $(r·p)q(r·p)q$\n2. $‖p‖2‖p‖2$\n\n### Using the Dot Product to Find the Angle between Two Vectors\n\nWhen two nonzero vectors are placed in standard position, whether in two dimensions or three dimensions, they form an angle between them (Figure 2.44). The dot product provides a way to find the measure of this angle. This property is a result of the fact that we can express the dot product in terms of the cosine of the angle formed by two vectors.\n\nFigure 2.44 Let θ be the angle between two nonzero vectors $uu$ and $vv$ such that $0≤θ≤π.0≤θ≤π.$\n\n### Theorem2.4\n\n#### Evaluating a Dot Product\n\nThe dot product of two vectors is the product of the magnitude of each vector and the cosine of the angle between them:\n\n$u·v=‖u‖‖v‖cosθ.u·v=‖u‖‖v‖cosθ.$\n(2.4)\n\n#### Proof\n\nPlace vectors $uu$ and $vv$ in standard position and consider the vector $v−uv−u$ (Figure 2.45). These three vectors form a triangle with side lengths $‖u‖,‖v‖,and‖v−u‖.‖u‖,‖v‖,and‖v−u‖.$\n\nFigure 2.45 The lengths of the sides of the triangle are given by the magnitudes of the vectors that form the triangle.\n\nRecall from trigonometry that the law of cosines describes the relationship among the side lengths of the triangle and the angle θ. 
Applying the law of cosines here gives\n\n$‖v−u‖2=‖u‖2+‖v‖2−2‖u‖‖v‖cosθ.‖v−u‖2=‖u‖2+‖v‖2−2‖u‖‖v‖cosθ.$\n\nThe dot product provides a way to rewrite the left side of this equation:\n\n$‖v−u‖2=(v−u)·(v−u)=(v−u)·v−(v−u)·u=v·v−u·v−v·u+u·u=v·v−u·v−u·v+u·u=‖v‖2−2u·v+‖u‖2.‖v−u‖2=(v−u)·(v−u)=(v−u)·v−(v−u)·u=v·v−u·v−v·u+u·u=v·v−u·v−u·v+u·u=‖v‖2−2u·v+‖u‖2.$\n\nSubstituting into the law of cosines yields\n\n$‖v−u‖2=‖u‖2+‖v‖2−2‖u‖‖v‖cosθ‖v‖2−2u·v+‖u‖2=‖u‖2+‖v‖2−2‖u‖‖v‖cosθ−2u·v=−2‖u‖‖v‖cosθu·v=‖u‖‖v‖cosθ.‖v−u‖2=‖u‖2+‖v‖2−2‖u‖‖v‖cosθ‖v‖2−2u·v+‖u‖2=‖u‖2+‖v‖2−2‖u‖‖v‖cosθ−2u·v=−2‖u‖‖v‖cosθu·v=‖u‖‖v‖cosθ.$\n\nWe can use this form of the dot product to find the measure of the angle between two nonzero vectors. The following equation rearranges Equation 2.3 to solve for the cosine of the angle:\n\n$cosθ=u·v‖u‖‖v‖.cosθ=u·v‖u‖‖v‖.$\n(2.5)\n\nUsing this equation, we can find the cosine of the angle between two nonzero vectors. Since we are considering the smallest angle between the vectors, we assume $0°≤θ≤180°0°≤θ≤180°$ (or $0≤θ≤π0≤θ≤π$ if we are working in radians). The inverse cosine is unique over this range, so we are then able to determine the measure of the angle $θ.θ.$\n\n### Example 2.23\n\n#### Finding the Angle between Two Vectors\n\nFind the measure of the angle between each pair of vectors.\n\n1. i + j + k and 2ij – 3k\n2. $〈2,5,6〉〈2,5,6〉$ and $〈−2,−4,4〉〈−2,−4,4〉$\n\n### Checkpoint2.23\n\nFind the measure of the angle, in radians, formed by vectors $a=〈1,2,0〉a=〈1,2,0〉$ and $b=〈2,4,1〉.b=〈2,4,1〉.$ Round to the nearest hundredth.\n\nThe angle between two vectors can be acute $(0 obtuse $(−1 or straight $(cosθ=−1).(cosθ=−1).$ If $cosθ=1,cosθ=1,$ then both vectors have the same direction. If $cosθ=0,cosθ=0,$ then the vectors, when placed in standard position, form a right angle (Figure 2.46). We can formalize this result into a theorem regarding orthogonal (perpendicular) vectors.\n\nFigure 2.46 (a) An acute angle has $0 (b) An obtuse angle has $−1 (c) A straight line has $cosθ=−1.cosθ=−1.$ (d) If the vectors have the same direction, $cosθ=1.cosθ=1.$ (e) If the vectors are orthogonal (perpendicular), $cosθ=0.cosθ=0.$\n\n### Theorem2.5\n\n#### Orthogonal Vectors\n\nThe nonzero vectors $uu$ and $vv$ are orthogonal vectors if and only if $u·v=0.u·v=0.$\n\n#### Proof\n\nLet $uu$ and $vv$ be nonzero vectors, and let $θθ$ denote the angle between them. First, assume $u·v=0.u·v=0.$ Then\n\n$‖u‖‖v‖cosθ=0.‖u‖‖v‖cosθ=0.$\n\nHowever, $‖u‖≠0‖u‖≠0$ and $‖v‖≠0,‖v‖≠0,$ so we must have $cosθ=0.cosθ=0.$ Hence, $θ=90°,θ=90°,$ and the vectors are orthogonal.\n\nNow assume $uu$ and $vv$ are orthogonal. Then $θ=90°θ=90°$ and we have\n\n$u·v=‖u‖‖v‖cosθ=‖u‖‖v‖cos90°=‖u‖‖v‖(0)=0.u·v=‖u‖‖v‖cosθ=‖u‖‖v‖cos90°=‖u‖‖v‖(0)=0.$\n\nThe terms orthogonal, perpendicular, and normal each indicate that mathematical objects are intersecting at right angles. The use of each term is determined mainly by its context. We say that vectors are orthogonal and lines are perpendicular. The term normal is used most often when measuring the angle made with a plane or other surface.\n\n### Example 2.24\n\n#### Identifying Orthogonal Vectors\n\nDetermine whether $p=〈1,0,5〉p=〈1,0,5〉$ and $q=〈10,3,−2〉q=〈10,3,−2〉$ are orthogonal vectors.\n\n### Checkpoint2.24\n\nFor which value of x is $p=〈2,8,−1〉p=〈2,8,−1〉$ orthogonal to $q=〈x,−1,2〉?q=〈x,−1,2〉?$\n\n### Example 2.25\n\n#### Measuring the Angle Formed by Two Vectors\n\nLet $v=〈2,3,3〉.v=〈2,3,3〉.$ Find the measures of the angles formed by the following vectors.\n\n1. $vv$ and i\n2. $vv$ and j\n3. 
$vv$ and k\n\n### Checkpoint2.25\n\nLet $v=〈3,−5,1〉.v=〈3,−5,1〉.$ Find the measure of the angles formed by each pair of vectors.\n\n1. $vv$ and i\n2. $vv$ and j\n3. $vv$ and k\n\nThe angle a vector makes with each of the coordinate axes, called a direction angle, is very important in practical computations, especially in a field such as engineering. For example, in astronautical engineering, the angle at which a rocket is launched must be determined very precisely. A very small error in the angle can lead to the rocket going hundreds of miles off course. Direction angles are often calculated by using the dot product and the cosines of the angles, called the direction cosines. Therefore, we define both these angles and their cosines.\n\n### Definition\n\nThe angles formed by a nonzero vector and the coordinate axes are called the direction angles for the vector (Figure 2.48). The cosines for these angles are called the direction cosines.\n\nFigure 2.48 Angle α is formed by vector $vv$ and unit vector i. Angle β is formed by vector $vv$ and unit vector j. Angle γ is formed by vector $vv$ and unit vector k.\n\nIn Example 2.25, the direction cosines of $v=〈2,3,3〉v=〈2,3,3〉$ are $cosα=222,cosα=222,$ $cosβ=322,cosβ=322,$ and $cosγ=322.cosγ=322.$ The direction angles of $vv$ are $α=1.130rad,α=1.130rad,$ $β=0.877rad,β=0.877rad,$ and $γ=0.877rad.γ=0.877rad.$\n\n### Projections\n\nAs we have seen, addition combines two vectors to create a resultant vector. But what if we are given a vector and we need to find its component parts? We use vector projections to perform the opposite process; they can break down a vector into its components. The magnitude of a vector projection is a scalar projection. For example, if a child is pulling the handle of a wagon at a 55° angle, we can use projections to determine how much of the force on the handle is actually moving the wagon forward (Figure 2.49). We return to this example and learn how to solve it after we see how to calculate projections.\n\nFigure 2.49 When a child pulls a wagon, only the horizontal component of the force propels the wagon forward.\n\n### Definition\n\nThe vector projection of $vv$ onto $uu$ is the vector labeled projuv in Figure 2.50. It has the same initial point as $uu$ and $vv$ and the same direction as $uu$, and represents the component of $vv$ that acts in the direction of $uu$. If $θθ$ represents the angle between $uu$ and $vv$, then, by properties of triangles, we know the length of $projuvprojuv$ is $‖projuv‖=‖v‖cosθ.‖projuv‖=‖v‖cosθ.$ Note that when the angle $θθ$ between $uu$ and $vv$ is an obtuse angle, the projection will be in the opposite direction of $uu$. When expressing $cosθcosθ$ in terms of the dot product, this becomes\n\n$‖projuv‖=‖v‖cosθ=‖v‖(|u·v|‖u‖‖v‖)=|u·v|‖u‖.‖projuv‖=‖v‖cosθ=‖v‖(|u·v|‖u‖‖v‖)=|u·v|‖u‖.$\n\nWe now multiply by a unit vector in the direction of $uu$ to get $projuv:projuv:$\n\n$projuv=u·v‖u‖(1‖u‖u)=u·v‖u‖2u.projuv=u·v‖u‖(1‖u‖u)=u·v‖u‖2u.$\n(2.6)\n\nThe length of this vector is also known as the scalar projection of $vv$ onto $uu$ and is denoted by\n\n$‖projuv‖=compuv=u·v‖u‖.‖projuv‖=compuv=u·v‖u‖.$\n(2.7)\nFigure 2.50 The projection of $vv$ onto $uu$ shows the component of vector $vv$ in the direction of $uu$.\n\n### Example 2.27\n\n#### Finding Projections\n\nFind the projection of $vv$ onto u.\n\n1. $v=〈3,5,1〉v=〈3,5,1〉$ and $u=〈−1,4,3〉u=〈−1,4,3〉$\n2. $v=3i−2jv=3i−2j$ and $u=i+6ju=i+6j$\n\nSometimes it is useful to decompose vectors—that is, to break a vector apart into a sum. 
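Before turning to that decomposition, here is a short computational sketch of the projection formula (an illustration added here, not part of the original text); applied to the vectors of Example 2.27, part 1, it uses $u·v=20$ and $‖u‖^2=26$.

```python
def dot(u, v):
    """Componentwise dot product."""
    return sum(a * b for a, b in zip(u, v))

def proj(u, v):
    """Vector projection of v onto u: (u·v / ‖u‖²) u."""
    scale = dot(u, v) / dot(u, u)
    return tuple(scale * a for a in u)

u = (-1, 4, 3)
v = (3, 5, 1)
print(proj(u, v))                    # (-10/13, 40/13, 30/13), since u·v / ‖u‖² = 20/26
print(dot(u, v) / dot(u, u) ** 0.5)  # scalar projection comp_u v = 20 / sqrt(26)
```

Subtracting this projection from $v$ leaves a vector orthogonal to $u$, which is exactly the kind of splitting described next.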
Sometimes it is useful to decompose vectors—that is, to break a vector apart into a sum. This process is called the resolution of a vector into components. Projections allow us to identify two orthogonal vectors having a desired sum. For example, let $v=〈6,−4〉$ and let $u=〈3,1〉.$ We want to decompose the vector $v$ into orthogonal components such that one of the component vectors has the same direction as $u$.\n\n
We first find the component that has the same direction as $u$ by projecting $v$ onto $u$. Let $p=proj_u v.$ Then, we have\n\n$p=\frac{u·v}{‖u‖^2}u=\frac{18−4}{9+1}u=\frac{7}{5}u=\frac{7}{5}〈3,1〉=〈\frac{21}{5},\frac{7}{5}〉.$\n\n
Now consider the vector $q=v−p.$ We have\n\n$q=v−p=〈6,−4〉−〈\frac{21}{5},\frac{7}{5}〉=〈\frac{9}{5},−\frac{27}{5}〉.$\n\n
Clearly, by the way we defined $q$, we have $v=q+p,$ and\n\n$q·p=〈\frac{9}{5},−\frac{27}{5}〉·〈\frac{21}{5},\frac{7}{5}〉=\frac{9(21)}{25}+\frac{−27(7)}{25}=\frac{189}{25}−\frac{189}{25}=0.$\n\nTherefore, $q$ and p are orthogonal.\n\n
### Example 2.28\n\n#### Resolving Vectors into Components\n\nExpress $v=〈8,−3,−3〉$ as a sum of orthogonal vectors such that one of the vectors has the same direction as $u=〈2,3,2〉.$\n\n
### Checkpoint 2.27\n\nExpress $v=5i−j$ as a sum of orthogonal vectors such that one of the vectors has the same direction as $u=4i+2j.$\n\n
### Example 2.29\n\n#### Scalar Projection of Velocity\n\nA container ship leaves port traveling $15°$ north of east. Its engine generates a speed of 20 knots along that path (see the following figure). In addition, the ocean current moves the ship northeast at a speed of 2 knots. Considering both the engine and the current, how fast is the ship moving in the direction $15°$ north of east? Round the answer to two decimal places.", null, "### Checkpoint 2.28\n\nRepeat the previous example, but assume the ocean current is moving southeast instead of northeast, as shown in the following figure.
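One way to set up Example 2.29, as a sketch: the heading is 15° north of east and a northeast current points 45° north of east, so the angle between the current and the heading is $45°−15°=30°.$ The scalar projection of the 2-knot current onto the heading is $comp_u v=‖v‖cos θ=2 cos 30°=\sqrt{3}≈1.73$ knots, so the combined speed in that direction is approximately $20+1.73=21.73$ knots. The checkpoint is handled the same way with the current direction changed.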
", null, "### Work\n\nNow that we understand dot products, we can see how to apply them to real-life situations. The most common application of the dot product of two vectors is in the calculation of work.\n\n
From physics, we know that work is done when an object is moved by a force. When the force is constant and applied in the same direction the object moves, then we define the work done as the product of the force and the distance the object travels: $W=Fd.$ We saw several examples of this type in earlier chapters. Now imagine the direction of the force is different from the direction of motion, as with the example of a child pulling a wagon. To find the work done, we need to multiply the component of the force that acts in the direction of the motion by the magnitude of the displacement. The dot product allows us to do just that. If we represent an applied force by a vector F and the displacement of an object by a vector s, then the work done by the force is the dot product of F and s.\n\n
### Definition\n\nWhen a constant force is applied to an object so the object moves in a straight line from point P to point Q, the work W done by the force F, acting at an angle θ from the line of motion, is given by\n\n$W=F·PQ→=‖F‖‖PQ→‖cos θ.$\n(2.8)\n\n
Let’s revisit the problem of the child’s wagon introduced earlier. Suppose a child is pulling a wagon with a force having a magnitude of 8 lb on the handle at an angle of 55°. If the child pulls the wagon 50 ft, find the work done by the force (Figure 2.51).\n\n
Figure 2.51 The horizontal component of the force is the projection of F onto the positive x-axis.\n\nWe have\n\n$W=‖F‖‖PQ→‖cos θ=8(50)(cos 55°)≈229$ ft·lb.\n\n
In U.S. standard units, we measure the magnitude of force $‖F‖$ in pounds. The magnitude of the displacement vector $‖PQ→‖$ tells us how far the object moved, and it is measured in feet. The customary unit of measure for work, then, is the foot-pound. One foot-pound is the amount of work required to move an object weighing 1 lb a distance of 1 ft straight up. In the metric system, the unit of measure for force is the newton (N), and the unit of measure of magnitude for work is a newton-meter (N·m), or a joule (J).\n\n
### Example 2.30\n\n#### Calculating Work\n\nA conveyor belt generates a force $F=5i−3j+k$ that moves a suitcase from point $(1,1,1)$ to point $(9,4,7)$ along a straight line. Find the work done by the conveyor belt. The distance is measured in meters and the force is measured in newtons.\n\n
### Checkpoint 2.29\n\nA constant force of 30 lb is applied at an angle of 60° to pull a handcart 10 ft across the ground (Figure 2.52). What is the work done by this force?\n\nFigure 2.52\n\n
### Section 2.3 Exercises\n\nFor the following exercises, the vectors $u$ and $v$ are given. Calculate the dot product $u·v.$\n\n
123.\n\n$u=〈3,0〉,$ $v=〈2,2〉$\n\n124.\n\n$u=〈3,−4〉,$ $v=〈4,3〉$\n\n125.\n\n$u=〈2,2,−1〉,$ $v=〈−1,2,2〉$\n\n126.\n\n$u=〈4,5,−6〉,$ $v=〈0,−2,−3〉$\n\n
For the following exercises, the vectors a, b, and c are given. Determine the vectors $(a·b)c$ and $(a·c)b.$ Express the vectors in component form.\n\n
127.\n\n$a=〈2,0,−3〉,$ $b=〈−4,−7,1〉,$ $c=〈1,1,−1〉$\n\n128.\n\n$a=〈0,1,2〉,$ $b=〈−1,0,1〉,$ $c=〈1,0,−1〉$\n\n129.\n\n$a=i+j,$ $b=i−k,$ $c=i−2k$\n\n130.\n\n$a=i−j+k,$ $b=j+3k,$ $c=−i+2j−4k$\n\n
For the following exercises, the two-dimensional vectors a and b are given.\n\n1. Find the measure of the angle $θ$ between a and b. Express the answer in radians rounded to two decimal places, if it is not possible to express it exactly.\n2. Is $θ$ an acute angle?\n\n
131.\n\n[T] $a=〈3,−1〉,$ $b=〈−4,0〉$\n\n132.\n\n[T] $a=〈2,1〉,$ $b=〈−1,3〉$\n\n133.\n\n$u=3i,$ $v=4i+4j$\n\n134.\n\n$u=5i,$ $v=−6i+6j$\n\n
For the following exercises, find the measure of the angle between the three-dimensional vectors a and b. Express the answer in radians rounded to two decimal places, if it is not possible to express it exactly.\n\n
135.\n\n$a=〈3,−1,2〉,$ $b=〈1,−1,−2〉$\n\n136.\n\n$a=〈0,−1,−3〉,$ $b=〈2,3,−1〉$\n\n137.\n\n$a=i+j,$ $b=j−k$\n\n138.\n\n$a=i−2j+k,$ $b=i+j−2k$\n\n
139.\n\n[T] $a=3i−j−2k,$ $b=v+w,$ where $v=−2i−3j+2k$ and $w=i+2k$\n\n140.\n\n[T] $a=3i−j+2k,$ $b=v−w,$ where $v=2i+j+4k$ and $w=6i+j+2k$\n\n
For the following exercises determine whether the given vectors are orthogonal.\n\n
141.\n\n$a=〈x,y〉,$ $b=〈−y,x〉,$ where x and y are nonzero real numbers\n\n142.\n\n$a=〈x,x〉,$ $b=〈−y,y〉,$ where x and y are nonzero real numbers\n\n143.\n\n$a=3i−j−2k,$ $b=−2i−3j+k$\n\n144.\n\n$a=i−j,$ $b=7i+2j−k$\n\n
145.\n\nFind all two-dimensional vectors a orthogonal to vector $b=〈3,4〉.$ Express the answer in component form.\n\n146.\n\nFind all two-dimensional vectors a orthogonal to vector $b=〈5,−6〉.$ Express the answer by using standard unit vectors.\n\n147.\n\nDetermine all three-dimensional vectors $u$ orthogonal to vector $v=〈1,1,0〉.$ Express the answer by using standard unit vectors.\n\n148.\n\nDetermine all three-dimensional vectors $u$ orthogonal to vector $v=i−j−k.$ Express the answer in component form.\n\n
149.\n\nDetermine the real number $α$ such that vectors $a=2i+3j$ and $b=9i+αj$ are orthogonal.\n\n150.\n\nDetermine the real number $α$ such that vectors $a=−3i+2j$ and $b=2i+αj$ are orthogonal.\n\n
151.\n\n[T] Consider the points $P(4,5)$ and $Q(5,−7).$\n\n1. Determine vectors $OP→$ and $OQ→.$ Express the answer by using standard unit vectors.\n2. Determine the measure of angle O in triangle OPQ. Express the answer in degrees rounded to two decimal places.\n\n
152.\n\n[T] Consider points $A(1,1),$ $B(2,−7),$ and $C(6,3).$\n\n1. Determine vectors $BA→$ and $BC→.$ Express the answer in component form.\n2. Determine the measure of angle B in triangle ABC. Express the answer in degrees rounded to two decimal places.\n\n
153.\n\nDetermine the measure of angle A in triangle ABC, where $A(1,1,8),$ $B(4,−3,−4),$ and $C(−3,1,5).$ Express your answer in degrees rounded to two decimal places.\n\n154.\n\nConsider points $P(3,7,−2)$ and $Q(1,1,−3).$ Determine the angle between vectors $OP→$ and $OQ→.$ Express the answer in degrees rounded to two decimal places.\n\n
For the following exercises, determine which (if any) pairs of the following vectors are orthogonal.\n\n155.\n\n$u=〈3,7,−2〉,$ $v=〈5,−3,−3〉,$ $w=〈0,1,−1〉$\n\n156.\n\n$u=i−k,$ $v=5j−5k,$ $w=10j$\n\n
157.\n\nUse vectors to show that a parallelogram with equal diagonals is a rectangle.\n\n158.\n\nUse vectors to show that the diagonals of a rhombus are perpendicular.\n\n159.\n\nShow that $u·(v+w)=u·v+u·w$ is true for any vectors $u$, $v$, and $w$.\n\n160.\n\nVerify the identity $u·(v+w)=u·v+u·w$ for vectors $u=〈1,0,4〉,$ $v=〈−2,3,5〉,$ and $w=〈4,−2,6〉.$\n\n
For the following problems, the vector $u$ is given.\n\n1. Find the direction cosines for the vector $u$.\n2. Find the direction angles for the vector $u$ expressed in degrees. (Round the answer to the nearest integer.)\n\n
161.\n\n$u=〈2,2,1〉$\n\n162.\n\n$u=i−2j+2k$\n\n163.\n\n$u=〈−1,5,2〉$\n\n164.\n\n$u=〈2,3,4〉$\n\n
165.\n\nConsider $u=〈a,b,c〉$ a nonzero three-dimensional vector. Let $cos α,$ $cos β,$ and $cos γ$ be the direction cosines of $u$. Show that $cos^2 α+cos^2 β+cos^2 γ=1.$\n\n166.\n\nDetermine the direction cosines of vector $u=i+2j+2k$ and show they satisfy $cos^2 α+cos^2 β+cos^2 γ=1.$\n\n
For the following exercises, the vectors $u$ and $v$ are given.\n\n1. Find the vector projection $w=proj_u v$ of vector $v$ onto vector $u$. Express your answer in component form.\n2. Find the scalar projection $comp_u v$ of vector $v$ onto vector u.\n\n
167.\n\n$u=5i+2j,$ $v=2i+3j$\n\n168.\n\n$u=〈−4,7〉,$ $v=〈3,5〉$\n\n169.\n\n$u=3i+2k,$ $v=2j+4k$\n\n170.\n\n$u=〈4,4,0〉,$ $v=〈0,4,1〉$\n\n
171.\n\nConsider the vectors $u=4i−3j$ and $v=3i+2j.$\n\n1. Find the component form of vector $w=proj_u v$ that represents the projection of $v$ onto $u$.\n2. Write the decomposition $v=w+q$ of vector $v$ into the orthogonal components $w$ and $q$, where $w$ is the projection of $v$ onto $u$ and $q$ is a vector orthogonal to the direction of $u$.\n\n
172.\n\nConsider vectors $u=2i+4j$ and $v=4j+2k.$\n\n1. Find the component form of vector $w=proj_u v$ that represents the projection of $v$ onto $u$.\n2. Write the decomposition $v=w+q$ of vector $v$ into the orthogonal components $w$ and $q$, where $w$ is the projection of $v$ onto $u$ and $q$ is a vector orthogonal to the direction of $u$.\n\n
173.\n\nA methane molecule has a carbon atom situated at the origin and four hydrogen atoms located at points $P(1,1,−1),$ $Q(1,−1,1),$ $R(−1,1,1),$ and $S(−1,−1,−1)$ (see figure).\n\n1. Find the distance between the hydrogen atoms located at P and R.\n2. Find the angle between vectors $OS→$ and $OR→$ that connect the carbon atom with the hydrogen atoms located at S and R, which is also called the bond angle. Express the answer in degrees rounded to two decimal places.", null, "174.\n\n[T] Find the vectors that join the center of a clock to the hours 1:00, 2:00, and 3:00. Assume the clock is circular with a radius of 1 unit.\n\n
175.\n\nFind the work done by force $F=〈5,6,−2〉$ (measured in Newtons) that moves a particle from point $P(3,−1,0)$ to point $Q(2,3,1)$ along a straight line (the distance is measured in meters).\n\n176.\n\n[T] A sled is pulled by exerting a force of 100 N on a rope that makes an angle of $25°$ with the horizontal. Find the work done in pulling the sled 40 m. (Round the answer to one decimal place.)\n\n
177.\n\n[T] A father is pulling his son on a sled at an angle of $20°$ with the horizontal with a force of 25 lb (see the following image). He pulls the sled in a straight path of 50 ft. How much work was done by the man pulling the sled? (Round the answer to the nearest integer.)", null, "178.\n\n[T] A car is towed using a force of 1600 N. The rope used to pull the car makes an angle of 25° with the horizontal. Find the work done in towing the car 2 km.
Express the answer in joules $(1 J=1 N·m)$ rounded to the nearest integer.\n\n
179.\n\n[T] A boat sails north aided by a wind blowing in a direction of $N30°E$ with a magnitude of 500 lb. How much work is performed by the wind as the boat moves 100 ft? (Round the answer to two decimal places.)\n\n
180.\n\nVector $p=〈150,225,375〉$ represents the price of certain models of bicycles sold by a bicycle shop. Vector $n=〈10,7,9〉$ represents the number of bicycles sold of each model, respectively. Compute the dot product $p·n$ and state its meaning.\n\n
181.\n\n[T] Two forces $F_1$ and $F_2$ are represented by vectors with initial points that are at the origin. The first force has a magnitude of 20 lb and the terminal point of the vector is point $P(1,1,0).$ The second force has a magnitude of 40 lb and the terminal point of its vector is point $Q(0,1,1).$ Let F be the resultant force of forces $F_1$ and $F_2.$\n\n1. Find the magnitude of F. (Round the answer to one decimal place.)\n2. Find the direction angles of F. (Express the answer in degrees rounded to one decimal place.)\n\n
182.\n\n[T] Consider $r(t)=〈cos t,sin t,2t〉$ the position vector of a particle at time $t∈[0,30],$ where the components of r are expressed in centimeters and time in seconds. Let $OP→$ be the position vector of the particle after 1 sec.\n\n1. Show that all vectors $PQ→,$ where $Q(x,y,z)$ is an arbitrary point, orthogonal to the instantaneous velocity vector $v(1)$ of the particle after 1 sec, can be expressed as $PQ→=〈x−cos 1,y−sin 1,z−2〉,$ where $x sin 1−y cos 1−2z+4=0.$ The set of points Q describes a plane called the normal plane to the path of the particle at point P.\n2. Use a CAS to visualize the instantaneous velocity vector and the normal plane at point P along with the path of the particle." ]
[ null, "https://openstax.org/apps/image-cdn/v1/f=webp/apps/archive/20230828.164620/resources/d318306a2fa412393bd9568f75f530a3a9f38b43", null, "https://openstax.org/apps/image-cdn/v1/f=webp/apps/archive/20230828.164620/resources/e4378fee157d4e003bf43223b07874f89b2fa396", null, "https://openstax.org/apps/image-cdn/v1/f=webp/apps/archive/20230828.164620/resources/edb80c3da17dcef33915667bc542ec2f4896c18a", null, "https://openstax.org/apps/image-cdn/v1/f=webp/apps/archive/20230828.164620/resources/3bd5999ff18fdcc699bea2e64111ba0357b3c207", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92696154,"math_prob":0.9997198,"size":23773,"snap":"2023-40-2023-50","text_gpt3_token_len":5153,"char_repetition_ratio":0.18932223,"word_repetition_ratio":0.11446067,"special_character_ratio":0.21684264,"punctuation_ratio":0.098687,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999682,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T18:24:14Z\",\"WARC-Record-ID\":\"<urn:uuid:9de764bb-5b2e-4118-b0fa-1378588a3e64>\",\"Content-Length\":\"653918\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4ac29b8a-a3f4-4626-a0cc-197593674b84>\",\"WARC-Concurrent-To\":\"<urn:uuid:bacf62a6-52eb-44d9-bbc0-2b6f6e7ec538>\",\"WARC-IP-Address\":\"99.84.191.66\",\"WARC-Target-URI\":\"https://openstax.org/books/calculus-volume-3/pages/2-3-the-dot-product\",\"WARC-Payload-Digest\":\"sha1:XJX65X7PFEWZKDTLOMO5IUPQZT4FFVRC\",\"WARC-Block-Digest\":\"sha1:CWXX6KI2KQHA6T64TSB2JEY4DO6SWXNZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506421.14_warc_CC-MAIN-20230922170343-20230922200343-00583.warc.gz\"}"}
https://edusaint.com/study/courses/class-9th-ncert-science-course/lessons/gravitation-class-9th-physics-ncert-courses/topic/mass-and-weight-explained/
[ "# Mass and Weight Explained\n\n## Mass and Weight Explained", null, "", null, "MASS\n\n• The mass of a body is the quantity of matter (or material) contained in it.\n• Mass is a scalar quantity which has only magnitude but no direction.\n• The mass of a body (or object) is commonly measured by an equal arm balance.\n• The SI unit of mass is kilogram which is written in short form as kg.\n\nA body contains the same quantity of matter wherever it be—whether on earth, moon or even in outer space.\n\n• So, the mass of an object is the same everywhere.\n• For example, if the mass of an object is 5 kilograms on the earth, then it will have the same mass of 5 kilograms even when it is taken to any other planet, or moon, or in outer space.\n• Thus, the mass of a body (or object) is constant and does not change from place to place.\n• Mass of a body is usually denoted by the small ‘m’.\n• Mass of a body is a measure of inertia of the body and it is also known as inertial mass.\n• The mass of a body cannot be zero.\n\nWEIGHT\n\n• The earth attracts everybody (or object) towards its centre with a certain force which depends on the mass of the body and the acceleration due to gravity at that place.\n• The weight of a body is the force with which it is attracted towards the centre of the earth.\n• In other words, the force of earth’s gravity acting on a body is known as its weight.\n• We know that, Force = mass × acceleration\n• The acceleration produced by the force of attraction of the earth is known as acceleration due to gravity and written as ‘g’.\n• Thus, the downward force acting on a body of mass ‘m’ is given by :\n\nForce = mass × acceleration due to gravity\n\nor Force = m × g\n\n### ACCELERATION DUE TO GRAVITY\n\n• When an object is dropped from some height, its velocity increases at a constant rate.\n• In other words, when an object is dropped from some height, a uniform acceleration is produced in it by the gravitational pull of the earth and this acceleration does not depend on the mass of the falling object.\n• The uniform acceleration produced in a freely falling body due to the gravitational force of the earth is known as acceleration due to gravity and it is denoted by the letter g.\n• When a body is dropped freely, it falls with an acceleration of 9.8 m/s2 and when a body is thrown vertically upwards, it undergoes a retardation of 9.8 m/s2.\n• So, the velocity of a body thrown vertically upwards will decrease at the rate of 9.8 m/s2.\n• The velocity decreases until it reaches zero. The body then falls back to the earth like any other body dropped from that height.\n\n### Variation of gravity at different planets and space", null, "• Gravity is a fundamental force of physics, one which we Earthlings tend to take for granted. You can’t really blame us.\n• Having evolved over the course of billions of years in Earth’s environment, we are used to living with the pull of a steady 1 g (or 9.8 m/s2). 
### Variation of gravity at different planets and space", null, "• Gravity is a fundamental force of physics, one which we Earthlings tend to take for granted. You can't really blame us.\n• Having evolved over the course of billions of years in Earth's environment, we are used to living with the pull of a steady 1 g (or 9.8 m/s²). However, for those who have gone into space or set foot on the Moon, gravity is a very tenuous and precious thing.\n• Basically, gravity is dependent on mass, where all things – from stars, planets, and galaxies to light and sub-atomic particles – are attracted to one another.\n• Depending on the size, mass, and density of the object, the gravitational force it exerts varies.\n• And when it comes to the planets of our solar system, which vary in size and mass, the strength of gravity on their surfaces varies considerably.\n• For example, Earth's gravity, as already noted, is equivalent to 9.80665 m/s² (or 32.174 ft/s²).\n• This means that an object, if held above the ground and let go, will accelerate towards the surface, gaining about 9.8 meters per second of speed for every second of free fall.\n• This is the standard for measuring gravity on other planets, which is also expressed as a single g.\n• In accordance with Isaac Newton's law of universal gravitation, the gravitational attraction between two bodies can be expressed mathematically as F = G(m1m2/r²) – where F is the force, m1 and m2 are the masses of the objects interacting, r is the distance between the centers of the masses and G is the gravitational constant (6.674×10⁻¹¹ N·m²/kg²).\n\n
Variation of "g" as height increases.\n\n• Please note that the value of acceleration due to gravity, g, is not constant at all the places on the surface of the earth.\n• This is due to the fact that the earth is not a perfect sphere, so the value of its radius R is not the same at all the places on its surface.\n• In other words, due to the flattening of the earth at the poles, all the places on its surface are not at the same distance from its center and so the value of g varies with latitude.\n• Since the radius of the earth at the poles is minimum, the value of g is maximum at the poles.\n• Again, the radius of the earth is maximum at the equator, so the value of g is minimum at the equator (because the radius occurs in the denominator of the formula for g).\n• We find that the value of g is inversely proportional to the square of the distance from the center of the earth.\n• Now, as we go up from the surface of the earth, the distance from the center of the earth increases, and hence the value of g decreases (because R increases in this case).\n\n
The value of acceleration due to gravity, g, decreases with altitude above the surface of the earth:\n\n• At 200 km: 9.23 m/s²\n• At 1,000 km: 7.34 m/s²\n• At 5,000 km: 3.08 m/s²\n• At 10,000 km: 1.49 m/s²\n• At 20,000 km: 0.57 m/s²\n• At 30,000 km: 0.30 m/s²" ]
[ null, "https://edusaint.com/study/ugrakees/2020/11/mass-and-weight.png", null, "https://edusaint.com/study/ugrakees/2020/11/mass-weight.jpg", null, "https://edusaint.com/study/ugrakees/2020/08/ow-strong-is-gravity-on-other-planets.jpeg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9469671,"math_prob":0.98884016,"size":5133,"snap":"2021-43-2021-49","text_gpt3_token_len":1195,"char_repetition_ratio":0.15383115,"word_repetition_ratio":0.06504065,"special_character_ratio":0.23865186,"punctuation_ratio":0.09254013,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9962379,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-16T06:56:25Z\",\"WARC-Record-ID\":\"<urn:uuid:c5cd4471-fe73-4eeb-8b92-e0725f4b7c00>\",\"Content-Length\":\"172782\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae4dd5bb-9660-4ea9-8574-95d1928db469>\",\"WARC-Concurrent-To\":\"<urn:uuid:c513b9c7-cc3d-4200-9c48-382a977ccd2b>\",\"WARC-IP-Address\":\"103.129.97.244\",\"WARC-Target-URI\":\"https://edusaint.com/study/courses/class-9th-ncert-science-course/lessons/gravitation-class-9th-physics-ncert-courses/topic/mass-and-weight-explained/\",\"WARC-Payload-Digest\":\"sha1:UR5BY7EZYZM2SMT2CDD7HYJSCNII2UMR\",\"WARC-Block-Digest\":\"sha1:5EDRWOC5KNHOU3SEVXJR2RKPSKOGTOHV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323583423.96_warc_CC-MAIN-20211016043926-20211016073926-00144.warc.gz\"}"}
https://www.wolframalpha.com/examples/mathematics/trigonometry/
[ "", null, "", null, "Browse examples\n\n# Trigonometry\n\nTrigonometry is the study of the relationships between side lengths and angles of triangles and the applications of these relationships. The field is fundamental to mathematics, engineering and a wide variety of sciences. Wolfram|Alpha has comprehensive functionality in the area and is able to compute values of trigonometric functions, solve equations involving trigonometry and more.\n\nTrigonometric Calculations\n\nEvaluate trigonometric functions or larger expressions involving trigonometric functions with different input values.\n\nCompute values of trigonometric functions:\n\nCompute values of inverse trigonometric functions:\n\nMore examples\n\nTrigonometric Functions\n\nLearn about and perform computations using trigonometric functions and their inverses, over the real or complex numbers.\n\nCompute properties of a trigonometric function:\n\nCompute properties of an inverse trigonometric function:\n\nPlot a trigonometric function:\n\nAnalyze a trigonometric function of a complex variable:\n\nAnalyze a trigonometric polynomial:\n\nGenerate a table of special values of a function:\n\nCompute the root mean square of a periodic function:\n\nMore examples\n\nTrigonometric Identities\n\nLearn about and apply well-known trigonometric identities.\n\nFind multiple-angle formulas:\n\nFind addition formulas:\n\nFind other trig identities:\n\nMore examples\n\nTrigonometric Equations\n\nSolve equations involving trigonometric functions.\n\nSolve a trigonometric equation:\n\nMore examples\n\nTrigonometric Theorems\n\nLearn about and apply well-known trigonometric theorems.\n\nApply a trigonometric theorem:\n\nApply the Pythagorean theorem:\n\nMore examples\n\nSpherical Trigonometry\n\nStudy the relationships between side lengths and angles of triangles when these triangles are drawn atop a spherical surface.\n\nApply a theorem of spherical trigonometry:\n\nMore examples" ]
[ null, "https://www.wolframcdn.com/examples/proPages/subpageImages/grid-lg.svg", null, "https://www.wolframcdn.com/examples/proPages/subpageImages/grid-sm.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8828501,"math_prob":0.99767697,"size":531,"snap":"2021-04-2021-17","text_gpt3_token_len":93,"char_repetition_ratio":0.18026565,"word_repetition_ratio":0.0,"special_character_ratio":0.14312617,"punctuation_ratio":0.077922076,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99986064,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-22T13:30:27Z\",\"WARC-Record-ID\":\"<urn:uuid:e14b89ce-b5e6-4906-ae6a-7cab474fd2f6>\",\"Content-Length\":\"57127\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3442e824-b665-4986-89df-41c0b3e95744>\",\"WARC-Concurrent-To\":\"<urn:uuid:bd33de34-6cf4-45af-996f-af2cad6ea362>\",\"WARC-IP-Address\":\"140.177.16.37\",\"WARC-Target-URI\":\"https://www.wolframalpha.com/examples/mathematics/trigonometry/\",\"WARC-Payload-Digest\":\"sha1:Q2TOGRNRWZNZBJM24KTLL4VDVHXVFX3F\",\"WARC-Block-Digest\":\"sha1:TWGRALZUPKYYWDNHMWBEJPNLYHZRVI2R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039610090.97_warc_CC-MAIN-20210422130245-20210422160245-00068.warc.gz\"}"}
https://gist.github.com/jonnylaw/99460d466c84d9b52b57010d28b6a4f6
# jonnylaw/ErrorHandling.scala\n\nCreated Jan 5, 2017\n\nimport scala.util.{Try, Success, Failure}\nimport cats.implicits._\nimport cats.data.OptionT\n\nobject ErrorHandling {\n\n  // Calculate the square root of a positive number or return an exception (which isn't obvious from the return type)\n  def unsafe_sqrt(a: Double): Double = {\n    if (a > 0) math.sqrt(a)\n    else throw new Exception(\"Can't calculate square root of negative number\")\n  }\n\n  // Calculate the square root of a positive number and return either a Success containing the result\n  // or a Failure containing an exception (if the number supplied is negative)\n  // This makes it clear to users that this function can fail\n  def try_sqrt(a: Double): Try[Double] = {\n    if (a > 0) Success(math.sqrt(a))\n    else Failure(new Exception(\"Can't calculate square root of negative number\"))\n  }\n\n  // Calculate the square root of a positive number and return an optional value\n  def option_sqrt(a: Double): Option[Double] = {\n    if (a > 0) Some(math.sqrt(a))\n    else None\n  }\n\n  // In order to chain operations we can use compose on the function unsafe_sqrt\n  // this is because the function has compatible types (Double => Double)\n  def sqrt_twice = unsafe_sqrt _ compose unsafe_sqrt _\n\n  // In order to chain the option_sqrt function, we need to invoke flatMap, since the\n  // types are not compatible\n  def sqrt_twice_option(x: Double): Option[Double] = option_sqrt(x) flatMap option_sqrt\n\n  // A naive attempt to compose a try and an option using nested type constructors\n  // This appears to work, but becomes more difficult if we want to work with the return type\n  def sqrt_twice_2(a: Double): Try[Option[Double]] = try_sqrt(a) map option_sqrt\n\n  // Another function we want to apply to the result of sqrt_twice_2\n  def f(a: Double) = a + 1\n\n  // Function which applies f to the result of sqrt_twice_2\n  def apply_f = sqrt_twice_2(81) map (_.map(f))\n\n  // We can remove the extra map function by utilising OptionT from cats (http://typelevel.org/cats/),\n  // a monad transformer for Option\n  def sqrt_twice_trans(a: Double): OptionT[Try, Double] =\n    OptionT.fromOption[Try](option_sqrt(a)) flatMap (b => OptionT.liftF(try_sqrt(b)))\n\n  def apply_f_trans = sqrt_twice_trans(81) map f\n}
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7046442,"math_prob":0.97922903,"size":2463,"snap":"2021-21-2021-25","text_gpt3_token_len":642,"char_repetition_ratio":0.15778773,"word_repetition_ratio":0.17505996,"special_character_ratio":0.30613074,"punctuation_ratio":0.09476309,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9988491,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-23T11:13:53Z\",\"WARC-Record-ID\":\"<urn:uuid:0d217f2d-eff6-4604-b6b1-4b6c3c6acb91>\",\"Content-Length\":\"81656\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6f54e45-b62b-49cf-b3c1-30965a3fd875>\",\"WARC-Concurrent-To\":\"<urn:uuid:a9df735d-aa6d-4cbe-a00b-b0a4a59d4628>\",\"WARC-IP-Address\":\"140.82.113.4\",\"WARC-Target-URI\":\"https://gist.github.com/jonnylaw/99460d466c84d9b52b57010d28b6a4f6\",\"WARC-Payload-Digest\":\"sha1:64BVRSYE7TKAFITCSBBPZ6VO4QAFLHHF\",\"WARC-Block-Digest\":\"sha1:4YEPD45NQUYAYL325ODGBHW3BQWECA77\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488538041.86_warc_CC-MAIN-20210623103524-20210623133524-00528.warc.gz\"}"}
https://math.stackexchange.com/questions/3323661/why-are-equivalence-relations-and-partitions-interchangeable
# Why are equivalence relations and partitions interchangeable?\n\nI am reviewing the foundation course I took in year 1 and I found out that I still don't fully understand it.\n\n
I understand the fundamental theorem of equivalence relations, i.e. if there is an equivalence relation on a set S, then the set {Ex} of all equivalence classes with respect to the equivalence relation is a partition of set S.\n\n
Basically, we can show that (i) if x is related to y, then {Ex}={Ey} by using the symmetry and transitivity of equivalence relations. We can also show that (ii) if {Ex} and {Ey} have an element other than the empty set in common, then {Ex}={Ey} by using the symmetry, the transitivity and (i). Hence, we can show that each element x of set S belongs to one equivalence class. If two equivalence classes have the element x other than the empty set in common, they are the same. Therefore, each element x of set S belongs to one and only one equivalence class, which means the set of all equivalence classes is a partition of set S.\n\n
The part that I don't understand is when I start with a partition of a set S. We can define a relation on a set S such that "x~y" means "x and y belong to a subset of set S". The reflexivity is trivial. The symmetry is also consistent since "x and y belong to a subset of set S" means exactly the same as "y and x...". The transitivity is also true here because the partition decides that x, y, z belong to one and only one subset of set S if x~y, y~z. Equivalence classes are subsets of set S and each element x of set S belongs to one and only one equivalence class, which is also a unique subset of set S because of (ii).\n\n
I have skipped quite a few details in the proof of the theorem I know.\n\nAs for the second proof, I feel like this is the best I can do but I am not certain about the last part of the proof I have written. Is there anything else that I can add into my proof to make it flawless?\n\nThank you so much!\n\nRegards,\n\n
• You say "x and y belong to a subset of set S". But what you need to say is "$x$ and $y$ belong to the same part of the given partition." Then the only difficult part is, as you say, transitivity. But that is no problem because the parts of a partition are disjoint. – ancientmathematician Aug 15 '19 at 6:24\n
• Thank you for your kind reply. I start off the second proof with defining a relation on set S and verify it as an equivalence relation. Then, I tried to clarify that the equivalence classes are exactly the subsets of this partition by showing that the disjoint property of equivalence classes matches the fact that each element x of set S belongs to one and only one subset of a partition. I feel like the way I wrote it down is not that convincing ( •̥́ ˍ •̀ू ). Thank you again. – Hi I am Max Aug 15 '19 at 10:45\n
• As you have written it you are allowing $x,y$ to be in any subset of $S$. You must insist that they lie in some part of the partition. – ancientmathematician Aug 15 '19 at 12:52\n\n
To go the other way around, assume a partition and define $$a \equiv b$$ if they belong to the same subset. This makes the subsets the classes of the relation. Then you need to prove that it is an equivalence:\n• Transitive: if $$a$$ and $$b$$ are in the same group, and $$b$$ and $$c$$ are in the same group, $$a$$ and $$c$$ are in the same group
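A compact way to write this out, as a sketch: given a partition $P$ of $S$, define $x \sim y \iff \exists B \in P,\; x \in B \wedge y \in B$. Reflexivity holds because every element of $S$ lies in some block of the partition; symmetry holds because the defining condition is symmetric in $x$ and $y$; transitivity holds because blocks are disjoint, so the block witnessing $a \sim b$ and the block witnessing $b \sim c$ must both be the unique block containing $b$, and hence $a$ and $c$ lie in the same block. The equivalence classes of $\sim$ are then exactly the blocks of $P$, which is the correspondence asked about in the question.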
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9485498,"math_prob":0.98290586,"size":1868,"snap":"2021-04-2021-17","text_gpt3_token_len":439,"char_repetition_ratio":0.16255365,"word_repetition_ratio":0.1388889,"special_character_ratio":0.23286937,"punctuation_ratio":0.08148148,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989747,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-21T05:48:21Z\",\"WARC-Record-ID\":\"<urn:uuid:6539b9a9-e058-48f1-a499-f1de0583c4c8>\",\"Content-Length\":\"151727\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:96964890-df2d-4aa3-8740-6d17164e2e4f>\",\"WARC-Concurrent-To\":\"<urn:uuid:8642fb9c-9b95-4f01-ad4f-950cc0373710>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3323661/why-are-equivalence-relations-and-partitions-interchangeable\",\"WARC-Payload-Digest\":\"sha1:3MN4IG27BR74WZHDAJNFTKKAOWHA2KRB\",\"WARC-Block-Digest\":\"sha1:EMKWRJHO5B6EGEAKXTK22RWFMCDXIYI2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703522242.73_warc_CC-MAIN-20210121035242-20210121065242-00309.warc.gz\"}"}
https://andrewbarr.io/posts/advent_of_code_day_one/show
[ "Sat, Dec 04 2021\n\n## Day One - Part A\n\nThe first problem was determining how many times a value was larger than the preceding value.\n\nI know `Elixir` has a fantastic core library called `Enum`, but I like using recursion to see what is happening. The first thing was to have a look at the data. Below is a subset of a list of depths.\n\n``[ 171, 173, 174, 163, 161 ...]``\n\nI always like to start with my exit function when writing recursive functions. I know I will match on an empty list, so I start there. I also know I want the result, so I return it.\n\n``def count_increasing_depths([], count), do: count``\n\nNext is the main logic. I need to assess each record and then update the count.\n\n``````def count_increasing_depths([h | t], count) do\ncount = ...\ncount_increasing_depths(t, count)\nend``````\n\nThe logic is if the current value of `h` is larger than the last value, then add 1 to the count. To make this work, I need to pass the value of the previous call. Adjusting my function signatures, I get the following and can now implement my test.\n\n``````def count_increasing_depths([], count, _last), do: count\n\ndef count_increasing_depths([h | t], count, last) do\ncount = if last < h, do: count + 1, else: count\ncount_increasing_depths(t, count, h)\nend``````\n\nTo run the code, I call it with the starting values `Advant.count_increasing_depths(depths(), 0, 0)`. This works nicely but does not give me the correct answer. After re-reading the question, I realise I need to account for the first value not being assessed. To fix this, I need another match on the function.\n\n``````def count_increasing_depths([h | t], count, 0), do: count_increasing_depths(t, count, h)\n\ndef count_increasing_depths([], count, _last), do: count\n\ndef count_increasing_depths([h | t], count, last) do\ncount = if last < h, do: count + 1, else: count\ncount_increasing_depths(t, count, h)\nend``````\n\nNow when I run `Advant.count_increasing_depths(depths(), 0, 0)` the first match extracts the first value and starts the recursive call. The answer is correct, so Happy Days.\n\n## Day One - Part B\n\nThe second part of the problem is a little more complex, and it requires adding a sliding window of values and then making the same comparison. Since I already have a function that counts the increasing values, I decided to build a new list that can be passed into `count_increasing_depths/3`; I started with my exit.\n\n``def sum_sliding_window([], results), do: results``\n\nNow I needed a function to do the test. I know I needed the first three values, so I started there. I match the first three values and add them together. I then add that to an accumulator called `results` by adding them to the start of the list `[ sum | results]`.\n\n``````def sum_sliding_window([a, b, c | t], results) do\nsum = a + b + c\nsum_sliding_window(t, [sum | results])\nend``````\n\nThe problem is that the `t` is missing two of the values I need in the next iteration. The simplest solution is to put them back.\n\n``````def sum_sliding_window([], results), do: results\n\ndef sum_sliding_window([a, b, c | t], results) do\nsum = a + b + c\ntail = [b, c | t]\nsum_sliding_window(tail, [sum | results])\nend``````\n\nNow I can run the code `Advant.sum_sliding_window(depths(), []) |> Advant.count_increasing_depths(0, 0)` but no dice the answer is wrong. This took me a minute to figure out, and I had to get some smaller test data to see what I was doing wrong. 
In the end, it was a silly mistake; when I created the new list with `[ sum | results]`, I was reversing the order of the values. I made a quick adjustment to my exit function, and happy days again!\n\n``````def sum_sliding_window([], results), do: Enum.reverse(results)\n\ndef sum_sliding_window([a, b, c | t], results) do\nsum = a + b + c\ntail = [b, c | t]\nsum_sliding_window(tail, [sum | results])\nend``````" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8533175,"math_prob":0.8485888,"size":3676,"snap":"2022-40-2023-06","text_gpt3_token_len":978,"char_repetition_ratio":0.18082789,"word_repetition_ratio":0.16796267,"special_character_ratio":0.26985854,"punctuation_ratio":0.16864295,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99730027,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-07T16:53:38Z\",\"WARC-Record-ID\":\"<urn:uuid:cffa6c5c-ea55-4adf-9a3e-d93e2fb77143>\",\"Content-Length\":\"25372\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:de80dd7d-5886-45b2-9cc1-4e5e11828b41>\",\"WARC-Concurrent-To\":\"<urn:uuid:e74345c9-b62d-4915-80fd-f23bc03b71dd>\",\"WARC-IP-Address\":\"213.188.216.74\",\"WARC-Target-URI\":\"https://andrewbarr.io/posts/advent_of_code_day_one/show\",\"WARC-Payload-Digest\":\"sha1:N2SKDPR55HC63QVNQ7X3SJJ633BAWK3I\",\"WARC-Block-Digest\":\"sha1:R7LEAJK7O2KGMLDI362GSMB7XTKB22KF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030338213.55_warc_CC-MAIN-20221007143842-20221007173842-00711.warc.gz\"}"}
https://msestudent.com/what-are-semiconductors/
# What Are Semiconductors?\n\nIn high school, you probably never learned about one of the most important types of materials: semiconductors. In my AP chemistry class, we divided elements into metals, nonmetals, and metalloids. That kind of classification might lead you to think that semiconductors are some kind of "weakly conducting" material–but semiconductors actually have a much more useful property.\n\n
Semiconductors are materials with a highly controllable electrical conductivity. Semiconductors have a small band gap which engineers can use to switch between conducting and nonconducting. This property makes semiconductors extremely useful for transistors in computers and other electronics.\n\n
While semiconductors have electrical conductivity between that of metals and insulators, their most important property is their small band gap. If electrons have a certain amount of energy, they can pass the band gap and transition from nonconducting to conducting. I'll talk about the band gap later in the article in more detail because the band gap is the most distinctive property of semiconductors.\n\n
### What Are Semiconductors?\n\nAll materials can be placed in one of three categories, depending on their electrical conductivity: conductors (metals), insulators, and semiconductors. (There are also superconductors, but that's a different story).\n\n
Before modern physics, scientists categorized these materials based on their ability to conduct electricity. Conductivity is the inverse of resistivity, and resistivity measures how difficult it is to push electrons through a material.\n\n
Materials like metals, which easily conduct electricity, were deemed electrical conductors. Materials that did not conduct electricity were called insulators. Before the idea of band gaps, I imagine that scientists considered most semiconductors to be insulators. However, the term "semiconductor" did exist, and the first documentation of a semiconducting diode effect was in the 1800s, although semiconductors wouldn't become widely useful until the invention of the transistor in 1947.\n\n
With band gaps, we have a quantum-mechanical basis for defining metals, semiconductors, and insulators. I'll explain the band gap in the next section, but here is the band-gap definition of these materials:\n\n• Metals have an overlapping valence band and conduction band\n• Semiconductors have a small gap between the valence band and conduction band that electrons can potentially cross\n• Insulators have a large gap between the valence band and conduction band, which can't be crossed by electrons\n\n
This band gap is what allows semiconductors to conduct electricity when energy is provided, so semiconductors can function as on/off switches. The band gap of most semiconductors is between 0.25 and 2.5 eV.\n\n
### What Is a Band Gap?\n\nBand diagrams can help us understand conductors, semiconductors, and insulators. There are many features of the band diagram that are important to semiconductors, but for this article, you only need to know the band gap.\n\n
The band diagram shows the possible energy states for an electron. For a single element and electron, there are some very specific energy levels that the electron can exist in.
If the electron is energized, it can hop between these states, and if there is enough energy it's even possible for the electron to leave the atom completely.\n\n
Once you have a piece of metal with a terrifyingly large number of atoms and electrons, these allowed energy states for each atom basically merge into a "band" of continually allowed states. This is called the valence band.\n\n
Beyond the valence band is the conduction band. The conduction band is the collection of energy states where the electrons have enough energy to leave the atom that they're bound to.\n\n
The band gap is the distance between these valence bands and conduction bands. The difference between metals, insulators, and semiconductors is the size of the band gap.\n\nIn other words, the band gap is the minimum energy required for an electron to leave the atom and start conducting.\n\n
Metals have no band gap. In other words, the conduction band and valence band overlap, so an electron is not bound to any particular atom. If it has enough energy to leave, it just leaves.\n\n
Semiconductors have a small band gap. This means that if the electrons don't have enough energy to fully jump across the band gap, the semiconductor does not conduct at all. If there is enough energy to pass this barrier, the material conducts. Semiconductors are super useful because they can act as switches, either passing 0% or 100% of the current.\n\n
Insulators have a large band gap. The distinction between insulator and semiconductor is a bit nebulous–it's not like scientists have a simple value and if the band gap is larger than that value, it's an insulator. These terms are practical–anything which is considered an insulator has a band gap that is too large to cross in a realistic scenario. Trying to pass too much current through many insulators will destroy the material before electrons have enough energy to jump across the band gap.\n\n
### Examples of Semiconducting Materials\n\nMany materials can be semiconducting. Pure elements like Si and Ge can be semiconductors, as well as compounds, oxides and organic-based materials.\n\n
A common technique to control the conductivity of semiconductors is doping. Since semiconductors are usually used for their electric properties, they are also made as single crystals so grain boundaries don't scatter electrons.\n\n
Common semiconductors include pure elements such as Si and Ge and compounds such as GaAs, with band gaps typically between about 0.25 and 2.5 eV.\n\n
### How Do Semiconductors Conduct Electricity?\n\nConductivity measures the amount of electrical current a material can carry. It can also be called "specific conductance" and is the inverse of resistivity.\n\n• Conductivity is inversely proportional to resistivity! (high conductivity = low resistance)\n• Units: Resistivity, ρ (Ω·m); Conductivity, σ (1/(Ω·m) = S/m)\n\n
In conductors, the charge carrier is just electrons. But in semiconductors we have both electrons and holes. Electrons have a charge of -1 and holes have a charge of +1.\n\nWith this model of electrons and holes, conductivity happens when electrons and holes move in opposite directions.\n\n
Conductivity is given by the following equation:\n\nσ = n q μ_n + p q μ_p\n\n
n and p are the carrier density–in other words, how many electrons and holes exist per unit volume. n is for negatively charged carriers (electrons) and p is for positively charged carriers (holes). This can be changed by doping. If the semiconductor is doped, either n or p will be much larger than the other, so the conductivity equation can be approximated as σ ≈ n q μ_n (n-type) or σ ≈ p q μ_p (p-type).\n\n
N-type semiconductors tend to be more common because the mobility of electrons is usually higher than the mobility of holes. For example, the mobility of electrons is 3 times higher than holes in intrinsic silicon at room temperature.\n\n
q is the electric charge of each carrier–for electrons and holes, the magnitude is one elementary charge. μ_n and μ_p are the mobilities, which describe how quickly the electron or hole can move through the material.
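As a rough numerical sketch of the n-type approximation σ ≈ n q μ_n (the mobility and doping values below are typical textbook numbers for silicon at room temperature, assumed here for illustration rather than taken from this article):\n\n```python\nq = 1.602e-19      # elementary charge, C\nmu_n = 1350.0      # assumed electron mobility in Si at room temperature, cm^2/(V*s)\nn = 1e16           # assumed donor (electron) concentration, cm^-3\n\nsigma = n * q * mu_n                              # n-type approximation, in S/cm\nprint(f"conductivity ~ {sigma:.2f} S/cm")         # about 2.16 S/cm\nprint(f"resistivity  ~ {1 / sigma:.2f} ohm*cm")   # about 0.46 ohm*cm\n```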
Conductivity in semiconductors strongly depends on:\n\n• temperature (more energy to bridge the band gap)\n• illumination (more energy to bridge the band gap)\n• tiny amounts of impurity atoms (dopants)\n\n
For semiconductors to conduct, charge carriers need enough energy to bridge the band gap.\n\nIf free electrons and holes can be created, they will respond to an electric field, conducting electricity.\n\n
However, you can't take the presence of free electrons and holes for granted. These can only exist because of external energy–usually thermal or light.\n\nHere, you can see how important temperature is to semiconductor conductivity.\n\n
Semiconductor conductivity increases exponentially with temperature because more electrons reach the conduction band.\n\nAnother way to think of this is that some covalent bonds break with increasing temperature. These free electrons can now participate in electrical conduction.\n\n
Using σ = n q μ_n + p q μ_p, increasing temperature dramatically increases n and p. The mobility μ decreases slightly, but it is negligible compared to the increase in charge carriers.
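One standard way this exponential growth is written (a textbook semiconductor-physics relation, quoted here as background rather than taken from this article) is that the intrinsic carrier concentration scales roughly as\n\nn_i ∝ T^(3/2) · exp(−E_g / 2k_B T),\n\nwhere E_g is the band gap, T the absolute temperature, and k_B Boltzmann's constant; a smaller band gap and a higher temperature both mean far more thermally generated carriers.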
### Types of Semiconductors\n\nSemiconductors can be divided into two groups: intrinsic semiconductors and extrinsic semiconductors. Extrinsic semiconductors are doped, which means some small amounts of impurities are added to improve conductivity. Intrinsic semiconductors are not doped–or more precisely, they have the same number of electrons and holes.\n\n
Intrinsic semiconductors conduct because of migration of electrons and holes. Since increasing the temperature gives electrons enough thermal energy to pass the band gap, the number of electrons and holes increases with temperature. However, the number of charge carriers is still much lower in intrinsic semiconductors than extrinsic (doped) semiconductors.\n\n
To make extrinsic semiconductors, engineers dope the material with a different-valence element. I'll use silicon as the general example for this.\n\n
Silicon is a group IV semiconductor: it has 4 electrons in its valence shell. By adding small amounts of gallium (a group III element with 3 electrons in its valence shell), holes would be created. Each atom of gallium would create one additional hole. Since Ga binds an electron, it is an acceptor atom. This kind of doping results in a P-type semiconductor.\n\n
If arsenic (a group V element with 5 electrons in its valence shell) was added to Si, there would be extra unbound electrons. Each atom of arsenic would contribute an additional n charge carrier. Since As adds a free electron, it is a donor atom. This kind of doping results in an N-type semiconductor.\n\n
If arsenic and gallium were added together, the semiconductor would still have the same number of holes and free electrons, so it would be intrinsic. GaAs is an intrinsic semiconductor, although only if it has exactly the same amount of Ga and As. GaAs is a semiconductor that can be self-doped; for example, by having slightly more Ga than As. This kind of doping results in an N-type semiconductor.\n\n
To summarize the difference: N-type doping relies on donor atoms that contribute extra free electrons (the majority carriers are electrons), while P-type doping relies on acceptor atoms that create extra holes (the majority carriers are holes).\n\n
Using the band gap diagram, you can think of doping as adding possible energy states. N-type doping adds new levels for electrons below the conduction band, and P-type doping adds new energy levels above the valence band.\n\nThese new energy levels shift the effective Fermi level, making conduction much easier.\n\n
### Manufacturing Techniques of Semiconductors\n\nSemiconductors are usually single crystals. Actually, silicon wafers are arguably the most-perfect arrangement of atoms in the universe. Billions of dollars are spent keeping them as defect-free as possible, since every defect affects semiconducting properties.\n\n
To make single crystals, semiconductors can be cast in the Czochralski method.\n\nIn this method, the liquid material sits in a crucible very close to the melting temperature. A crystal nucleus seeds crystal growth, and new atoms grow from there. As the nucleus is pulled out, more atoms stick until the entire liquid has solidified into a single crystal.\n\n
This single crystal log is then cut into wafers to use in integrated circuits for computers, TVs, phones, and other devices, or in solar cells. Monocrystalline silicon made by this method is called Czochralski silicon, or Cz-Si.\n\n
Single crystal semiconductors can also be deposited as a film by epitaxial growth. Epitaxial growth is when the semiconductor is grown directly on a substrate. It is often grown by vapor-phase epitaxy, a type of chemical vapor deposition (CVD).", null, "Images of epitaxial layers taken from “C/L-band emission of InAs QDs monolithically grown on Ge substrate” publication\n\n
### Applications of Semiconductors\n\nSemiconductors are used in power devices, optical sensors, light emitters (LED), solid-state lasers, RF applications, diodes (p-n junctions), MOSFETs, photovoltaics, and more.\n\nI want to focus on 3 main devices that use semiconductors and are useful to make other things: diodes, transistors, and photovoltaics.\n\n
#### Diodes\n\nDiodes are devices that only allow current to flow in one direction. They work because of a P-N junction. A P-N junction means that you connect a P-type semiconductor to an N-type semiconductor.\n\n
Since the N-type semiconductor has excess electrons and the P-type semiconductor has excess holes, there is a tendency for electrons to migrate over the junction. However, this depletion region will have an electrical potential, repelling further electron exchange.\n\n
If an external electric potential (like voltage from a battery) is applied to push the electrons from the N-type to the P-type, the depletion region can be overcome and electricity will flow. We call this a forward bias.\n\n
If the external electric potential tries to pull electrons and holes away from each other, the electrons will build up a large negative charge that will counteract the external potential, preventing electric flow in that direction. We call this a reverse bias.\n\n
Diodes can be used for many applications such as radio demodulation, power conversion, logic gates, reverse voltage protection, light emission, and more.\n\n
Light-emitting diodes (LEDs) are a special type of diode that takes advantage of recombination. Recombination is when an electron and hole come together. Recombination emits energy because the electron drops from the conduction band to the valence band. Since the energy emitted is light, the color of the light relates to its energy.
The energy corresponds to the band gap size, so the band gap determines the color of energy emitted.\n\nIn most diodes, the energy emitted is in the infrared range, so we can’t see it. LEDs are made of semiconductors with a band gap such that the energy emitted corresponds to the visible spectrum.\n\n#### Transistor\n\nTransistors are basically switches. They are used extensively in electronics as a way to store 0 and 1.\n\nIf you understood how diodes work, a transistor is essentially 2 diodes back-to-back. Here is a common type of transistor called a MOSFET (Metal Oxide Silicon Field Effect Transistor)\n\nYou have an N-type, P-type, N-type arrangement of semiconductors. As with diodes, there is some recombination at the P-N junctions, but this creates a depletion layer that repels further charge transfer. However, if a voltage is applied, the depletion layer can be overcome and it will conduct electricity. Remove the voltage, and conduction stops.\n\nTransistors are very powerful because they are small, can change states rapidly, and have no moving parts. They can be used as billions of on/off switches in computers or other electronics.\n\n#### Photovoltaics\n\nPhotovoltaics, or solar cells, also work by a P-N junction. At the junction, free electrons fill the holes. However, the electrons and holes can be separated by hitting them with energy equal to–you guessed it–the band gap.\n\nSolar cells generate electricity by sweeping away these free electrons as they are generated, and preventing them from recombining again.\n\nThat’s why the top N-layer is thin–the light has to pass through this layer to hit the neutral junction between the N-layer and P-layer. Electrons which are split from their holes in this layer will migrate to the top and follow the conducting wire that lets them return to the P-layer while work is extracted.\n\n### Final Thoughts\n\nSemiconductors are materials that have a variable conductivity depending on the energy that electrons have to pass the band gap. One of the most common kinds of semiconductor is Si, with a band gap of 1.1 eV.\n\nThey are the backbone of computers. Modern technology would not be possible without transistors and diodes.\n\n### References and Further Reading\n\nIf you are interested in electronic properties of materials, you may be interested in our article about why metals conduct electricity.\n\nIf you like functional materials, you can find our main article on magnetism here.\n\nThis is the paper where we got the image of epitaxial growth:\nWen-Qi Wei, Jian-Huan Wang, Yue Gong, Jin-An Shi, Lin Gu, Hong-Xing Xu, Ting Wang, and Jian-Jun Zhang, C/L-band emission of InAs QDs monolithically grown on Ge substrate, Opt. Mater. Express 7, 2955-2961 (2017)" ]
[ null, "https://msestudent.com/wp-content/ql-cache/quicklatex.com-d8e7c3b04975ef63aa55bf306e9bd036_l3.png", null, "https://msestudent.com/wp-content/ql-cache/quicklatex.com-3e334dbb7dc5c3c437dbd6c7f8b673d7_l3.png", null, "https://msestudent.com/wp-content/ql-cache/quicklatex.com-461fe1a58a75801541487ddf10d32abd_l3.png", null, "https://msestudent.com/wp-content/ql-cache/quicklatex.com-18e4b33180242734275cdc8f8d42141d_l3.png", null, "https://msestudent.com/wp-content/ql-cache/quicklatex.com-b170995d512c659d8668b4e42e1fef6b_l3.png", null, "https://msestudent.com/wp-content/ql-cache/quicklatex.com-3bf85f1087e9fbed3a319341134ac1a2_l3.png", null, "https://msestudent.com/wp-content/ql-cache/quicklatex.com-461fe1a58a75801541487ddf10d32abd_l3.png", null, "https://msestudent.com/wp-content/uploads/2020/09/Epitaxial-layer-STEM.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92572606,"math_prob":0.9290901,"size":17000,"snap":"2022-40-2023-06","text_gpt3_token_len":3776,"char_repetition_ratio":0.19281007,"word_repetition_ratio":0.025354214,"special_character_ratio":0.19505882,"punctuation_ratio":0.10810811,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9567044,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,2,null,2,null,9,null,2,null,7,null,2,null,9,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T08:14:01Z\",\"WARC-Record-ID\":\"<urn:uuid:08edc5d8-37a5-4e92-bc47-0c6278bcf1fc>\",\"Content-Length\":\"97711\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:60537c3a-4863-4ed3-b9dc-85125623e58a>\",\"WARC-Concurrent-To\":\"<urn:uuid:208d053e-b57a-472c-a1b1-1f71787873ac>\",\"WARC-IP-Address\":\"3.234.104.255\",\"WARC-Target-URI\":\"https://msestudent.com/what-are-semiconductors/\",\"WARC-Payload-Digest\":\"sha1:GFSZM6RBCDSSQVYM2WSPBF6K3WIFW5PL\",\"WARC-Block-Digest\":\"sha1:O5M3ZDKJWKKDPYIUCSUJUGXIIHST7F6T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335326.48_warc_CC-MAIN-20220929065206-20220929095206-00547.warc.gz\"}"}
https://cstheory.stackexchange.com/questions/18440/two-hamiltonian-path-problem-variants?noredirect=1
[ "# Two Hamiltonian path problem variants

While formalizing the gadgets for the proposed reduction of the question Efficient algorithm for existence of permutation with differences sequence?, the following problems came to my mind:

Problem 1 (I call it the "crazy frog problem" :-))

Given an $n \times n$ partially filled board (some cells are empty, some are already filled), a starting cell $c_0 = (x_0,y_0)$ and a list of jump distances $(\Delta x_1,\Delta y_1),(\Delta x_2,\Delta y_2),...,(\Delta x_m, \Delta y_m),\; -n < \Delta x_i, \Delta y_i < n$; is there a sequence $(s_1,s_2,...,s_m), s_i \in \{+1,-1\}$ such that the sequence of jumps: $$c_j = ( x_{j-1} + \Delta x_j * s_j, \quad y_{j-1} + \Delta y_j * s_j )$$

makes the frog visit every empty cell of the board exactly once? (Every jump must be within the boundaries of the board, at every jump the target cell must be empty, and after the jump the cell becomes filled.)", null, "Figure 1. An instance of the CFP on the left and its solution on the right.
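For experimenting with small instances, a plain backtracking search over the two sign choices per jump is enough to decide an instance (this is only a brute-force sketch, exponential in $m$; the representation, with the board given as the set of empty cells and the starting cell treated as already filled, is just one convenient choice):

```
def crazy_frog(empty_cells, start, jumps):
    """Return a list of signs s_i in {+1, -1} solving the instance, or None.

    empty_cells : set of (x, y) cells that are empty (off-board and blocked
                  cells are simply absent, so membership checks both rules)
    start       : (x0, y0), assumed to be already filled
    jumps       : list of (dx_i, dy_i) jump distances
    """
    if len(jumps) != len(empty_cells):
        return None  # each jump fills exactly one empty cell

    def backtrack(cell, empty, k):
        if k == len(jumps):
            return [] if not empty else None
        dx, dy = jumps[k]
        for s in (+1, -1):
            nxt = (cell[0] + s * dx, cell[1] + s * dy)
            if nxt in empty:  # target must be an empty cell of the board
                tail = backtrack(nxt, empty - {nxt}, k + 1)
                if tail is not None:
                    return [s] + tail
        return None

    return backtrack(start, frozenset(empty_cells), 0)
```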
Problem 2

What is the complexity of the Hamiltonian path problem on grid graphs with holes, if we force the path to be an alternating sequence of horizontal/vertical moves (i.e. two consecutive horizontal/vertical moves along the path are forbidden)?

Are these problems already known and what is their complexity?

Notes: there is an immediate reduction from problem 2 to problem 1; probably there is a reduction from problem 1 to the problem of finding a valid permutation from a difference sequence (which I'm trying to prove NPC).

I tried to write a formal proof of the NP-completeness of Problem 1 (Crazy Frog Problem, CFP).

It remains NP-complete even if restricted to a one-dimensional board (1-D Crazy Frog Problem, 1-D CFP) or to a one-dimensional board without blocked cells, and there is an immediate reduction from the 1-D CFP without blocked cells to the Permutation Reconstruction from Differences sequence problem.

The reduction is from Hamiltonian path on grid graphs (a grid graph is a node-induced finite subgraph of the infinite grid: if two nodes are adjacent (distance 1 in the grid) then there is an edge between them; a grid graph can have holes, i.e. some nodes may be missing; see "Hamilton Paths in Grid Graphs" by Alon Itai, Christos H. Papadimitriou, Jayme L. Szwarcfiter). Given a grid graph $G$ that fits in a $w \times w$ square, with nodes $u_i$ at coordinates $(x_{u_i},y_{u_i})$ and source, target nodes $s,t$; build a filled board of size $4n\times4n$ with all blocked cells except cells $(4x_{u_i},4y_{u_i})$ and target cell $(4x_t+1,4y_t)$. The initial frog position is $(4x_s,4y_s)$. This part of the board is called the graph area. The first "logical" sequence of moves is the following sequence repeated $|V| - 1$ times:

(suppose that the frog is in position $(x,y)$)

• $(0,J_i)$: vertically jump to an empty part of the board (edge gadget)
• $(-2,2)$: make a backward diagonal jump of length 2
• $(0,p)$: vertically jump to another empty part of the board (on the same gadget)
• $(2,2)$: make a forward diagonal jump of length 2
• $(0,J_i+p)$: return to the graph area in one of the cells $(x+4,y),(x-4,y),(x,y-4),(x,y+4)$

This forces the frog to make $n-1$ steps on the empty cells corresponding to the nodes of the original graph.

The sequence is followed by one horizontal step $(1,0)$ (that forces the last cell to be the one associated with the target node $t$) and a forced step that leads to a cleanup gadget. The sequence of jumps of the cleanup gadget allows it to visit all unvisited cells of the edge gadgets and completely fill the board.

This is a simple graphical outline of the gadgets used in the reduction:", null, "Figure: Outline of a CFP instance (for better readability blocked cells are not shown, the cleanup gadget is not complete and space is compacted). Black jumps represent the graph area traversal, green jumps represent the edge gadget traversals, red jumps represent the vertical selection gadget traversal, blue jumps represent the horizontal hole gadget traversal.

The details can be found in the draft paper "The Crazy Frog Problem and Permutation Reconstruction from Differences" that can be downloaded here.

The reduction is complex so perhaps it is wrong (or can be simplified), but I think that the results (if new?) are interesting ...

• Hamiltonian path on grid graphs isn't NP-complete. Every grid graph has a Hamiltonian path. – David Richerby Sep 25 '13 at 23:22
• @DavidRicherby, Hamiltonian path on grid graphs is NP-Hard; note that a grid graph is just a subset of the infinite grid, while on solid grids ($n\times n$) finding a Hamiltonian path is trivial. And in this answer Marzio used a solid grid graph, which is wrong. – Saeed Sep 26 '13 at 7:36
• @Saeed: Hamiltonian path (and cycles) on grid graphs with "holes" is NP-Complete. I didn't use a solid grid graph in my reduction (the Graph Area contains (many) blocked cells). – Marzio De Biasi Sep 26 '13 at 9:54
• I think that, given the confusion over exactly what a grid graph is (for example, Diestel and Wikipedia agree with the definition I gave; Itai et al. and others agree with the definition you gave), it would be a good idea to explicitly define what is meant by "grid graph" in the answer. That way, people won't have to read the comments to understand it. – David Richerby Sep 26 '13 at 16:08
• @DavidRicherby: ok, I included the definition in the answer. – Marzio De Biasi Sep 26 '13 at 16:21" ]
[ null, "https://i.stack.imgur.com/qbFNV.png", null, "https://i.stack.imgur.com/Aokk2.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90250665,"math_prob":0.9749728,"size":1492,"snap":"2020-34-2020-40","text_gpt3_token_len":387,"char_repetition_ratio":0.14180107,"word_repetition_ratio":0.0,"special_character_ratio":0.27546915,"punctuation_ratio":0.14046822,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9966531,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-30T21:52:01Z\",\"WARC-Record-ID\":\"<urn:uuid:8670cf28-bcde-42d6-942a-e035f1d4530d>\",\"Content-Length\":\"161612\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ddcc57e9-4806-4808-8e52-8d6a04f3265e>\",\"WARC-Concurrent-To\":\"<urn:uuid:b91c258f-9660-4245-ac32-4d0bc93315ef>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://cstheory.stackexchange.com/questions/18440/two-hamiltonian-path-problem-variants?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:VPZGXNOSI7MH763AVMNHQ2SGDIIJD4C7\",\"WARC-Block-Digest\":\"sha1:KI6FUQIL5LBGVJDUPWLKVQY2O2XVPGQW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402128649.98_warc_CC-MAIN-20200930204041-20200930234041-00034.warc.gz\"}"}
https://tools.carboncollective.co/inflation/us/1957/75360/2019/
[ "# $75,360 in 1957 is worth $685,633.86 in 2019

$75,360 in 1957 has the same purchasing power as $685,633.86 in 2019. Over the 62 years this is a change of $610,273.86. The average inflation rate of the dollar between 1957 and 2019 was 3.66% per year. The cumulative price increase of the dollar over this time was 809.81%.

## The value of $75,360 from 1957 to 2019

So what does this data mean? It means that prices in 2019 are 809.81% higher than the average prices since 1957. A dollar in 2019 can buy 10.99% of what it could buy in 1957.

We can look at the buying power equivalent for $75,360 in 1957 to see how much you would need to adjust for in order to beat inflation. For 1957 to 2019, if you started with $75,360 in 1957, you would need to have $685,633.86 in 2019 to keep up with inflation rates.

So if we are saying that $75,360 is equivalent to $685,633.86 over time, you can see the core concept of inflation in action. The "real value" of a single dollar decreases over time. It will pay for fewer items at the store than it did previously. In the chart below you can see how the value of the dollar is worth less over 62 years.

## Value of $75,360 Over Time

In the table below we can see the value of the US Dollar over time. According to the BLS, each of these amounts is equivalent in terms of what that amount could purchase at the time.

Year | Dollar Value | Inflation Rate
1957 | $75,360.00 | 3.31%
1958 | $77,505.48 | 2.85%
1959 | $78,041.85 | 0.69%
1960 | $79,382.78 | 1.72%
1961 | $80,187.33 | 1.01%
1962 | $80,991.89 | 1.00%
1963 | $82,064.63 | 1.32%
1964 | $83,137.37 | 1.31%
1965 | $84,478.29 | 1.61%
1966 | $86,891.96 | 2.86%
1967 | $89,573.81 | 3.09%
1968 | $93,328.40 | 4.19%
1969 | $98,423.91 | 5.46%
1970 | $104,055.80 | 5.72%
1971 | $108,614.95 | 4.38%
1972 | $112,101.35 | 3.21%
1973 | $119,074.16 | 6.22%
1974 | $132,215.23 | 11.04%
1975 | $144,283.56 | 9.13%
1976 | $152,597.30 | 5.76%
1977 | $162,520.14 | 6.50%
1978 | $174,856.65 | 7.59%
1979 | $194,702.35 | 11.35%
1980 | $220,984.48 | 13.50%
1981 | $243,780.21 | 10.32%
1982 | $258,798.58 | 6.16%
1983 | $267,112.31 | 3.21%
1984 | $278,644.27 | 4.32%
1985 | $288,567.12 | 3.56%
1986 | $293,930.82 | 1.86%
1987 | $304,658.22 | 3.65%
1988 | $317,262.92 | 4.14%
1989 | $332,549.47 | 4.82%
1990 | $350,517.86 | 5.40%
1991 | $365,268.04 | 4.21%
1992 | $376,263.63 | 3.01%
1993 | $387,527.40 | 2.99%
1994 | $397,450.25 | 2.56%
1995 | $408,714.02 | 2.83%
1996 | $420,782.35 | 2.95%
1997 | $430,437.01 | 2.29%
1998 | $437,141.64 | 1.56%
1999 | $446,796.30 | 2.21%
2000 | $461,814.66 | 3.36%
2001 | $474,955.73 | 2.85%
2002 | $482,464.91 | 1.58%
2003 | $493,460.50 | 2.28%
2004 | $506,601.57 | 2.66%
2005 | $523,765.41 | 3.39%
2006 | $540,661.07 | 3.23%
2007 | $556,060.25 | 2.85%
2008 | $577,410.47 | 3.84%
2009 | $575,356.17 | -0.36%
2010 | $584,793.60 | 1.64%
2011 | $603,252.78 | 3.16%
2012 | $615,736.79 | 2.07%
2013 | $624,755.85 | 1.46%
2014 | $634,890.57 | 1.62%
2015 | $635,644.17 | 0.12%
2016 | $643,662.90 | 1.26%
2017 | $657,375.20 | 2.13%
2018 | $673,431.44 | 2.44%
2019 | $685,633.86 | 1.81%

## US Dollar Inflation Conversion

If you're interested to see the effect of inflation on various 1957 amounts, the table below shows how much each amount would be worth today based on the price increase of 809.81%.

Initial Value | Equivalent Value
$1.00 in 1957 | $9.10 in 2019
$5.00 in 1957 | $45.49 in 2019
$10.00 in 1957 | $90.98 in 2019
$50.00 in 1957 | $454.91 in 2019
$100.00 in 1957 | $909.81 in 2019
$500.00 in 1957 | $4,549.06 in 2019
$1,000.00 in 1957 | $9,098.11 in 2019
$5,000.00 in 1957 | $45,490.57 in 2019
$10,000.00 in 1957 | $90,981.14 in 2019
$50,000.00 in 1957 | $454,905.69 in 2019
$100,000.00 in 1957 | $909,811.39 in 2019
$500,000.00 in 1957 | $4,549,056.94 in 2019
$1,000,000.00 in 1957 | $9,098,113.88 in 2019

## Calculate Inflation Rate for $75,360 from 1957 to 2019

To calculate the inflation rate of $75,360 from 1957 to 2019, we use the following formula:

$$\dfrac{ 1957\; USD\; value \times CPI\; in\; 2019 }{ CPI\; in\; 1957 } = 2019\; USD\; value$$

We then replace the variables with the historical CPI values. The CPI in 1957 was 28.1 and 255.657 in 2019.

$$\dfrac{ \$75{,}360 \times 255.657 }{ 28.1 } = \$685{,}633.86$$

$75,360 in 1957 has the same purchasing power as $685,633.86 in 2019.

To work out the total inflation rate for the 62 years between 1957 and 2019, we can use a different formula:

$$\dfrac{\text{CPI in 2019 } - \text{ CPI in 1957 } }{\text{CPI in 1957 }} \times 100 = \text{Cumulative rate for 62 years}$$

Again, we can replace those variables with the correct Consumer Price Index values to work out the cumulative rate:

$$\dfrac{\text{ 255.657 } - \text{ 28.1 } }{\text{ 28.1 }} \times 100 = \text{ 809.81\% }$$
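The same arithmetic is easy to script. Here is a minimal sketch in Python using only the two CPI figures quoted above (any other year pair would need its own CPI values):

```
def adjust_for_inflation(amount, cpi_start, cpi_end):
    """Convert an amount in start-year dollars into end-year dollars."""
    return amount * cpi_end / cpi_start

def cumulative_inflation_rate(cpi_start, cpi_end):
    """Total percentage price increase between the two CPI readings."""
    return (cpi_end - cpi_start) / cpi_start * 100

# CPI values quoted above: 28.1 for 1957 and 255.657 for 2019
print(round(adjust_for_inflation(75_360, 28.1, 255.657), 2))   # 685633.86
print(round(cumulative_inflation_rate(28.1, 255.657), 2))      # 809.81
```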
## Inflation Rate Definition

The inflation rate is the percentage increase in the average level of prices of a basket of selected goods over time. It indicates a decrease in the purchasing power of currency and results in an increased consumer price index (CPI). Put simply, the inflation rate is the rate at which the general prices of consumer goods increase as the currency's purchasing power falls.

The most common cause of inflation is an increase in the money supply, though it can be caused by many different circumstances and events. The value of a floating currency starts to decline when it becomes abundant. What this means is that the currency is not as scarce and, as a result, not as valuable.

The inflation rate is measured by comparing the prices of a standard list of products (the CPI basket) over time. The prices of products such as milk, bread, and gas are grouped together and tracked over time. Inflation shows that the money used to buy these products is not worth as much as it used to be when these products' prices increase over time.

The inflation rate is basically the rate at which money loses its value when compared to the basket of selected goods – which is a fixed set of consumer products and services that are valued on an annual basis." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86635613,"math_prob":0.9893182,"size":6493,"snap":"2022-05-2022-21","text_gpt3_token_len":2443,"char_repetition_ratio":0.15148713,"word_repetition_ratio":0.019011406,"special_character_ratio":0.53026336,"punctuation_ratio":0.20316301,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9895795,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-18T00:11:38Z\",\"WARC-Record-ID\":\"<urn:uuid:a6902f7f-18bd-4aa2-b7c4-84c8d78c50c2>\",\"Content-Length\":\"42649\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6e6f706c-b995-4a42-9b14-bff0d62d43c0>\",\"WARC-Concurrent-To\":\"<urn:uuid:61a10184-90ed-4662-97e3-744e68c8a05b>\",\"WARC-IP-Address\":\"138.197.3.89\",\"WARC-Target-URI\":\"https://tools.carboncollective.co/inflation/us/1957/75360/2019/\",\"WARC-Payload-Digest\":\"sha1:BKN43CDILYOUUEN5W4AU2AYPK2VKNYEE\",\"WARC-Block-Digest\":\"sha1:JIBLPPWLYEB2AXIBNGFSVNHSIXMW6AE5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662520936.24_warc_CC-MAIN-20220517225809-20220518015809-00381.warc.gz\"}"}
https://viralsocialblog.com/anomaly-detection-using-machine-learning-how-machine-learning-can-enable-anomaly-detection/
[ "# Anomaly Detection using Machine Learning | How Machine Learning Can Enable Anomaly Detection?

Anomaly detection is similar to how our human brains are always trying to recognize something abnormal, something outside the "normal" or the "usual stuff." An anomaly is basically anything that does not fit the usual pattern: abnormal growth in the data, or a pattern that is not similar to the rest of the data. Our data science concepts and tools likewise look for anomalies that break the normal flow of the data.

To understand this more intuitively, consider two examples: an "unusually high" number of login attempts to a particular account may point to a potential cyberattack, and an unusually large credit card transaction can be a sign of fraud. Also, check out this Fake news detection using machine learning course today.

So in this blog we will discuss:

## What is Anomaly Detection?

Anomaly detection is a technique for identifying rare events or observations that are statistically different from the rest of the observations. Such events can be a sign of suspicious activity. These "anomalous" behaviours are typically the symptom of some kind of problem, such as credit card fraud, a failing machine in a server room, a cyber attack, or some other serious issue.

## Why do we need Anomaly Detection?

We all know that machine learning has four broad classes of applications:

1. classification,
2. predicting the next upcoming value,
3. anomaly detection,
4. and discovering structure.

Of these four classes, anomaly detection helps to detect the data points that do not fit well with, or do not behave normally relative to, the rest of the data. The applications for this particular class are fraud detection, surveillance, diagnosis, data cleanup, predictive maintenance, etc.

With the advent of IoT, anomaly detection now plays a key role in IoT applications as well, such as monitoring and predictive maintenance.

Modern businesses are becoming dependent on data: they try to forecast their sales on the basis of these new technologies, and they have started to understand the importance of interconnected operations to get the full picture of their business. At the same time they need to respond and take action promptly on fast-moving changes in data, especially when it comes to cybersecurity threats. Anomaly detection can be the key to catching such intrusions, because there is no effective way to handle and analyze constantly growing datasets manually. In constantly changing data, "normal" behaviour is redefined every moment, and anomaly detection offers an effective, proactive approach to identifying anomalous behaviour.

In short, anomaly detection is needed to identify the anomalies in the data.

## Types of Anomaly detection

These anomalies can be divided into three categories:

1. Point Anomaly: An individual data instance is considered a Point Anomaly if it lies far away from the rest of the data. Example: a sudden transaction of a huge amount on a credit card. (A quick way to flag such points is sketched right after this list.)
2. Contextual Anomaly: An observation is a Contextual Anomaly if it is anomalous because of the context of the observation, even if it would look normal otherwise.
3. Collective Anomaly: A Collective Anomaly is a set of data instances that is anomalous as a group. This type of anomaly has two variations: events that occur in an unexpected order, and unexpected value combinations.
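For point anomalies on a single numeric feature, a simple baseline is a z-score (standard score) cutoff. This is only a rough sketch with a hand-picked threshold and made-up toy values; real data usually needs something more robust:

```
import numpy as np

def zscore_point_anomalies(values, threshold=3.0):
    """Return indices of values lying more than `threshold` standard deviations from the mean."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return np.where(np.abs(z) > threshold)[0]

# toy transaction amounts with one obvious point anomaly at the end
amounts = [25, 40, 18, 33, 29, 41, 22, 30, 27, 35, 31, 24, 38, 26, 5000]
print(zscore_point_anomalies(amounts))   # -> [14], the index of the 5000 transaction
```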
## Benefits and application of anomaly detection:

Artificial intelligence helps human teams handle the elastic environment of cloud infrastructure, microservices and containers, and we use artificial intelligence concepts everywhere to overcome these challenges.

So let me give you some examples of what AI-driven anomaly detection brings.

Automation: In industry, AI-driven anomaly detection algorithms can automatically analyze and understand the datasets. These algorithms are efficient enough to dynamically fine-tune the parameters of normal behavior and identify breaches in the patterns.

Real-time analysis: AI solutions can interpret data activity in real time. The system keeps checking patterns and comparing them, and the moment a pattern is not recognized, it sends a signal as soon as possible.

Scrupulousness: Anomaly detection provides end-to-end, gap-free monitoring that goes through the minutiae of the data. With it we can identify the smallest anomalies in the data, which would be almost impossible for a human eye to find.

Accuracy: When comparing the accuracy of AI against manual review, artificial intelligence is far better at dealing with anomaly detection. It enhances the accuracy of anomaly detection by avoiding nuisance alerts and the false positives/negatives triggered by static thresholds.

Self-learning: We all know about self-driving cars (Tesla's being the most famous), and this industry also relies on artificial intelligence. At its heart are AI-driven algorithms that constitute the core of self-learning systems. These systems are able to learn from data patterns and deliver predictions or answers as required.

## Machine learning methods to do anomaly detection:

What is Machine Learning?

Machine learning is a subset of artificial intelligence (AI) that allows a system to automatically learn and improve from experience without being explicitly programmed.

There are three types of machine learning:

1. Supervised
2. Unsupervised
3. Reinforcement learning

What is supervised learning?

From the name itself we can understand that supervised learning works like a supervisor or teacher. In supervised learning, we teach or train the machine with labeled data (data that is already tagged with some predefined class). Then we test our model with a new, unknown set of data and predict the label for it.

What is unsupervised learning?

Unsupervised learning is a machine learning technique where you do not need to supervise the model. Instead, you allow the model to work on its own to discover information. It mainly deals with unlabeled data.

What is Reinforcement Learning?

Reinforcement learning is about taking suitable actions to maximize reward in a particular situation.
It is used to define the best sequence of decisions that allows the agent to solve a problem while maximizing a long-term reward.

## Machine Learning and Outlier Analysis

What is an outlier?

An outlier is any data object or point that significantly deviates from the remaining data points. In data mining, outliers are commonly considered an exception or simply noise in the data. The same treatment cannot be applied in anomaly detection, hence the emphasis on outlier analysis.

Let me give you an example of performing anomaly detection using machine learning, based on the K-means clustering method.

What is K-means clustering?

K-means clustering is an unsupervised machine learning algorithm. It is used on unlabeled data (i.e., data without defined categories, classes, or groups). The goal of the K-means algorithm is to find the groups in the data, with the number of groups represented by the variable K. The algorithm works iteratively to assign each data point to one of the K groups based on the features the data points have, so the data points are clustered based on feature similarity.

The results we get from the K-means clustering algorithm are:

1. The centroids of the K clusters, which are used to label new data
2. A label for each data point in the training data (each data point is assigned to a single cluster)

So the K-means clustering method can be applied to detect outliers based on their distance from the closest cluster. K-means describes each cluster by a mean value (centroid), and objects within a cluster are close to that mean. Any object whose distance to the nearest cluster centroid is greater than a chosen threshold is identified as an outlier.

Step-by-step method to use K-means clustering for outlier detection (a short sketch of these steps follows the list):

1. First calculate the mean (centroid) of each cluster
2. Then set an initial threshold value
3. During testing, determine the distance of each test data point from the cluster means
4. Identify the cluster that is nearest to the test data point
5. If the "distance" value is greater than the "threshold" value, then we can conclude that the point is an outlier.
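Here is a minimal sketch of those five steps using scikit-learn's KMeans. The number of clusters, the threshold and the toy numbers are all made up for illustration:

```
import numpy as np
from sklearn.cluster import KMeans

# toy "normal" training data: two tight groups of points
X_train = np.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [1.0, 0.95],
                    [5.0, 5.1], [5.1, 4.9], [4.9, 5.0], [5.0, 5.05]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_train)  # step 1: cluster means

threshold = 1.0          # step 2: an initial, hand-picked threshold for this toy data

# new points to test: one normal, one anomalous
X_test = np.array([[1.05, 1.0], [9.0, 0.5]])

# steps 3 and 4: distance of each test point to its nearest cluster centroid
dist_to_nearest = kmeans.transform(X_test).min(axis=1)

print(dist_to_nearest > threshold)   # step 5 -> [False  True]: the second point is an outlier
```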
1. Supervised Anomaly Detection:

As we are already familiar with, the supervised learning method needs a labeled dataset, containing both normal and anomalous samples, to construct a predictive model. This model then helps us classify future data points. The most commonly used algorithms for this purpose are supervised learning methods such as Support Vector Machines, K-Nearest Neighbors classifiers, and other machine learning models.

2. Unsupervised Anomaly Detection:

At the beginning of this tutorial we learned about unsupervised learning as well.

Unsupervised anomaly detection algorithms are divided into groups such as:

(1) Nearest-neighbor based techniques,

(2) Clustering-based methods and

(3) Statistical algorithms.

This learning process does not require any labeled training data; instead it automatically assumes two things about the data: a very small percentage of the whole dataset is anomalous, and any anomaly is statistically different from the normal samples.

Based on these two assumptions, the data is clustered using a similarity measure, and the data points that are far away from the clusters are considered anomalies.

## Project with anomaly detection:

Credit card Fraud Analysis:

##### Project: Credit card Fraud Analysis using Data mining techniques

In today's world, we are literally sitting on the express train to a cashless society. As per the World Payments Report, total non-cash transactions increased by 10.1% from 2015 to 2016, for a total of 482.6 billion transactions! That's huge! It is also expected that non-cash transactions will keep growing steadily in future years.

While this is a blessing, it also becomes a curse for the cashless society because of the immense number of fraudulent transactions, even with EMV smart chips implemented.

So data scientists are trying to come up with one of the best solutions: a model for predicting fraud transactions.

##### Import the libraries:
```
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
import seaborn as sns
```
##### Collect the Data

I collected the data from a Kaggle dataset.

Dataset description:

1. It contains about 285,000 rows of data and 31 columns.
2. The most important columns are
• Time,
• Amount,
• and Class (fraud or not fraud)
```
data_df = pd.read_csv('/content/creditcard 2.csv')
data_df.head()
```

Data understanding:

`data_df.describe()`

This method is used to display basic statistical details like
• Percentile,
• mean,
• std
• etc. of a DataFrame or a series of numeric values.
For example, we only took the Amount, Time, and Class columns.

• data_df.isna().any(): This method is used to check for null values in the dataset.

`data_df.isna().any()`

False means that the column has no null values.

• Display the percentage of total null values in the dataset:

This is just to reconfirm that we don't have any null values in the dataset, which is why the percentage calculation is done.

• Find out the percentage of not-fraud transactions in the dataset:

In this data, Class = 0 means a genuine (not fraud) transaction and Class = 1 means a fraud transaction.

```
nfcount = 0
notFraud = data_df['Class']
for i in range(len(notFraud)):
    if notFraud[i] == 0:
        nfcount = nfcount + 1

per_nf = (nfcount / len(notFraud)) * 100
print('percentage of total not fraud transaction in the dataset: ', per_nf)
```

So in this data, 99.82% of the records are normal transactions.

• Find out the percentage of fraud transactions in the dataset:

```
fcount = 0
Fraud = data_df['Class']
for i in range(len(Fraud)):
    if Fraud[i] == 1:
        fcount = fcount + 1

per_f = (fcount / len(Fraud)) * 100
print('percentage of total fraud transaction in the dataset: ', per_f)
```

Only 0.172% of the records are fraud transactions.

Data Visualization:

Now we will visualize the data through graphs to understand it more intuitively.

• Plot fraud transactions vs genuine transactions:

```
plot_data = pd.DataFrame({'Fraud Transaction': [fcount], 'Genuine Transaction': [nfcount]})  # assumed helper built from the counts above; not shown in the original snippet
plt.title("Bar plot for Fraud VS Genuine transactions")
sns.barplot(x = 'Fraud Transaction', y = 'Genuine Transaction', data = plot_data, palette = 'Blues', edgecolor = 'w')
```

As per the graph, we can say the number of genuine transactions is far higher than the number of fraud transactions.

• Plot Amount vs Time:

```
x = data_df['Amount']
y = data_df['Time']
plt.plot(x, y)
plt.title('Time Vs amount')
#sns.barplot(x = x, y = y, data = data, palette = 'Blues', edgecolor = 'w')
```

In this graph we plot the relation between Time and Amount.

• Amount distribution curve:

```
plt.figure(figsize=(10, 8))
plt.title('Amount Distribution')
sns.distplot(data_df['Amount'], color='red');
```

From this amount distribution curve it can be seen that the number of high-amount transactions is very low, so there is a higher probability for huge transactions to be fraudulent.

Find the correlation between all the attributes in the data:

```
# Correlation matrix
correlation_metrics = data_df.corr()
fig = plt.figure(figsize = (14, 9))
sns.heatmap(correlation_metrics, vmax = .9, square = True)
plt.show()
```

The correlation matrix helps us understand the core relation between two attributes.

Find the outliers in the dataset:

An outlier is an observation that shows abnormal behaviour with respect to the rest of the dataset.

Modelling:

• 80% → 80% of the data will be used to train the model
• 20% → 20% to validate the model

```
x = data_df.drop(['Class'], axis = 1)  # drop the target variable
y = data_df['Class']
xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size = 0.2, random_state = 42)
```
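One tip for such a heavily imbalanced dataset (only about 0.17% fraud): passing stratify=y to train_test_split keeps the fraud/genuine ratio the same in both splits, so the tiny fraud class is not accidentally lost from the test set. For example, reusing x and y from above:

```
xtrain, xtest, ytrain, ytest = train_test_split(
    x, y,
    test_size=0.2,
    random_state=42,
    stratify=y,   # preserve the ~0.17% fraud ratio in train and test
)
```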
What is a Linear regression model?

Linear regression is a type of supervised algorithm used for finding linear relationships between independent and dependent variables; it finds the relationship between two or more continuous variables.

This algorithm is mostly used in forecasting and prediction. It shows the linear relationship between input and output variables, which is why it is called linear regression.

Equation for linear regression problems:

Y = MX + C

where Y = dependent variable, X = independent variable, M = slope, and C = intercept.

That is a brief introduction to linear regression; now let's start implementing and training the model.

Here we call the linear regression method from the scikit-learn library and fit the model.

```
from sklearn.linear_model import LinearRegression

linear = LinearRegression()
linear.fit(xtrain, ytrain)
```

Now comes the prediction part:

```
y_pred = linear.predict(xtest)
table = pd.DataFrame({"Actual": ytest, "Predicted": y_pred})
table
```

In this part we provide the test data to understand the model's performance.

As per the accuracy score, we can say our model's prediction is not good enough.

So we can try another algorithm to predict the fraud transactions:

Logistic Regression:

What is Logistic Regression?

Logistic regression is also a supervised learning classification algorithm. It is used to predict the probability of a target variable whose nature is discrete, so there are only two possible output classes.

• The dependent variable is binary in nature, so it can be either 1 (stands for success/yes) or 0 (stands for failure/no).
• Logistic regression is based on the sigmoid function
• Sigmoid function = 1 / (1 + e^-value)
• Implement and train the model:

```
logisticreg = LogisticRegression()
logisticreg.fit(xtrain, ytrain)
```

• Predict the new data using the Logistic Regression model:

```
y_pred = logisticreg.predict(xtest)
table = pd.DataFrame({"Actual": ytest, "Predicted": y_pred})
table
```

According to the accuracy score, logistic regression works pretty well, because predicting fraud transactions is a classification problem.

So, this is one method to predict fraud transactions, but there are many other methods and algorithms to solve this problem.

In this article, we covered what anomaly detection is and how we can use machine learning to perform such tasks. Here is a chance for you to get a free course about machine learning, click the banner to know more" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.871682,"math_prob":0.91571474,"size":17975,"snap":"2023-14-2023-23","text_gpt3_token_len":3631,"char_repetition_ratio":0.14556786,"word_repetition_ratio":0.022922637,"special_character_ratio":0.1998331,"punctuation_ratio":0.09574803,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99063843,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-25T13:29:28Z\",\"WARC-Record-ID\":\"<urn:uuid:5616f9f4-d7fa-4d49-854e-742ab8ba7d7e>\",\"Content-Length\":\"71894\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4e91c614-4045-4904-afb3-95a24455b0d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:6b6263e8-c6b8-4925-b1f4-12ddb00cf1d7>\",\"WARC-IP-Address\":\"172.67.207.89\",\"WARC-Target-URI\":\"https://viralsocialblog.com/anomaly-detection-using-machine-learning-how-machine-learning-can-enable-anomaly-detection/\",\"WARC-Payload-Digest\":\"sha1:ZLDDD4TLHSQMUUFSKXI3BCXDCDZI5OV5\",\"WARC-Block-Digest\":\"sha1:M2W7NISMSIL5FTFGZ3Z7DUZOCZRYTBTQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945333.53_warc_CC-MAIN-20230325130029-20230325160029-00656.warc.gz\"}"}
https://allaboutcad.com/dimensioning-basics-part-iii-create-accurate-dimensions-for-angles/
[ "The most comprehensive AutoCAD book around!", null, "", null, "# Dimensioning basics, Part III: Create accurate dimensions for angles

Measuring angles can be challenging. Sometimes two lines that create an angle don't intersect at all, or intersect, but not at their endpoints. For this reason, specifying an accurate vertex for the angle is important.

To dimension an angle, start the DIMANGULAR command. You can get there in two ways:

• Home tab> Annotation panel> Dimension drop-down list> Angular
• Annotate tab> Dimensions panel> Dimension drop-down list> Angular

You see the Select arc, circle, line, or <specify vertex>: prompt. You can respond in one of 4 ways:

## Select an arc

If you select an arc, DIMANGULAR dimensions the arc. The arc's center is the vertex of the angle. You can place the dimension either inside or outside the arc.", null, "## Select a circle

If you select a circle, DIMANGULAR uses the point you picked when selecting the circle as the first angle endpoint. The circle's center is the vertex. You are prompted for the second angle endpoint; pick a point on the circle. In this way, you are dimensioning an arc, which is just a portion of a circle.

Tip: Let's say that you draw a circle and then draw lines that cross the circle, as you see below. If you want to select the circle and try using the Intersection object snap, you end up selecting a line, because it's on top of the circle. That's because newer objects are on top of older objects. If you want to select the circle, select it, right-click it, and choose Draw Order> Bring to Front. Of course, you could get the same angle measurement by selecting the lines, but if you want to dimension the circle (perhaps you'll erase the lines later), bringing the circle to the front can help.", null, "## Select a line

If you select a line, DIMANGULAR's prompt asks you for a second line. If the lines don't intersect, the implied intersection is the vertex.", null, "## Press Enter to specify all the points of the angle

If you want to individually specify the vertex and the two angle endpoints, just press Enter. You're then prompted for the vertex, 1st angle endpoint and 2nd angle endpoint.", null, "## Dimensioning the outside angle

In the above example, the dimension measures the minor angle, the portion that is less than 180°. By simply moving the cursor below the vertex, you can measure the major angle, as you see here.", null, "Always use object snaps when specifying the vertex and the angle endpoints. This will ensure that you get an accurate measurement.

Remember that you can specify the decimal precision of a dimension in the dimension style. For angular dimensions, start the DIMSTYLE command to open the Dimension Style dialog box. Then, on the Primary Units tab, use the Angular Dimension section's Precision drop-down list.", null, "•", null, "rohit aggarwal" ]
[ null, "https://ws-na.amazon-adsystem.com/widgets/q", null, "https://ir-na.amazon-adsystem.com/e/ir", null, "https://allaboutcad.com/wp-content/uploads/2012/05/autocad-tips-dimensions-angular-1.png", null, "https://allaboutcad.com/wp-content/uploads/2012/05/autocad-tips-dimensions-angular-2.png", null, "https://allaboutcad.com/wp-content/uploads/2012/05/autocad-tips-dimensions-angular-3.png", null, "https://allaboutcad.com/wp-content/uploads/2012/05/autocad-tips-dimensions-angular-4-1024x414.png", null, "https://allaboutcad.com/wp-content/uploads/2012/05/autocad-tips-dimensions-angular-5.png", null, "https://allaboutcad.com/wp-content/uploads/2012/05/autocad-tips-dimensions-angular-6.png", null, "https://secure.gravatar.com/avatar/c83e095492faf6f94126fabfec0e1543", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8004824,"math_prob":0.8484599,"size":2733,"snap":"2019-51-2020-05","text_gpt3_token_len":598,"char_repetition_ratio":0.16672774,"word_repetition_ratio":0.0042643924,"special_character_ratio":0.21002561,"punctuation_ratio":0.121157326,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97544795,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,5,null,5,null,5,null,5,null,5,null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-26T02:45:01Z\",\"WARC-Record-ID\":\"<urn:uuid:3dc8e911-8354-4a2f-b155-05f0105ab10c>\",\"Content-Length\":\"112325\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:36a6584c-8ebe-4e6d-aed9-27766a3234b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:bd7bf385-fa0d-4644-ba91-2330f17dcd24>\",\"WARC-IP-Address\":\"134.209.49.15\",\"WARC-Target-URI\":\"https://allaboutcad.com/dimensioning-basics-part-iii-create-accurate-dimensions-for-angles/\",\"WARC-Payload-Digest\":\"sha1:LNTCUUUJ4LHBY3HS5BHSPOK6AWIGBHEC\",\"WARC-Block-Digest\":\"sha1:EPSFRYK43Z5VTWSX7IDZAZZMPTRK2GV7\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251684146.65_warc_CC-MAIN-20200126013015-20200126043015-00344.warc.gz\"}"}
https://mattermodeling.stackexchange.com/questions/8600/do-we-know-for-sure-that-all-atomic-and-molecular-wavefunctions-decay-exponentia
[ "# Do we know for sure that all atomic and molecular wavefunctions decay exponentially as r goes to infinity?

Slater type orbitals (STO) are considered to be more accurate than Gaussian type orbitals (GTO) for atomic and molecular QM calculations because - among other reasons - they decay with $$e^{-\alpha r}$$ as $$r \to \infty$$. But GTOs are more popular because they are easier to calculate with. GTOs decay with $$e^{-\alpha r^2}$$, so it's sometimes adequate to add diffuse functions to the GTO basis set to compensate for the Gaussian decay behaviour.

Also, exact hydrogen wavefunctions decay exponentially, hence the motivation for STOs.

I understand that the only boundary requirement for solving the Schrödinger equation for atoms and molecules in free space is that the wavefunction goes to zero as $$r \to \infty$$, but there are no a priori requirements for the way it decays as it does so.

My question is: do we have theoretical (ab initio) and/or experimental reasons to believe that all atomic and molecular wavefunctions decay like $$e^{-\alpha r}$$ as $$r \to \infty$$?

• The title change was just to make the post more easily searchable and accessible across the SE network, see here for some details. – Tyberius Jan 24 at 18:10
• @Tyberius, cool, will keep that in mind in the next ones, thanks (bad habits from Math.SE die hard :). – Arc Jan 25 at 5:53

I'll answer this question from the theoretical side. The exponential behavior follows simply from the Schrödinger equation. Consider the one-electron Schrödinger equation: $$(-\frac{1}{2}\nabla^2 + V(\mathbf{r}))\psi(\mathbf{r}) = \epsilon\psi(\mathbf{r}), \epsilon < 0$$ At spatial points that are very far away from the nucleus, $$V(\mathbf{r})\approx 0$$, so that the asymptotic solution is given by $$-\frac{1}{2}\nabla^2\psi(\mathbf{r}) = \epsilon\psi(\mathbf{r}), \epsilon < 0$$ This differential equation has basic solutions of the form $$\psi(\mathbf{r}) = Ce^{-\sqrt{-2\epsilon}\mathbf{k}\cdot\mathbf{r}}$$ for some unit vector $$\mathbf{k}$$. The real asymptotic behavior of $$\psi(\mathbf{r})$$ is thus a linear combination of these basic solutions. The linear combination may bring a polynomial prefactor to the exponential, but will never alter the exponent. Thus we have not only proved the exponential behavior, but also derived the correct exponent $$\alpha = \sqrt{-2\epsilon}$$. For a multi-electronic, non-interacting system, the overall decay rate is governed by the slowest decaying orbital, i.e. the HOMO.
Of course, the real wavefunction can only be described by a multi-electron Schrödinger equation. But we can work on the equivalent Kohn-Sham system and show that the Kohn-Sham wavefunction decays at a rate given by the Kohn-Sham HOMO energy. By Janak's theorem, the Kohn-Sham HOMO energy is just the negative of the ionization potential of the exact system. To see this, consider a huge ensemble of $$N$$ identical, non-interacting molecules. If we remove one electron from the ensemble and let the hole delocalize evenly between all the molecules, then as $$N\to +\infty$$, the electron removal has a negligible impact on the electron density of any molecule (and therefore the Kohn-Sham potential of each molecule).
Therefore under the Kohn-Sham framework we see that removing such an electron costs an energy of $$-\\epsilon_{\\mathrm{HOMO}}$$ (it does not matter whether the HOMO refers to that of the ensemble or that of a molecule, since their orbital energies are equal), since the electron is taken from an energy level whose energy is $$\\epsilon_{\\mathrm{HOMO}}$$ and the Hamiltonian is not changed in this process. On the other hand, from the perspective of the real system it is clear that the energy cost is equal to the first ionization energy of one of the molecules, $$I$$. Therefore we have $$\\epsilon_{\\mathrm{HOMO}} = -I$$, which means that the Kohn-Sham wavefunction decays like (again up to a possible polynomial prefactor; the precise determination of this polynomial prefactor is a much more difficult question) $$\\psi(\\mathbf{r}) = Ce^{-\\sqrt{2I}\\mathbf{k}\\cdot\\mathbf{r}}$$ Although the Kohn-Sham wavefunction is fictional, its density is equal to the true multielectronic density, and in order for the true density to have the same asymptotic behavior as the Kohn-Sham density, the true wavefunction must have the same asymptotic behavior as the Kohn-Sham wavefunction. Q.E.D." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86074567,"math_prob":0.99922454,"size":3015,"snap":"2022-05-2022-21","text_gpt3_token_len":776,"char_repetition_ratio":0.14579874,"word_repetition_ratio":0.03196347,"special_character_ratio":0.23416252,"punctuation_ratio":0.082720585,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99977463,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-19T22:00:22Z\",\"WARC-Record-ID\":\"<urn:uuid:5ac150cd-fa9f-4f24-9bf3-c1a4eb9f3749>\",\"Content-Length\":\"133690\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aedfac93-189e-45dd-8237-1389f30f5991>\",\"WARC-Concurrent-To\":\"<urn:uuid:8778e7cb-741e-49ea-85a3-34af4f2ab6b8>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://mattermodeling.stackexchange.com/questions/8600/do-we-know-for-sure-that-all-atomic-and-molecular-wavefunctions-decay-exponentia\",\"WARC-Payload-Digest\":\"sha1:KS3662DSKMRLKTD4VZZCVP5K65XV6Z7S\",\"WARC-Block-Digest\":\"sha1:VGFDFUTCZECHWHYQ5DUUOE5EGXZRZFRH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662530066.45_warc_CC-MAIN-20220519204127-20220519234127-00241.warc.gz\"}"}
https://docs.pgrouting.org/3.2/en/pgr_bdDijkstraCost.html
[ "# pgr_bdDijkstraCost

pgr_bdDijkstraCost — Returns the cost of the shortest path(s) using the Bidirectional Dijkstra algorithm.", null, "Boost Graph Inside

Availability:

• Version 3.2.0

• New proposed function:

• pgr_bdDijkstraCost(Combinations)

• Version 3.0.0

• Official function

• Version 2.5.0

• New proposed function

## Description

The main characteristics are:

• Process is done only on edges with positive costs.

• Values are returned when there is a path.

• When the starting vertex and ending vertex are the same, there is no path.

• The agg_cost of the non-included values (v, v) is 0

• When the starting vertex and ending vertex are different and there is no path:

• The agg_cost of the non-included values (u, v) is $$\infty$$

• Running time (worst case scenario): $$O((V \log V + E))$$

• For large graphs where there is a path between the starting vertex and ending vertex:

• It is expected to terminate faster than pgr_dijkstra

## Signatures

Summary

pgr_bdDijkstraCost(Edges SQL, from_vid, to_vid [, directed])
pgr_bdDijkstraCost(Edges SQL, from_vid, to_vids [, directed])
pgr_bdDijkstraCost(Edges SQL, from_vids, to_vid [, directed])
pgr_bdDijkstraCost(Edges SQL, from_vids, to_vids [, directed])
pgr_bdDijkstraCost(Edges SQL, Combinations SQL [, directed]) -- Proposed on v3.2

RETURNS SET OF (start_vid, end_vid, agg_cost)
OR EMPTY SET

Using default

pgr_bdDijkstraCost(Edges SQL, from_vid, to_vid)
RETURNS SET OF (start_vid, end_vid, agg_cost)
OR EMPTY SET

Example

From vertex $$2$$ to vertex $$3$$ on a directed graph

SELECT * FROM pgr_bdDijkstraCost(
'SELECT id, source, target, cost, reverse_cost FROM edge_table',
2, 3
);
start_vid | end_vid | agg_cost
-----------+---------+----------
2 | 3 | 5
(1 row)
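The examples on this page query a table called edge_table whose columns match the Edges query described under Inner queries below. A minimal table of that shape can be created as sketched here in SQL; the inserted rows are made-up toy values, so they will not reproduce the outputs shown in the examples (those use the pgRouting sample data):

```
CREATE TABLE edge_table (
    id           BIGSERIAL PRIMARY KEY,
    source       BIGINT,
    target       BIGINT,
    cost         FLOAT,   -- negative: the edge (source, target) does not exist
    reverse_cost FLOAT    -- negative: the edge (target, source) does not exist
);

INSERT INTO edge_table (source, target, cost, reverse_cost) VALUES
    (1, 2, 1,  1),
    (2, 3, 1, -1),
    (3, 4, 2,  2);
```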
### One to One

pgr_bdDijkstraCost(Edges SQL, from_vid, to_vid [, directed])
RETURNS SET OF (start_vid, end_vid, agg_cost)
OR EMPTY SET

Example

From vertex $$2$$ to vertex $$3$$ on an undirected graph

SELECT * FROM pgr_bdDijkstraCost(
'SELECT id, source, target, cost, reverse_cost FROM edge_table',
2, 3,
false
);
start_vid | end_vid | agg_cost
-----------+---------+----------
2 | 3 | 1
(1 row)

### One to Many

pgr_bdDijkstraCost(Edges SQL, from_vid, to_vids [, directed])
RETURNS SET OF (start_vid, end_vid, agg_cost)
OR EMPTY SET

Example

From vertex $$2$$ to vertices $$\{3, 11\}$$ on a directed graph

SELECT * FROM pgr_bdDijkstraCost(
'SELECT id, source, target, cost, reverse_cost FROM edge_table',
2, ARRAY[3, 11]);
start_vid | end_vid | agg_cost
-----------+---------+----------
2 | 3 | 5
2 | 11 | 3
(2 rows)

### Many to One

pgr_bdDijkstraCost(Edges SQL, from_vids, to_vid [, directed])
RETURNS SET OF (start_vid, end_vid, agg_cost)
OR EMPTY SET

Example

From vertices $$\{2, 7\}$$ to vertex $$3$$ on a directed graph

SELECT * FROM pgr_bdDijkstraCost(
'SELECT id, source, target, cost, reverse_cost FROM edge_table',
ARRAY[2, 7], 3);
start_vid | end_vid | agg_cost
-----------+---------+----------
2 | 3 | 5
7 | 3 | 6
(2 rows)

### Many to Many

pgr_bdDijkstraCost(Edges SQL, start_vids, end_vids [, directed])
RETURNS SET OF (start_vid, end_vid, agg_cost)
OR EMPTY SET

Example

From vertices $$\{2, 7\}$$ to vertices $$\{3, 11\}$$ on a directed graph

SELECT * FROM pgr_bdDijkstraCost(
'SELECT id, source, target, cost, reverse_cost FROM edge_table',
ARRAY[2, 7], ARRAY[3, 11]);
start_vid | end_vid | agg_cost
-----------+---------+----------
2 | 3 | 5
2 | 11 | 3
7 | 3 | 6
7 | 11 | 4
(4 rows)

### Combinations

pgr_bdDijkstraCost(Edges SQL, Combinations SQL [, directed])
RETURNS SET OF (start_vid, end_vid, agg_cost)
OR EMPTY SET

Example

Using a combinations table on a directed graph.

SELECT * FROM pgr_bdDijkstraCost(
'SELECT id, source, target, cost, reverse_cost FROM edge_table',
'SELECT * FROM ( VALUES (2, 3), (7, 11) ) AS t(source, target)');
start_vid | end_vid | agg_cost
-----------+---------+----------
2 | 3 | 5
7 | 11 | 4
(2 rows)

## Parameters

Parameter | Type | Default | Description
Edges SQL | TEXT | | Edges query as described below
Combinations SQL | TEXT | | Combinations query as described below
start_vid | BIGINT | | Identifier of the starting vertex of the path.
start_vids | ARRAY[BIGINT] | | Array of identifiers of starting vertices.
end_vid | BIGINT | | Identifier of the ending vertex of the path.
end_vids | ARRAY[BIGINT] | | Array of identifiers of ending vertices.
directed | BOOLEAN | true | When true the graph is considered Directed; when false the graph is considered as Undirected.

## Inner queries

### Edges query

Column | Type | Default | Description
id | ANY-INTEGER | | Identifier of the edge.
source | ANY-INTEGER | | Identifier of the first end point vertex of the edge.
target | ANY-INTEGER | | Identifier of the second end point vertex of the edge.
cost | ANY-NUMERICAL | | Weight of the edge (source, target). When negative: edge (source, target) does not exist, therefore it's not part of the graph.
reverse_cost | ANY-NUMERICAL | -1 | Weight of the edge (target, source). When negative: edge (target, source) does not exist, therefore it's not part of the graph.

Where:

ANY-INTEGER: SMALLINT, INTEGER, BIGINT

ANY-NUMERICAL: SMALLINT, INTEGER, BIGINT, REAL, FLOAT

### Combinations query

Column | Type | Default | Description
source | ANY-INTEGER | | Identifier of the first end point vertex of the edge.
target | ANY-INTEGER | | Identifier of the second end point vertex of the edge.

Where:

ANY-INTEGER: SMALLINT, INTEGER, BIGINT

## Result Columns

Returns SET OF (start_vid, end_vid, agg_cost)

Column | Type | Description
start_vid | BIGINT | Identifier of the starting vertex.
end_vid | BIGINT | Identifier of the ending vertex.
agg_cost | FLOAT | Aggregate cost from start_vid to end_vid.

## See Also

Indices and tables" ]
[ null, "https://docs.pgrouting.org/3.2/en/_images/boost-inside.jpeg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61767864,"math_prob":0.8844826,"size":5434,"snap":"2021-21-2021-25","text_gpt3_token_len":1619,"char_repetition_ratio":0.16206262,"word_repetition_ratio":0.4547619,"special_character_ratio":0.32315055,"punctuation_ratio":0.18980478,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98762554,"pos_list":[0,1,2],"im_url_duplicate_count":[null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-19T08:27:48Z\",\"WARC-Record-ID\":\"<urn:uuid:08931832-4960-4209-b7b7-89c7bcaa81f1>\",\"Content-Length\":\"31427\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:849460ab-c23d-4d4f-a5d4-3bd2764ef4ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:d1e27d38-1fa0-4edc-82ef-2769ba70bb49>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"https://docs.pgrouting.org/3.2/en/pgr_bdDijkstraCost.html\",\"WARC-Payload-Digest\":\"sha1:ZP2RZRKUDQGPRCEIVGGN2SHAZMNFAEUN\",\"WARC-Block-Digest\":\"sha1:YCCAEPMIRLH7HDCWTLLGW7XMQTRVXN67\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487647232.60_warc_CC-MAIN-20210619081502-20210619111502-00335.warc.gz\"}"}
https://answers.everydaycalculation.com/multiply-fractions/63-12-times-35-60
[ "Solutions by everydaycalculation.com

## Multiply 63/12 with 35/60

1st number: 5 3/12, 2nd number: 35/60

This multiplication involving fractions can also be rephrased as "What is 63/12 of 35/60?"

63/12 × 35/60 is 49/16.

#### Steps for multiplying fractions

1. Simply multiply the numerators and denominators separately:
   63/12 × 35/60 = (63 × 35)/(12 × 60) = 2205/720
2. After reducing the fraction, the answer is 49/16
3. In mixed form: 3 1/16

MathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83761346,"math_prob":0.9888912,"size":406,"snap":"2021-04-2021-17","text_gpt3_token_len":140,"char_repetition_ratio":0.12935324,"word_repetition_ratio":0.0,"special_character_ratio":0.39901477,"punctuation_ratio":0.10465116,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9771229,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-19T02:57:10Z\",\"WARC-Record-ID\":\"<urn:uuid:3088cdf0-0542-4739-9f7a-21ea11f03248>\",\"Content-Length\":\"6890\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:41779037-3473-4542-86c0-cecaccde1ad4>\",\"WARC-Concurrent-To\":\"<urn:uuid:96bfae1d-08f0-4c1a-824f-c5669106f6e7>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/multiply-fractions/63-12-times-35-60\",\"WARC-Payload-Digest\":\"sha1:TQ7INK5QAYXC6LJFS6WBFFZR4VEMOOM5\",\"WARC-Block-Digest\":\"sha1:G66SE3JTUKR33TABPUJA44LWS35VY4JU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703517559.41_warc_CC-MAIN-20210119011203-20210119041203-00184.warc.gz\"}"}
https://answers.everydaycalculation.com/add-fractions/1-7-plus-4-16
[ "# Answers\n\nSolutions by everydaycalculation.com\n\n## Add 1/7 and 4/16\n\n1/7 + 4/16 is 11/28.\n\n#### Steps for adding fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 7 and 16 is 112\n\nNext, find the equivalent fraction of both fractional numbers with denominator 112\n2. For the 1st fraction, since 7 × 16 = 112,\n1/7 = 1 × 16/7 × 16 = 16/112\n3. Likewise, for the 2nd fraction, since 16 × 7 = 112,\n4/16 = 4 × 7/16 × 7 = 28/112\n4. Add the two like fractions:\n16/112 + 28/112 = 16 + 28/112 = 44/112\n5. 44/112 simplified gives 11/28\n6. So, 1/7 + 4/16 = 11/28\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:\nAndroid and iPhone/ iPad\n\n#### Add Fractions Calculator\n\n+\n\n© everydaycalculation.com" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.60370344,"math_prob":0.99514526,"size":347,"snap":"2021-04-2021-17","text_gpt3_token_len":164,"char_repetition_ratio":0.20991254,"word_repetition_ratio":0.0,"special_character_ratio":0.52737755,"punctuation_ratio":0.05376344,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9983317,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-21T13:20:16Z\",\"WARC-Record-ID\":\"<urn:uuid:be527650-76bf-4fc6-8c9a-807bda70ddad>\",\"Content-Length\":\"8370\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b547ea9d-a523-4c03-be57-93ca1985f6df>\",\"WARC-Concurrent-To\":\"<urn:uuid:e868f08b-1f68-4e8d-9e19-7faa3f5ba587>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/1-7-plus-4-16\",\"WARC-Payload-Digest\":\"sha1:Z24YXXV4WX4ER67VF7JMDODYJPCMM5ZF\",\"WARC-Block-Digest\":\"sha1:22PQP6DB2QFH2TPSRXP3AL6KI3374D5U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039544239.84_warc_CC-MAIN-20210421130234-20210421160234-00440.warc.gz\"}"}
http://ptmts.org.pl/jtam/index.php/jtam/article/view/v45n1p119/0
[ "Journal of Theoretical\nand Applied Mechanics\n\n45, 1, pp. 119-131, Warsaw 2007\n\n### Chaotic vibration of an autoparametrical system with a non-ideal source of power\n\nThis paper studies the dynamical coupling between energy sources and the response of a two degrees of freedom autoparametrical system, when the excitation comes from an electric motor (with unbalanced mass $m_0$), which works with limited power supply. The investigated system consists of a pendulum of the length $l$ and mass $m$, and a body of mass $M$ suspended on a flexible element. In this case, the excitation has to be expressed by an equation describing how the energy source supplies the energy to the system. The non-ideal source of power adds one degree of freedom, which makes the system have three degrees of freedom. The system has been searched for known characteristics of the energy source (DC motor). The equations of motion have been solved numerically. The influence of motor speed on the phenomenon of energy transfer has been studied. Near the internal and external resonance region, except for different kinds of periodic vibration, chaotic vibration has been observed. For characterizing an irregular chaotic response, bifurcation diagrams and time histories, power spectral densities, Poincaré maps and maximal exponents of Lyapunov have been developed." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8944669,"math_prob":0.9572038,"size":1466,"snap":"2020-45-2020-50","text_gpt3_token_len":314,"char_repetition_ratio":0.11491108,"word_repetition_ratio":0.044247787,"special_character_ratio":0.20736699,"punctuation_ratio":0.118773945,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96076506,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-27T18:39:05Z\",\"WARC-Record-ID\":\"<urn:uuid:4af216ef-83cb-4a4e-a380-6c8e4b788e2d>\",\"Content-Length\":\"18641\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:97047348-7fec-4f92-b411-f9986979d852>\",\"WARC-Concurrent-To\":\"<urn:uuid:35aa1490-8e0c-4add-836a-c735b98ecacf>\",\"WARC-IP-Address\":\"212.85.123.71\",\"WARC-Target-URI\":\"http://ptmts.org.pl/jtam/index.php/jtam/article/view/v45n1p119/0\",\"WARC-Payload-Digest\":\"sha1:2MISCZTGXVTPRJJDKDXKHFTD4MWYOB4A\",\"WARC-Block-Digest\":\"sha1:7TYEAAVMYISV6YVYXRIQMM4JTS7V23FH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107894426.63_warc_CC-MAIN-20201027170516-20201027200516-00610.warc.gz\"}"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-9-quadratic-relations-and-conic-sections-9-2-graph-and-write-equations-of-parabolas-9-2-exercises-mixed-review-page-625/73
[ "## Algebra 2 (1st Edition)\n\nUsing the distance formula, we find: $$\\sqrt{\\left(-4.6-2.3\\right)^2+\\left(-1.4-1.1\\right)^2}\\\\ \\sqrt{53.87} \\\\ 7.34$$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7649476,"math_prob":0.99999285,"size":395,"snap":"2022-27-2022-33","text_gpt3_token_len":115,"char_repetition_ratio":0.10230179,"word_repetition_ratio":0.0,"special_character_ratio":0.30886075,"punctuation_ratio":0.14444445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9768836,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-08T05:27:13Z\",\"WARC-Record-ID\":\"<urn:uuid:bf5d19a3-2e32-48e4-8f2e-f75d1562aec0>\",\"Content-Length\":\"99316\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d81e8fb3-739b-423e-a08a-1b113b53b740>\",\"WARC-Concurrent-To\":\"<urn:uuid:90325ba2-bff5-4d82-b8dc-ec737f72174c>\",\"WARC-IP-Address\":\"34.206.232.47\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-9-quadratic-relations-and-conic-sections-9-2-graph-and-write-equations-of-parabolas-9-2-exercises-mixed-review-page-625/73\",\"WARC-Payload-Digest\":\"sha1:WF2BSLVZUF2DZWUTWIKHWVQE572WTAHD\",\"WARC-Block-Digest\":\"sha1:7HANCX4NWWJPVKPLYFKNY62FZ4N6LCN6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570765.6_warc_CC-MAIN-20220808031623-20220808061623-00384.warc.gz\"}"}
http://forums.wolfram.com/mathgroup/archive/2007/Feb/msg00529.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Re: Map function which adds last two numbers of a list\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg73578] Re: Map function which adds last two numbers of a list\n• From: \"dimitris\" <dimmechan at yahoo.com>\n• Date: Thu, 22 Feb 2007 04:34:09 -0500 (EST)\n• References: <ergqrb\\$jg0\\[email protected]>\n\n```Hi.\n\nAlong many other alternatives you can try\n\nIn := z /. {x_, y_, z_, w_} :> {x, y, z + w}\nOut = {{1, 4, 11}, {7, 8, 10}, {1, 2, 7}}\n\nIn := ({#1[], #1[], #1[] + #1[]} &) /@ z\nOut = {{1, 4, 11}, {7, 8, 10}, {1, 2, 7}}\n\nIn:=\n{z[[#, 1]], z[[#, 2]], (Plus @@@ Reap[{Sow[#[]], Sow[#[]]} & /@\nz][])[[#]]} & /@ Range\n\nOut=\n{{1, 4, 11}, {7, 8, 10}, {1, 2, 7}}\n\nHowever pay special attention to what Andrzej Kozlowski mentions you\nin his post!\n\nDimitris\n\n=CF/=C7 Christopher Pike =DD=E3=F1=E1=F8=E5:\n> Hi,\n> Consider this:\n>\n> z = {{1,4,5,6},{7,8,9,1},{1,2,4,3}}\n>\n> I'd like to Map a function onto z which would replace the last two items\n> with their sum:\n>\n> {{1,4,11},{7,8,10},{1,2,7}}\n>\n> I could easily use the Table command to construct this new table, but it\n> would be nicer if I new how to Map some function on z that would produce\n> the same result.\n>\n> Any suggestions.\n>\n> Thanks, Chris Pike\n\n```\n\n• Prev by Date: RE: Re: Quick integral.\n• Next by Date: NonLinearRegression Weights\n• Previous by thread: Re: Map function which adds last two numbers of a list\n• Next by thread: RE: Diferent solution of integral in versions 4 and 5..." ]
[ null, "http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif", null, "http://forums.wolfram.com/mathgroup/images/head_archive.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/2.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/7.gif", null, "http://forums.wolfram.com/mathgroup/images/search_archive.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75742143,"math_prob":0.94785786,"size":1238,"snap":"2019-35-2019-39","text_gpt3_token_len":497,"char_repetition_ratio":0.08752026,"word_repetition_ratio":0.1402715,"special_character_ratio":0.5024233,"punctuation_ratio":0.26397514,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9510728,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-23T04:16:03Z\",\"WARC-Record-ID\":\"<urn:uuid:d9e16937-63b3-465f-aea4-9e7251d48e1b>\",\"Content-Length\":\"43026\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4fdc4c09-7f03-400b-b1a7-18081eacc97a>\",\"WARC-Concurrent-To\":\"<urn:uuid:fdb6d6f0-a057-4e10-9bc8-8d80bdcb4ec1>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2007/Feb/msg00529.html\",\"WARC-Payload-Digest\":\"sha1:B5YF6YSFRM7YCCRCT7E7DNRJPORMPUPC\",\"WARC-Block-Digest\":\"sha1:O3R4YMOHR6CKRIBUVF34SIZ4XXVU3Z7X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514575860.37_warc_CC-MAIN-20190923022706-20190923044706-00312.warc.gz\"}"}
http://www.phy.ntnu.edu.tw/ntnujava/msg.php?id=4142
[ "If the air resistance is include, the net force will be F= mg sin? - ? mg cos? -(1/2) CpAv[sup]2[/sup]\nWhen the terminal velocity is reached, it requires F=0\nSo (1/2) CpAv[sup]2[/sup]=mg sin? - ? mg cos? =mg (sin? - ? cos? )\nv[sup]2[/sup]=2 mg (sin? - ? cos? )/ (CpA)\nFor  ?=0.1 and 20 degree (?=0.349),  (sin? - ? cos? )=0.248, air density p=1.2 kg/m[sup]3[/sup]\nwhich give us v[sup]2[/sup]=4.05 * m/(C A)\nThe drag coefficient C for a skier is between 1.0-1.1 (http://en.wikipedia.org/wiki/Drag_coefficient)\nThe area is estimated to be A=0.5*1.7*cos(?)=0.8 then we will have v[sup]2[/sup]=5.06*m\nFor a skier with mass (80kg) it will give us v=20.1m/s=72.5 km/h.\nIt is very close to your value 80 km/h.\nI will add this drag force to the simulation and update it soon!\n\nC*A=0.11 for an upright body, minimum frontal area\nC*A=0.84 for a horizontal body,maximum frontal area\nC*A=0.46 for a body in tuck position" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.70256776,"math_prob":0.9830498,"size":905,"snap":"2019-26-2019-30","text_gpt3_token_len":337,"char_repetition_ratio":0.13318536,"word_repetition_ratio":0.013513514,"special_character_ratio":0.3922652,"punctuation_ratio":0.18253969,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9977879,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-17T17:22:20Z\",\"WARC-Record-ID\":\"<urn:uuid:68f98cd0-5a6d-43a7-a24f-a2c99b598be6>\",\"Content-Length\":\"1317\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:64478504-7d0d-471e-aa9e-93a4232dff3b>\",\"WARC-Concurrent-To\":\"<urn:uuid:8eeb90dc-0e0d-4905-b641-1e1d8b0d4c62>\",\"WARC-IP-Address\":\"140.122.141.1\",\"WARC-Target-URI\":\"http://www.phy.ntnu.edu.tw/ntnujava/msg.php?id=4142\",\"WARC-Payload-Digest\":\"sha1:PB5RKTCYQ3AXRNUJ4CAMBVLQPMRIOQGB\",\"WARC-Block-Digest\":\"sha1:RZI3JIDSHO2YOUVXN6PYV6BY5NSRTDVG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998513.14_warc_CC-MAIN-20190617163111-20190617185111-00017.warc.gz\"}"}
https://discourse.vidvox.net/t/help-with-shaders/1359
[ "", null, "#1\n\nHi,\nas an exercise I wish to recreate this pictures.\nI have the ring and circles function, and I know that I need to use a\nFor loop function.\n\nBut I’m a bit lost on how to divide and place the circles on the ring !?\n\nAny help would be appreciate.\n\nBest\n\n0 Likes\n\n#2\n\nOr maybe some examples which I can studies ?\n\n0 Likes\n\n#3\n\nHello!\nThere are a few ways you can do this! Its a great problem.\nYou need to read up on polar coordinate system.\nsomething that you can use is this modPolar function from mercury.sexy/hg_sdf\n\n``````// Repeat around the origin by a fixed angle.\n// For easier use, num of repetitions is use to specify the angle.\nfloat pModPolar(inout vec2 p, float repetitions) {\nfloat angle = 2*(3.14152)/repetitions;\nfloat a = atan(p.y, p.x) + angle/2.;\nfloat r = length(p);\nfloat c = floor(a/angle);\na = mod(a,angle) - angle/2.;\np = vec2(cos(a), sin(a))*r;\n// For an odd number of repetitions, fix cell index of the cell in -x direction\n// (cell index would be e.g. -5 and 5 in the two halves of the cell):\nif (abs(c) >= (repetitions/2)) c = abs(c);\nreturn c;\n}\n``````\n\nWhat this does is it repeats a section of space around a center, creating a kaleidoscope effect. It returns a number unique to the slice, so you can use that to change the color of the circle.\nYou will have to find where to offset the circle so it can show up in the reflection.\nHopefully you can find this helpful!\n\n2 Likes\n\n#4\n\nI integrated the function in my shaders, but as I get float with this function and need a vec2 for the position of my circle I’m missing an element\n\nAnd Should I use a loop function somewhere ?\n\nThe aim of this shaders is to create an Euclidean Circle like this one : https://www.researchgate.net/figure/The-six-fundamental-African-and-Latin-American-rhythms-which-all-have-equal-sum-of_fig1_237419500\n\n1 Like\n\n#5\n\nSo this function uses the `inout` declaration for the position parameter, meaning it changes the vec2 in place. What it returns is a number unique to the slice, so you can use that to change the color of the circle. Do you understand what the modPolar function does? Just checking bc I can help if not :)\n\nThis modPolar function method doesnt need a for loop to get the result. Its modding the space i.g. repeating space. But there is a different way to do this that uses a for loop, i personally think its a little harder that way but I could be wrong. You could try it and post your results here & i can look at it.\n\nThis problem is not easy- are you working on it to learn GLSL? if you are looking for a way to get that result GLSL isnt the fastest way there. Id recommend a more imperative language like processing, p5js or something like that.\n\n1 Like\n\n#6" ]
[ null, "https://vidvox-discourses-uploads.s3.dualstack.us-east-1.amazonaws.com/original/1X/42f24047eb66dbafc769869dc2f933d5598a4872.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77060866,"math_prob":0.76956767,"size":1056,"snap":"2022-27-2022-33","text_gpt3_token_len":287,"char_repetition_ratio":0.12737642,"word_repetition_ratio":0.0,"special_character_ratio":0.28598484,"punctuation_ratio":0.14767933,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97065467,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-18T19:10:45Z\",\"WARC-Record-ID\":\"<urn:uuid:58df4330-d74d-4bf0-99a9-94b440863379>\",\"Content-Length\":\"18303\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dd2843d5-c71e-49a4-aac7-8bf083ed1db0>\",\"WARC-Concurrent-To\":\"<urn:uuid:27d9beb6-8d3b-4af9-9602-116993f161bb>\",\"WARC-IP-Address\":\"54.235.247.140\",\"WARC-Target-URI\":\"https://discourse.vidvox.net/t/help-with-shaders/1359\",\"WARC-Payload-Digest\":\"sha1:AL6FVT6TN3X35STXIW2MHNQGXT6J4Y2E\",\"WARC-Block-Digest\":\"sha1:I6NMDXJBY67O4GTVUGFZVGMGZTY5ZUWD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573399.40_warc_CC-MAIN-20220818185216-20220818215216-00762.warc.gz\"}"}
http://conversion.org/time/year-common/callippic-cycle
[ "# year (common) to Callippic cycle conversion\n\nConversion number between year (common) [a, y, or yr] and Callippic cycle is 0.013148888648727. This means, that year (common) is smaller unit than Callippic cycle.\n\n### Contents [show][hide]", null, "Switch to reverse conversion:\nfrom Callippic cycle to year (common) conversion\n\n### Enter the number in year (common):\n\nDecimal Fraction Exponential Expression\n [a, y, or yr]\neg.: 10.12345 or 1.123e5\n\nResult in Callippic cycle\n\n?\n precision 0 1 2 3 4 5 6 7 8 9 [info] Decimal: Exponential:\n\n### Calculation process of conversion value\n\n• 1 year (common) = (exactly) (31536000) / (2398377600) = 0.013148888648727 Callippic cycle\n• 1 Callippic cycle = (exactly) (2398377600) / (31536000) = 76.052054794521 year (common)\n• ? year (common) × (31536000  (\"s\"/\"year (common)\")) / (2398377600  (\"s\"/\"Callippic cycle\")) = ? Callippic cycle\n\n### High precision conversion\n\nIf conversion between year (common) to second and second to Callippic cycle is exactly definied, high precision conversion from year (common) to Callippic cycle is enabled.\n\nDecimal places: (0-800)\n\nyear (common)\nResult in Callippic cycle:\n?\n\n### year (common) to Callippic cycle conversion chart\n\n Start value: [year (common)] Step size [year (common)] How many lines? (max 100)\n\nvisual:\nyear (common)Callippic cycle\n00\n100.13148888648727\n200.26297777297453\n300.3944666594618\n400.52595554594906\n500.65744443243633\n600.78893331892359\n700.92042220541086\n801.0519110918981\n901.1833999783854\n1001.3148888648727\n1101.4463777513599\nCopy to Excel\n\n## Multiple conversion\n\nEnter numbers in year (common) and click convert button.\nOne number per line.\n\nConverted numbers in Callippic cycle:\nClick to select all\n\n## Details about year (common) and Callippic cycle units:\n\nConvert Year (common) to other unit:\n\n### year (common)\n\nDefinition of year (common) unit: 365 d. The year commonly has 365 days (except the leap year)\n\nConvert Callippic cycle to other unit:\n\n### Callippic cycle\n\nDefinition of Callippic cycle unit: ≡ 76 years (Julian). One callippic cycle is equal to 441 mo (hollow) + 499 mo (full) = 76 a of 365.25 d = 2.3983776 Gs", null, "← Back to Time units", null, "" ]
[ null, "http://conversion.org/images/switch.png", null, "http://conversion.org/images/time.png", null, "http://conversion.org/menufiles/top.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7233325,"math_prob":0.5484383,"size":1130,"snap":"2020-24-2020-29","text_gpt3_token_len":371,"char_repetition_ratio":0.21403196,"word_repetition_ratio":0.0,"special_character_ratio":0.45575222,"punctuation_ratio":0.15165877,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9765112,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-25T19:34:30Z\",\"WARC-Record-ID\":\"<urn:uuid:f949c599-b70e-4818-b653-95d15511c78f>\",\"Content-Length\":\"27429\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6f1bdd53-8289-4b70-a5d9-e4d536f700e0>\",\"WARC-Concurrent-To\":\"<urn:uuid:11746c6f-cde3-4297-9057-e13835b2889e>\",\"WARC-IP-Address\":\"204.12.239.146\",\"WARC-Target-URI\":\"http://conversion.org/time/year-common/callippic-cycle\",\"WARC-Payload-Digest\":\"sha1:AT6K7THJJFFSDCPUSBBVYNI5RG354RJI\",\"WARC-Block-Digest\":\"sha1:VSSS4VAUW2YI5OLJI5VOHYCXWBBDNQZW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347389355.2_warc_CC-MAIN-20200525192537-20200525222537-00042.warc.gz\"}"}
https://codeauri.com/basic-programming/cpp/calculate-compound-interest-in-cpp/
[ "# WAP in C++ to enter P, T, R to calculate CI\n\nThe Program in C++ to enter P, T, R to calculate Compound Interest is given below:\n\n``````#include <iostream>\n#include <cmath>\nusing namespace std;\n\nint main() {\nfloat P, T, R, CI;\n\ncout<< \"Hello Codeauri Family, Please enter P,T,R sequencely:\\n\";\ncin>>P>>T>>R;\n\nCI = P * (pow((1 + (R/100)),T));\n\ncout << \"Compound Interest is: \" << CI << endl;\n\nreturn 0;\n}\n\n``````\n\n## Output:\n\nHello Codeauri Family, Please enter P,T,R sequencely:\n4\n5\n6\nCompound Interest is: 5.3529\n\n## Pro-Tips💡\n\nThis program calculates the compound interest for a given principle amount (P), time period (T) and rate of interest (R).\n\nThe user is prompted to enter the values for P, T, and R using the cin function.\n\nThe formula for compound interest is P(1+R/100)^T, where P is the principle amount, T is the time period and R is the rate of interest.\n\nTo use the power function ‘pow’ I added #include at the beginning of the code. This value is stored in the variable CI.\n\nFinally, the program prints out the calculated compound interest using the cout function.\n\n### Learn C-Sharp ↗\n\nC-sharp covers every topic to learn about C-Sharp thoroughly.\n\n### Learn C Programming ↗\n\nC-Programming covers every topic to learn about C-Sharp thoroughly.\n\n### Learn C++ Programming↗\n\nC++ covers every topic to learn about C-Sharp thoroughly.", null, "Codeauri is Code Learning Hub and Community for every Coder to learn Coding by navigating Structurally from Basic Programming to Front-End Development, Back-End Development to Database, and many more.\n\n## C# Program to Find Sum of Rows & Columns of a Matrix\n\nThe Program in C# Program to Find Sum of Rows & Columns of a Matrix is given below: Output: Hello Codeauri Family,enter the number of rows and columns…\n\n## C# Program to Calculate Determinant of Given Matrix\n\nThe Program in C# Program to Calculate Determinant of Given Matrix is given below: Output: Hello Codeauri Family, enter the number of rows and columns of the matrix…\n\n## C# Program to Find Sum of right Diagonals of a Matrix\n\nThe Program in C# Program to Find Sum of right Diagonals of a Matrix is given below: Output: Hello Codeauri Family, enter the number of rows and columns…\n\n## C# Program to Find Transpose of Given Matrix\n\nThe Program in C# Program to Find Transpose of Given Matrix is given below: Output: Hello Codeauri Family, enter the number of rows and columns in the matrix:22Enter…\n\n## C# Program for Multiplication of two square Matrices\n\nThe Program in C# Program for Multiplication of two square Matrices is given below: Output: Hello Codeauri Family, enter the number of rows/columns in the matrices:2Enter the elements…\n\n## C# Program to Delete Element at Desired position From Array\n\nThe Program in C# Program to Delete Element at Desired position From Array is given below: Output: Hello Codeauri Family, enter the number of elements in the array:4Enter…\n\nYour Journey into Code Begins Now: Discover the Wonders of Basic Programming\n\nX" ]
[ null, "https://secure.gravatar.com/avatar/43a85a234904f352aab99c9ff251811b", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8184406,"math_prob":0.7976839,"size":1250,"snap":"2023-40-2023-50","text_gpt3_token_len":332,"char_repetition_ratio":0.12279294,"word_repetition_ratio":0.077294685,"special_character_ratio":0.2688,"punctuation_ratio":0.15530303,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99105847,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T17:52:16Z\",\"WARC-Record-ID\":\"<urn:uuid:4dcdad54-bd56-422c-8820-b27afcc4b450>\",\"Content-Length\":\"134896\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ade318d5-d74e-473e-b392-b774a6276b41>\",\"WARC-Concurrent-To\":\"<urn:uuid:e85172a9-e014-45cd-9a7f-40accfe56b48>\",\"WARC-IP-Address\":\"172.67.147.119\",\"WARC-Target-URI\":\"https://codeauri.com/basic-programming/cpp/calculate-compound-interest-in-cpp/\",\"WARC-Payload-Digest\":\"sha1:QDHN2TGPGF7Z46QBQW57EHUHQ5L2GUID\",\"WARC-Block-Digest\":\"sha1:M4V5ALYJ375E55X63VHWYHXIXEK5KH37\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511170.92_warc_CC-MAIN-20231003160453-20231003190453-00023.warc.gz\"}"}
https://books.google.com.jm/books?qtid=c7e7878&lr=&id=WhMEAAAAQAAJ&sa=N&start=190
[ "", null, "If a straight line be divided into two equal parts, and also into two unequal parts; the rectangle contained by the unequal parts, together with the square of the line between the points of section, is equal to the square of half the line.", null, "", null, "" ]
[ null, "https://books.google.com.jm/googlebooks/quote_l.gif", null, "https://books.google.com.jm/googlebooks/quote_r.gif", null, "https://books.google.com.jm/books/content", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91629636,"math_prob":0.99983555,"size":355,"snap":"2023-40-2023-50","text_gpt3_token_len":88,"char_repetition_ratio":0.13960114,"word_repetition_ratio":0.0,"special_character_ratio":0.26197183,"punctuation_ratio":0.1369863,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9646344,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T20:58:06Z\",\"WARC-Record-ID\":\"<urn:uuid:12e412eb-08b3-4002-9a5c-e4d0803f597f>\",\"Content-Length\":\"14560\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d96757ad-92ac-4e35-bbf6-4476c9e24596>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a9a3eb3-e519-4244-b589-c3b25c71f5d1>\",\"WARC-IP-Address\":\"172.253.63.100\",\"WARC-Target-URI\":\"https://books.google.com.jm/books?qtid=c7e7878&lr=&id=WhMEAAAAQAAJ&sa=N&start=190\",\"WARC-Payload-Digest\":\"sha1:5D2PQN4N6ANHFVYV6N6F2CBQK3WETRZ4\",\"WARC-Block-Digest\":\"sha1:VP57Y5QKAWAVG4VQETBFXIPYMBUL5DCF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510085.26_warc_CC-MAIN-20230925183615-20230925213615-00474.warc.gz\"}"}
https://www.kyaliapps.com/search/label/Force
[ "Showing posts with label Force. Show all posts\nShowing posts with label Force. Show all posts\n\n### Physics Ganaka\n\nPhysics Ganaka\nCalculators in Physics are becoming big part of physics learning, because unlike mathematics where we are only doing calculations, in Physics we need start with principles, theory and derivations and calculations. So a calculator comes in handy, especially for doing trivial ones. Physics Ganaka is an attempt at that. Physics are useful to check at the values you arrive at doing the calculations manually.\n\nPhysics Ganaka 1.2\nPhysics Ganaka is a Basic Physics calculator, Physics Ganaka calculates most of the fundamental calculations of fundamental physics.", null, "It Calculates\n• Motion – Displacement, Momentum, Impulse, velocity,Force and also circular motion.\n• Work, Kinetic Energy, Potential Energy, Power in Gravitational, Rotational and Elastic\n• Matter – Density, Specific Gravity, Pressure, Specific Heat , Gas Laws\n• Electricity – Current, Volt, Resistance, Power, Coulomb's Law\n• Waves – Wave Speed, Wave length, Period, Frequency, Doppler Effect.\n• Gravitation – Force, Acceleration, Potential Energy, Weight.\n• Thermal – Specific Heat, Heat Transfer, ideal Gas Laws, Enthalpy, Helmholtz Energy\nKeycontrol\nUp and Down Arrow keys to Browse Menu. Select or center key to select the Option. Touch screen choose the option and choose select. On the Parameter screen , you enter the data and select calculate. choose exit to get-back to menu. select quit to quit application.\n\nSupporting Phones\nAll Java Enabled phone and Blackberry phones.\nBoth Touch and Non- Touch phones" ]
[ null, "https://4.bp.blogspot.com/_Dxmk6DuXP0I/TSbzZs1GvXI/AAAAAAAAAA4/LSvM3QjfTNI/s200/Physics+Ganaka+Menu.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7299384,"math_prob":0.9360419,"size":1129,"snap":"2019-51-2020-05","text_gpt3_token_len":256,"char_repetition_ratio":0.123555556,"word_repetition_ratio":0.023809524,"special_character_ratio":0.20460585,"punctuation_ratio":0.2039801,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95946366,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-20T18:01:56Z\",\"WARC-Record-ID\":\"<urn:uuid:799d8236-6a87-4417-88e2-639da50a3268>\",\"Content-Length\":\"103829\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:74c3bf3a-37b4-4ee8-a677-466ff7b99cac>\",\"WARC-Concurrent-To\":\"<urn:uuid:fa35072d-a06f-4aa3-8787-18fd71457fbd>\",\"WARC-IP-Address\":\"172.217.7.179\",\"WARC-Target-URI\":\"https://www.kyaliapps.com/search/label/Force\",\"WARC-Payload-Digest\":\"sha1:XC7X4PHIJARBJC5IYNXKLHTKXYPT34XK\",\"WARC-Block-Digest\":\"sha1:3ZCXUFVXBBPYYBY4KHSX4TE2IEVXHZFN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250599718.13_warc_CC-MAIN-20200120165335-20200120194335-00378.warc.gz\"}"}
http://git.dev.opencascade.org/gitweb/?p=occt.git;a=blob;f=src/Shaders/Shaders_PBREnvBaking_fs.pxx;h=bc995ac8e7cc102e3fdad8bfd8207ad3d2ccbf22;hb=67312b7991c945e66d4fa1d3a6f8e02dbabc9b5a
[ "1 // This file has been automatically generated from resource file src/Shaders/PBREnvBaking.fs\n3 static const char Shaders_PBREnvBaking_fs[] =\n4   \"THE_SHADER_IN vec3 ViewDirection; //!< direction of fetching from environment cubemap\\n\"\n5   \"\\n\"\n6   \"uniform int uSamplesNum;     //!< number of samples in Monte-Carlo integration\\n\"\n7   \"uniform int uCurrentLevel;   //!< current level of specular IBL map (ignored in case of diffuse map's processing)\\n\"\n8   \"uniform int uEnvMapSize;     //!< one edge's size of source environtment map's zero mipmap level\\n\"\n9   \"uniform int uYCoeff;         //!< coefficient of Y controlling horizontal flip of cubemap\\n\"\n10   \"uniform int uZCoeff;         //!< coefficient of Z controlling vertical flip of cubemap\\n\"\n11   \"uniform samplerCube uEnvMap; //!< source of baking (environment cubemap)\\n\"\n12   \"\\n\"\n13   \"//! Returns coordinates of point theNumber from Hammersley point set having size theSize.\\n\"\n14   \"vec2 hammersley (in int theNumber,\\n\"\n15   \"                 in int theSize)\\n\"\n16   \"{\\n\"\n17   \"  int aDenominator = 2;\\n\"\n18   \"  int aNumber = theNumber;\\n\"\n19   \"  float aVanDerCorput = 0.0;\\n\"\n20   \"  for (int i = 0; i < 32; ++i)\\n\"\n21   \"  {\\n\"\n22   \"    if (aNumber > 0)\\n\"\n23   \"    {\\n\"\n24   \"      aVanDerCorput += float(aNumber % 2) / float(aDenominator);\\n\"\n25   \"      aNumber /= 2;\\n\"\n27   \"    }\\n\"\n28   \"  }\\n\"\n29   \"  return vec2(float(theNumber) / float(theSize), aVanDerCorput);\\n\"\n30   \"}\\n\"\n31   \"\\n\"\n32   \"//! This function does importance sampling on hemisphere surface using GGX normal distribution function\\n\"\n33   \"//! in tangent space (positive z axis is surface normal direction).\\n\"\n34   \"vec3 importanceSample (in vec2  theHammersleyPoint,\\n\"\n35   \"                       in float theRoughness)\\n\"\n36   \"{\\n\"\n37   \"  float aPhi = PI_2 * theHammersleyPoint.x;\\n\"\n38   \"  theRoughness *= theRoughness;\\n\"\n39   \"  theRoughness *= theRoughness;\\n\"\n40   \"  float aCosTheta = sqrt((1.0 - theHammersleyPoint.y) / (1.0 + (theRoughness - 1.0) * theHammersleyPoint.y));\\n\"\n41   \"  float aSinTheta = sqrt(1.0 - aCosTheta * aCosTheta);\\n\"\n42   \"  return vec3(aSinTheta * cos(aPhi),\\n\"\n43   \"              aSinTheta * sin(aPhi),\\n\"\n44   \"              aCosTheta);\\n\"\n45   \"}\\n\"\n46   \"\\n\"\n47   \"//! This function uniformly generates samples on whole sphere.\\n\"\n48   \"vec3 sphereUniformSample (in vec2 theHammersleyPoint)\\n\"\n49   \"{\\n\"\n50   \"  float aPhi = PI_2 * theHammersleyPoint.x;\\n\"\n51   \"  float aCosTheta = 2.0 * theHammersleyPoint.y - 1.0;\\n\"\n52   \"  float aSinTheta = sqrt(1.0 - aCosTheta * aCosTheta);\\n\"\n53   \"  return vec3(aSinTheta * cos(aPhi),\\n\"\n54   \"              aSinTheta * sin(aPhi),\\n\"\n55   \"              aCosTheta);\\n\"\n56   \"}\\n\"\n57   \"\\n\"\n58   \"//! Transforms resulted sampled direction from tangent space to world space considering the surface normal.\\n\"\n59   \"vec3 fromTangentSpace (in vec3 theVector,\\n\"\n60   \"                       in vec3 theNormal)\\n\"\n61   \"{\\n\"\n62   \"  vec3 anUp = (abs(theNormal.z) < 0.999) ? 
vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);\\n\"\n63   \"  vec3 anX = normalize(cross(anUp, theNormal));\\n\"\n64   \"  vec3 anY = cross(theNormal, anX);\\n\"\n65   \"  return anX * theVector.x + anY * theVector.y + theNormal * theVector.z;\\n\"\n66   \"}\\n\"\n67   \"\\n\"\n68   \"const float aSHBasisFuncCoeffs = float\\n\"\n69   \"(\\n\"\n70   \"  0.282095 * 0.282095,\\n\"\n71   \"  0.488603 * 0.488603,\\n\"\n72   \"  0.488603 * 0.488603,\\n\"\n73   \"  0.488603 * 0.488603,\\n\"\n74   \"  1.092548 * 1.092548,\\n\"\n75   \"  1.092548 * 1.092548,\\n\"\n76   \"  1.092548 * 1.092548,\\n\"\n77   \"  0.315392 * 0.315392,\\n\"\n78   \"  0.546274 * 0.546274\\n\"\n79   \");\\n\"\n80   \"\\n\"\n81   \"const float aSHCosCoeffs = float\\n\"\n82   \"(\\n\"\n83   \"  3.141593,\\n\"\n84   \"  2.094395,\\n\"\n85   \"  2.094395,\\n\"\n86   \"  2.094395,\\n\"\n87   \"  0.785398,\\n\"\n88   \"  0.785398,\\n\"\n89   \"  0.785398,\\n\"\n90   \"  0.785398,\\n\"\n91   \"  0.785398\\n\"\n92   \");\\n\"\n93   \"\\n\"\n94   \"//! Bakes diffuse IBL map's spherical harmonics coefficients.\\n\"\n95   \"vec3 bakeDiffuseSH()\\n\"\n96   \"{\\n\"\n97   \"  int anIndex = int(gl_FragCoord.x);\\n\"\n98   \"  vec3 aResult = vec3 (0.0);\\n\"\n99   \"  for (int aSampleIter = 0; aSampleIter < uSamplesNum; ++aSampleIter)\\n\"\n100   \"  {\\n\"\n101   \"    vec2 aHammersleyPoint = hammersley (aSampleIter, uSamplesNum);\\n\"\n102   \"    vec3 aDirection = sphereUniformSample (aHammersleyPoint);\\n\"\n103   \"\\n\"\n104   \"    vec3 aValue = occTextureCube (uEnvMap, cubemapVectorTransform (aDirection, uYCoeff, uZCoeff)).rgb;\\n\"\n105   \"\\n\"\n106   \"    float aBasisFunc;\\n\"\n107   \"    aBasisFunc = 1.0;\\n\"\n108   \"\\n\"\n112   \"\\n\"\n116   \"\\n\"\n119   \"\\n\"\n120   \"    aResult += aValue * aBasisFunc[anIndex];\\n\"\n121   \"  }\\n\"\n122   \"\\n\"\n123   \"  aResult *= 4.0 * aSHCosCoeffs[anIndex] * aSHBasisFuncCoeffs[anIndex] / float(uSamplesNum);\\n\"\n124   \"  return aResult;\\n\"\n125   \"}\\n\"\n126   \"\\n\"\n127   \"//! Bakes specular IBL map.\\n\"\n128   \"vec3 bakeSpecularMap (in vec3  theNormal,\\n\"\n129   \"                      in float theRoughness)\\n\"\n130   \"{\\n\"\n131   \"  vec3 aResult = vec3(0.0);\\n\"\n132   \"  float aWeightSum = 0.0;\\n\"\n133   \"  int aSamplesNum = (theRoughness == 0.0) ? 1 : uSamplesNum;\\n\"\n134   \"  float aSolidAngleSource = 4.0 * PI / (6.0 * float(uEnvMapSize * uEnvMapSize));\\n\"\n135   \"  for (int aSampleIter = 0; aSampleIter < aSamplesNum; ++aSampleIter)\\n\"\n136   \"  {\\n\"\n137   \"    vec2 aHammersleyPoint = hammersley (aSampleIter, aSamplesNum);\\n\"\n138   \"    vec3 aHalf = importanceSample (aHammersleyPoint, occRoughness (theRoughness));\\n\"\n139   \"    float aHdotV = aHalf.z;\\n\"\n140   \"    aHalf = fromTangentSpace (aHalf, theNormal);\\n\"\n141   \"    vec3  aLight = -reflect (theNormal, aHalf);\\n\"\n142   \"    float aNdotL = dot (aLight, theNormal);\\n\"\n143   \"    if (aNdotL > 0.0)\\n\"\n144   \"    {\\n\"\n145   \"      float aSolidAngleSample = 1.0 / (float(aSamplesNum) * (occPBRDistribution (aHdotV, theRoughness) * 0.25 + 0.0001) + 0.0001);\\n\"\n146   \"      float aLod = (theRoughness == 0.0) ? 
0.0 : 0.5 * log2 (aSolidAngleSample / aSolidAngleSource);\\n\"\n147   \"      aResult += occTextureCubeLod (uEnvMap, aLight, aLod).rgb * aNdotL;\\n\"\n148   \"      aWeightSum += aNdotL;\\n\"\n149   \"    }\\n\"\n150   \"  }\\n\"\n151   \"  return aResult / aWeightSum;\\n\"\n152   \"}\\n\"\n153   \"\\n\"\n154   \"void main()\\n\"\n155   \"{\\n\"\n156   \"  vec3 aViewDirection = normalize (ViewDirection);\\n\"\n157   \"  if (occNbSpecIBLLevels == 0)\\n\"\n158   \"  {\\n\"\n159   \"    occSetFragColor (vec4 (bakeDiffuseSH (), 1.0));\\n\"\n160   \"  }\\n\"\n161   \"  else\\n\"\n162   \"  {\\n\"\n163   \"    occSetFragColor (vec4 (bakeSpecularMap (aViewDirection, float(uCurrentLevel) / float(occNbSpecIBLLevels - 1)), 1.0));\\n\"\n164   \"  }\\n\"\n165   \"}\\n\";" ]
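The `hammersley` function embedded above is the standard Van der Corput / Hammersley construction; as a hedged illustration (a CPU-side port for inspection, not part of OCCT), the same point set can be generated in C++:

```cpp
#include <cstdio>
#include <utility>

// (x, y) of point `number` in a Hammersley set of `size` points:
// x is the index fraction, y is the base-2 radical inverse (Van der Corput).
std::pair<double, double> hammersley(int number, int size) {
    double vanDerCorput = 0.0;
    double denominator  = 2.0;
    for (int n = number; n > 0; n /= 2) {
        vanDerCorput += (n % 2) / denominator;  // mirror the next binary digit
        denominator  *= 2.0;
    }
    return { static_cast<double>(number) / size, vanDerCorput };
}

int main() {
    for (int i = 0; i < 8; ++i) {
        auto p = hammersley(i, 8);
        std::printf("%.3f %.3f\n", p.first, p.second);  // e.g. 0.375 0.750 for i = 3
    }
    return 0;
}
```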
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.51428145,"math_prob":0.9805384,"size":5664,"snap":"2020-34-2020-40","text_gpt3_token_len":2005,"char_repetition_ratio":0.1590106,"word_repetition_ratio":0.039800994,"special_character_ratio":0.38276836,"punctuation_ratio":0.20662768,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9808309,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-05T16:40:54Z\",\"WARC-Record-ID\":\"<urn:uuid:51f915d6-abb8-4adc-8966-1ca3789d30e4>\",\"Content-Length\":\"58335\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:323bb9c9-8a35-4748-b71b-48d839dabd4e>\",\"WARC-Concurrent-To\":\"<urn:uuid:7f7f4a0c-1ab3-40ff-a058-08104d7f57fb>\",\"WARC-IP-Address\":\"188.165.114.136\",\"WARC-Target-URI\":\"http://git.dev.opencascade.org/gitweb/?p=occt.git;a=blob;f=src/Shaders/Shaders_PBREnvBaking_fs.pxx;h=bc995ac8e7cc102e3fdad8bfd8207ad3d2ccbf22;hb=67312b7991c945e66d4fa1d3a6f8e02dbabc9b5a\",\"WARC-Payload-Digest\":\"sha1:STQMNFDH64GM2JSM63EETVPPI3ISCYI7\",\"WARC-Block-Digest\":\"sha1:SZTTVMVBKNYLC37BQE7ZJXPM7YC6NKQC\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735963.64_warc_CC-MAIN-20200805153603-20200805183603-00257.warc.gz\"}"}
http://eulersprint.org/problem/91
[ "Problem 91\nRight triangles with integer coordinates\n\nThe points P (x1, y1) and Q (x2, y2) are plotted at integer co-ordinates and are joined to the origin, O(0,0), to form ΔOPQ.", null, "There are exactly fourteen triangles containing a right angle that can be formed when each co-ordinate lies between 0 and 2 inclusive; that is,\n0", null, "x1, y1, x2, y2", null, "2.", null, "Given that 0", null, "x1, y1, x2, y2", null, "50, how many right triangles can be formed?\n\nThese problems are part of Project Euler and are licensed under CC BY-NC-SA 2.0 UK" ]
[ null, "http://eulersprint.org/files/p_091_1.gif", null, "http://eulersprint.org/files/symbol_le.gif", null, "http://eulersprint.org/files/symbol_le.gif", null, "http://eulersprint.org/files/p_091_2.gif", null, "http://eulersprint.org/files/symbol_le.gif", null, "http://eulersprint.org/files/symbol_le.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9423344,"math_prob":0.9938992,"size":443,"snap":"2020-10-2020-16","text_gpt3_token_len":139,"char_repetition_ratio":0.113895215,"word_repetition_ratio":0.025,"special_character_ratio":0.30248308,"punctuation_ratio":0.16513762,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9726718,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,4,null,null,null,null,null,4,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-01T22:44:46Z\",\"WARC-Record-ID\":\"<urn:uuid:165568d3-3e06-4139-8978-de7cad61e44f>\",\"Content-Length\":\"5388\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8aff9119-c814-4c30-989a-8654a0ddc315>\",\"WARC-Concurrent-To\":\"<urn:uuid:75414c64-15e5-45ca-bd08-415d3ee22c4a>\",\"WARC-IP-Address\":\"34.200.159.1\",\"WARC-Target-URI\":\"http://eulersprint.org/problem/91\",\"WARC-Payload-Digest\":\"sha1:MVG5BLX6GH5T764YUD26LB2HXJ43TYKR\",\"WARC-Block-Digest\":\"sha1:Z5K4VJOWYWQUCIPRN3JJJF4ZQG6BIQTB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370506477.26_warc_CC-MAIN-20200401223807-20200402013807-00031.warc.gz\"}"}