Columns: URL (string, 15 to 1.68k chars); text_list (list, 1 to 199 items); image_list (list, 1 to 199 items); metadata (string, 1.19k to 3.08k chars). Each record below consists of one line per column, in that order.
https://metanumbers.com/210699
[ "# 210699 (number)\n\n210,699 (two hundred ten thousand six hundred ninety-nine) is an odd six-digits composite number following 210698 and preceding 210700. In scientific notation, it is written as 2.10699 × 105. The sum of its digits is 27. It has a total of 4 prime factors and 12 positive divisors. There are 136,800 positive integers (up to 210699) that are relatively prime to 210699.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 6\n• Sum of Digits 27\n• Digital Root 9\n\n## Name\n\nShort name 210 thousand 699 two hundred ten thousand six hundred ninety-nine\n\n## Notation\n\nScientific notation 2.10699 × 105 210.699 × 103\n\n## Prime Factorization of 210699\n\nPrime Factorization 32 × 41 × 571\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 3 Total number of distinct prime factors Ω(n) 4 Total number of prime factors rad(n) 70233 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 210,699 is 32 × 41 × 571. Since it has a total of 4 prime factors, 210,699 is a composite number.\n\n## Divisors of 210699\n\n12 divisors\n\n Even divisors 0 12 6 6\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 12 Total number of the positive divisors of n σ(n) 312312 Sum of all the positive divisors of n s(n) 101613 Sum of the proper positive divisors of n A(n) 26026 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 459.02 Returns the nth root of the product of n divisors H(n) 8.09571 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 210,699 can be divided by 12 positive divisors (out of which 0 are even, and 12 are odd). The sum of these divisors (counting 210,699) is 312,312, the average is 26,026.\n\n## Other Arithmetic Functions (n = 210699)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 136800 Total number of positive integers not greater than n that are coprime to n λ(n) 2280 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 18807 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 136,800 positive integers (less than 210,699) that are coprime with 210,699. 
And there are approximately 18,807 prime numbers less than or equal to 210,699.\n\n## Divisibility of 210699\n\n m n mod m 2 3 4 5 6 7 8 9 1 0 3 4 3 6 3 0\n\nThe number 210,699 is divisible by 3 and 9.\n\n• Arithmetic\n• Deficient\n\n• Polite\n\n## Base conversion (210699)\n\nBase System Value\n2 Binary 110011011100001011\n3 Ternary 101201000200\n4 Quaternary 303130023\n5 Quinary 23220244\n6 Senary 4303243\n8 Octal 633413\n10 Decimal 210699\n12 Duodecimal a1b23\n20 Vigesimal 166ej\n36 Base36 4ikr\n\n## Basic calculations (n = 210699)\n\n### Multiplication\n\nn×y\n n×2 421398 632097 842796 1053495\n\n### Division\n\nn÷y\n n÷2 105350 70233 52674.8 42139.8\n\n### Exponentiation\n\nny\n n2 44394068601 9353785860162099 1970833326950294097201 415252611155100015986153499\n\n### Nth Root\n\ny√n\n 2√n 459.02 59.5051 21.4247 11.6073\n\n## 210699 as geometric shapes\n\n### Circle\n\n Diameter 421398 1.32386e+06 1.39468e+11\n\n### Sphere\n\n Volume 3.9181e+16 5.57872e+11 1.32386e+06\n\n### Square\n\nLength = n\n Perimeter 842796 4.43941e+10 297973\n\n### Cube\n\nLength = n\n Surface area 2.66364e+11 9.35379e+15 364941\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 632097 1.92232e+10 182471\n\n### Triangular Pyramid\n\nLength = n\n Surface area 7.68928e+10 1.10235e+15 172035\n\n## Cryptographic Hash Functions\n\nmd5 4c2b7c2084c611433bac9299340b1728 fe05dfdd6b4fa4c6dceb8607e1bb301289b26f24 3f42738e77032f98e5a476e075d4f551abd20936a27168f123846f2fcf3473dc 461e8912eddca6eded854880ce30c195ce4452c5538434a719e360c3f87cbd5b4ec7e06e1417a6664b3936d06baa34a5ce93c086060c705c14fae068bd10575c 08e8c7724cf2960a07da7e89e7dbdeb9f28c8f16" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5978907,"math_prob":0.9780631,"size":4583,"snap":"2021-43-2021-49","text_gpt3_token_len":1607,"char_repetition_ratio":0.119895175,"word_repetition_ratio":0.02827381,"special_character_ratio":0.46323368,"punctuation_ratio":0.07593308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99610597,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-09T07:29:08Z\",\"WARC-Record-ID\":\"<urn:uuid:fb40c2fd-63e5-46f5-98ca-4f8b114e6320>\",\"Content-Length\":\"39353\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3065024a-a1ab-4a3c-a6ed-8192130c3fee>\",\"WARC-Concurrent-To\":\"<urn:uuid:e17ce91a-f36e-48ff-9fb0-e3e9421b97ab>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/210699\",\"WARC-Payload-Digest\":\"sha1:4C2PO2LE7IWL2CB5PB3NOSFJQ2PQUTUI\",\"WARC-Block-Digest\":\"sha1:OCLJZ6FQNLI6S5TI7YFFQIETIUYQL3ZH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363689.56_warc_CC-MAIN-20211209061259-20211209091259-00492.warc.gz\"}"}
https://zbmath.org/?q=an%3A0549.31001
[ "# zbMATH — the first resource for mathematics\n\nClassical potential theory and its probabilistic counterpart. (English) Zbl 0549.31001\nGrundlehren der Mathematischen Wissenschaften, 262. New York etc.: Springer-Verlag., XXIII, 846 p. DM 168.00; \\$ 62.70 (1984).\nFrom the introduction: ”The purpose of this book is to develop the correspondence between potential theory and probability theory by examining in detail classical potential theory, that is, the potential theory of Laplace’s equation, together with the corresponding probability theory, that is, martingale theory. The joining link which makes this correspondence especially perspicuous is the Brownian motion process, so this process is studied as needed. In order to carry through this program it is necessary to study parabolic potential theory, that is, the potential theory of the heat equation, and the corresponding process of space time Brownian motion. No knowledge of potential theory is presupposed but it is assumed that the reader is familiar with basic probability concepts through conditional expectations. The necessary lattice theory, analytic set theory and capacity theory are covered in the Appendices.”\n”One natural criticism of this project is that there is no reason to treat the very special potential theories of the Laplace and heat equations rather than general axiomatic potential theory. Another criticism is that there is no reason to treat potential theory other than as a special subhead of Markov process theory. In the author’s opinion, however, classical potential theory is too important to serve merely as a source of illustrations of axiomatic potential theory, which theory in turn is too important in its own right to be left to the probabilists.”\nThe author made the effort to make this book into an encyclopedia.\nReviewer: A.Spătaru\n\n##### MSC:\n 31-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to potential theory 31A05 Harmonic, subharmonic, superharmonic functions in two dimensions 31D05 Axiomatic potential theory 60J45 Probabilistic potential theory" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90220994,"math_prob":0.96188986,"size":2238,"snap":"2021-43-2021-49","text_gpt3_token_len":469,"char_repetition_ratio":0.18218443,"word_repetition_ratio":0.049844235,"special_character_ratio":0.19705094,"punctuation_ratio":0.13695091,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9846842,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-05T05:31:19Z\",\"WARC-Record-ID\":\"<urn:uuid:955c76e5-65ef-4726-b3e3-a83578767dad>\",\"Content-Length\":\"48425\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:27cdecaf-0c32-4ade-b549-3ca6d56b2d98>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c2ab7dd-2562-4cea-be34-0ed9d1bd69de>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/?q=an%3A0549.31001\",\"WARC-Payload-Digest\":\"sha1:YL3RFODY7Z4JCK5KND6ALC5XWRVM2OQ7\",\"WARC-Block-Digest\":\"sha1:OJOF45U6KDQUCH5KMENEVFRJGUVVCJMN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363135.71_warc_CC-MAIN-20211205035505-20211205065505-00099.warc.gz\"}"}
https://socratic.org/questions/how-do-you-differentiate-y-3e-x-7log-10x
[ "# How do you differentiate y=3e^x-7log_10x?\n\n##### 1 Answer\nNov 29, 2017\n\n$y ' = 3 {e}^{x} - \\frac{7}{x}$\n\n#### Explanation:\n\nDifferentiating the $e$ part:\n\nrule: leave the ${e}^{x}$ then multiply by the differential of the power.\n\nSo it's $3 {e}^{x} \\cdot 1 = 3 {e}^{x}$\n\nLog part:\n\nrule: function on the bottom, differential on the top\n\nSo $- 7 \\cdot {\\log}_{10} x$ becomes $- 7 \\cdot \\frac{1}{x} = - \\frac{7}{x}$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6372024,"math_prob":0.9995579,"size":316,"snap":"2021-21-2021-25","text_gpt3_token_len":87,"char_repetition_ratio":0.125,"word_repetition_ratio":0.0,"special_character_ratio":0.24367088,"punctuation_ratio":0.11666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998869,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-13T21:32:26Z\",\"WARC-Record-ID\":\"<urn:uuid:eb3dfa26-b532-4573-93c2-a4d95ebd7747>\",\"Content-Length\":\"32657\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7fde0608-be07-4703-aefc-00fd60f4b858>\",\"WARC-Concurrent-To\":\"<urn:uuid:76ee184e-74ff-4f18-9374-9a2d7e890fa0>\",\"WARC-IP-Address\":\"216.239.36.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-differentiate-y-3e-x-7log-10x\",\"WARC-Payload-Digest\":\"sha1:2HEPWIB6GHCU6NGY3FRUJJSFP4IT5WQS\",\"WARC-Block-Digest\":\"sha1:4LRJ7BK7PQXLW266V4Y7KCW2WVYMOEJH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487610841.7_warc_CC-MAIN-20210613192529-20210613222529-00138.warc.gz\"}"}
http://programarcadegames.com/index.php?chapter=lab_create_a_quiz&lang=en
[ " Program Arcade Games With Python And Pygame\n\nProgram Arcade GamesWith Python And Pygame\n\n < Previous Home Next >\n\nLab 3: Create-a-Quiz\n\nNow is your chance to write your own quiz. Use these quizzes to filter job applicants, weed out potential mates, or just plain have a chance to sit on the other side of the desk and make, rather than take, the quiz.\n\nThis lab applies the material used in Chapter 3 on using if statements. It also requires a bit of Chapter 1 because the program must calculate a percentage.\n\n3.1 Description\n\nThis is the list of features your quiz needs to have:\n\n1. Create your own quiz with five or more questions. You can ask questions that require:\n• a number as an answer (e.g., What is 1+1?)\n• text (e.g. What is Harry Potter's last name?)\n• a selection (Which of these choices are correct? A, B, or C?)\n2. If you have the user enter non-numeric answers, think and cover the different ways a user could enter a correct answer. For example, if the answer is “a”, would “A” also be acceptable? See Section 3.6 for a reminder on how to do this.\n3. Let the user know if he or she gets the question correct. Print a message depending on the user's answer.\n4. You need to keep track of how many questions they get correct.\n5. At the end of the program print the percentage of questions the user gets right.\n\nKeep the following in mind when creating the program:\n\n1. Variable names should start with a lower case letter. Upper case letters work, but it is not considered proper. (Right, you didn't realize that programming was going to be like English Tea Time, did you?)\n2. To create a running total of the number correct, create a variable to store this score. Set it to zero. With an if statement, add one to the variable each time the user gets a correct answer. (How do you know if they got it correct? Remember that if you are printing out “correct” then you have already done that part. Just add a line there to add one to the number correct.) If you don't remember how to add one to a variable, go back and review Section 1.5.\n3. Treat true/false questions like multiple choice questions, just compare to “True” or “False.” Don't try to do if a: we'll implement if statements like that later on in the class, but this isn't the place.\n4. Calculate the percentage by using a formula at the end of the game. Don't just add 20% for each question the user gets correct. If you add 20% each time, then you have to change the program 5 places if you add a 6th question. With a formula, you only need 1 change.\n5. To print a blank line so that all the questions don't run into each other, use the following code:\nprint()\n\n6. Remember the program can print multiple items on one line. This can be useful when printing the user's score at the end.\nprint(\"The value in x is\", x)\n\n7. Separate out your code by using blank lines to group sections together. For example, put a blank line between the code for each question.\n8. Sometimes it makes sense to re-use variables. Rather than having a different variable to hold the user's answer for each question, you could reuse the same one.\n9. Use descriptive variable names. x is a terrible variable name. Instead use something like number_correct.\n10. Don't make super-long lines. Chances are you don't need to use \\n at all. Just use multiple print statements.\n\nWhen you are done turn in the assignment according to your teacher/mentor's instructions.\n\n3.2 Example Run\n\nHere's an example from my program. 
Please create your own original questions. I like to be entertained while I check these programs.\n\nQuiz time!\n\nHow many books are there in the Harry Potter series? 7\nCorrect!\n\nWhat is 3*(2-1)? 3\nCorrect!\n\nWhat is 3*2-1? 5\nCorrect!\n\nWho sings Black Horse and the Cherry Tree?\n1. Kelly Clarkson\n2. K.T. Tunstall\n3. Hillary Duff\n4. Bon Jovi\n? 2\nCorrect!\n\nWho is on the front of a one dollar bill\n1. George Washington\n2. Abraham Lincoln" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8968884,"math_prob":0.718948,"size":4377,"snap":"2019-43-2019-47","text_gpt3_token_len":1058,"char_repetition_ratio":0.122798994,"word_repetition_ratio":0.0,"special_character_ratio":0.235321,"punctuation_ratio":0.125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.964194,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-14T15:26:34Z\",\"WARC-Record-ID\":\"<urn:uuid:35c53c97-3f07-4b2d-a403-a01dcc0816ed>\",\"Content-Length\":\"26290\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:70a2d49c-0360-4d2d-8854-9c746bb08e40>\",\"WARC-Concurrent-To\":\"<urn:uuid:178e552e-55d6-463c-84cc-cc1a2ad68e18>\",\"WARC-IP-Address\":\"52.10.77.68\",\"WARC-Target-URI\":\"http://programarcadegames.com/index.php?chapter=lab_create_a_quiz&lang=en\",\"WARC-Payload-Digest\":\"sha1:V2YLACA4QNV5YK4HHH5VPSWFBNHZWM4M\",\"WARC-Block-Digest\":\"sha1:G3UBRQDY47GXV7A63MF57CTAFSPGAUNS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986653876.31_warc_CC-MAIN-20191014150930-20191014174430-00421.warc.gz\"}"}
https://www.colorhexa.com/47eb63
[ "# #47eb63 Color Information\n\nIn a RGB color space, hex #47eb63 is composed of 27.8% red, 92.2% green and 38.8% blue. Whereas in a CMYK color space, it is composed of 69.8% cyan, 0% magenta, 57.9% yellow and 7.8% black. It has a hue angle of 130.2 degrees, a saturation of 80.4% and a lightness of 60%. #47eb63 color hex could be obtained by blending #8effc6 with #00d700. Closest websafe color is: #33ff66.\n\n• R 28\n• G 92\n• B 39\nRGB color chart\n• C 70\n• M 0\n• Y 58\n• K 8\nCMYK color chart\n\n#47eb63 color description : Soft lime green.\n\n# #47eb63 Color Conversion\n\nThe hexadecimal color #47eb63 has RGB values of R:71, G:235, B:99 and CMYK values of C:0.7, M:0, Y:0.58, K:0.08. Its decimal value is 4713315.\n\nHex triplet RGB Decimal 47eb63 `#47eb63` 71, 235, 99 `rgb(71,235,99)` 27.8, 92.2, 38.8 `rgb(27.8%,92.2%,38.8%)` 70, 0, 58, 8 130.2°, 80.4, 60 `hsl(130.2,80.4%,60%)` 130.2°, 69.8, 92.2 33ff66 `#33ff66`\nCIE-LAB 82.729, -68.692, 53.072 34.557, 61.654, 21.883 0.293, 0.522, 61.654 82.729, 86.806, 142.31 82.729, -67.739, 78.518 78.52, -58.851, 38.44 01000111, 11101011, 01100011\n\n# Color Schemes with #47eb63\n\n• #47eb63\n``#47eb63` `rgb(71,235,99)``\n• #eb47cf\n``#eb47cf` `rgb(235,71,207)``\nComplementary Color\n• #7deb47\n``#7deb47` `rgb(125,235,71)``\n• #47eb63\n``#47eb63` `rgb(71,235,99)``\n• #47ebb5\n``#47ebb5` `rgb(71,235,181)``\nAnalogous Color\n• #eb477d\n``#eb477d` `rgb(235,71,125)``\n• #47eb63\n``#47eb63` `rgb(71,235,99)``\n• #b547eb\n``#b547eb` `rgb(181,71,235)``\nSplit Complementary Color\n• #eb6347\n``#eb6347` `rgb(235,99,71)``\n• #47eb63\n``#47eb63` `rgb(71,235,99)``\n• #6347eb\n``#6347eb` `rgb(99,71,235)``\n• #cfeb47\n``#cfeb47` `rgb(207,235,71)``\n• #47eb63\n``#47eb63` `rgb(71,235,99)``\n• #6347eb\n``#6347eb` `rgb(99,71,235)``\n• #eb47cf\n``#eb47cf` `rgb(235,71,207)``\n• #17cf36\n``#17cf36` `rgb(23,207,54)``\n• #19e63c\n``#19e63c` `rgb(25,230,60)``\n• #30e950\n``#30e950` `rgb(48,233,80)``\n• #47eb63\n``#47eb63` `rgb(71,235,99)``\n• #5eee77\n``#5eee77` `rgb(94,238,119)``\n• #75f08a\n``#75f08a` `rgb(117,240,138)``\n• #8cf39e\n``#8cf39e` `rgb(140,243,158)``\nMonochromatic Color\n\n# Alternatives to #47eb63\n\nBelow, you can see some colors close to #47eb63. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #54eb47\n``#54eb47` `rgb(84,235,71)``\n• #47eb48\n``#47eb48` `rgb(71,235,72)``\n• #47eb55\n``#47eb55` `rgb(71,235,85)``\n• #47eb63\n``#47eb63` `rgb(71,235,99)``\n• #47eb71\n``#47eb71` `rgb(71,235,113)``\n• #47eb7e\n``#47eb7e` `rgb(71,235,126)``\n• #47eb8c\n``#47eb8c` `rgb(71,235,140)``\nSimilar Colors\n\n# #47eb63 Preview\n\nThis text has a font color of #47eb63.\n\n``<span style=\"color:#47eb63;\">Text here</span>``\n#47eb63 background color\n\nThis paragraph has a background color of #47eb63.\n\n``<p style=\"background-color:#47eb63;\">Content here</p>``\n#47eb63 border color\n\nThis element has a border color of #47eb63.\n\n``<div style=\"border:1px solid #47eb63;\">Content here</div>``\nCSS codes\n``.text {color:#47eb63;}``\n``.background {background-color:#47eb63;}``\n``.border {border:1px solid #47eb63;}``\n\n# Shades and Tints of #47eb63\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #010b03 is the darkest color, while #f8fef9 is the lightest one.\n\n• #010b03\n``#010b03` `rgb(1,11,3)``\n• #031c07\n``#031c07` `rgb(3,28,7)``\n• #052e0c\n``#052e0c` `rgb(5,46,12)``\n• #074011\n``#074011` `rgb(7,64,17)``\n• #095115\n``#095115` `rgb(9,81,21)``\n• #0b631a\n``#0b631a` `rgb(11,99,26)``\n• #0d751e\n``#0d751e` `rgb(13,117,30)``\n• #0f8623\n``#0f8623` `rgb(15,134,35)``\n• #119828\n``#119828` `rgb(17,152,40)``\n• #12aa2c\n``#12aa2c` `rgb(18,170,44)``\n• #14bc31\n``#14bc31` `rgb(20,188,49)``\n• #16cd36\n``#16cd36` `rgb(22,205,54)``\n• #18df3a\n``#18df3a` `rgb(24,223,58)``\n• #24e745\n``#24e745` `rgb(36,231,69)``\n• #35e954\n``#35e954` `rgb(53,233,84)``\n• #47eb63\n``#47eb63` `rgb(71,235,99)``\n• #59ed72\n``#59ed72` `rgb(89,237,114)``\n• #6aef81\n``#6aef81` `rgb(106,239,129)``\n• #7cf190\n``#7cf190` `rgb(124,241,144)``\n• #8ef39f\n``#8ef39f` `rgb(142,243,159)``\n• #9ff5ae\n``#9ff5ae` `rgb(159,245,174)``\n• #b1f7bd\n``#b1f7bd` `rgb(177,247,189)``\n• #c3f8cc\n``#c3f8cc` `rgb(195,248,204)``\n``#d5fadb` `rgb(213,250,219)``\n• #e6fcea\n``#e6fcea` `rgb(230,252,234)``\n• #f8fef9\n``#f8fef9` `rgb(248,254,249)``\nTint Color Variation\n\n# Tones of #47eb63\n\nA tone is produced by adding gray to any pure hue. In this case, #959d97 is the less saturated color, while #37fb59 is the most saturated one.\n\n• #959d97\n``#959d97` `rgb(149,157,151)``\n• #8ea492\n``#8ea492` `rgb(142,164,146)``\n• #86ac8c\n``#86ac8c` `rgb(134,172,140)``\n• #7eb487\n``#7eb487` `rgb(126,180,135)``\n• #76bc82\n``#76bc82` `rgb(118,188,130)``\n• #6ec47d\n``#6ec47d` `rgb(110,196,125)``\n• #66cc78\n``#66cc78` `rgb(102,204,120)``\n• #5fd373\n``#5fd373` `rgb(95,211,115)``\n• #57db6d\n``#57db6d` `rgb(87,219,109)``\n• #4fe368\n``#4fe368` `rgb(79,227,104)``\n• #47eb63\n``#47eb63` `rgb(71,235,99)``\n• #3ff35e\n``#3ff35e` `rgb(63,243,94)``\n• #37fb59\n``#37fb59` `rgb(55,251,89)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #47eb63 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5489019,"math_prob":0.5678834,"size":3695,"snap":"2020-34-2020-40","text_gpt3_token_len":1625,"char_repetition_ratio":0.12516934,"word_repetition_ratio":0.011090573,"special_character_ratio":0.5545331,"punctuation_ratio":0.23608018,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9807096,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-20T18:04:21Z\",\"WARC-Record-ID\":\"<urn:uuid:0a70861f-875a-48c2-b89b-39ace760c036>\",\"Content-Length\":\"36286\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bfbc13c0-d7e8-46b7-ac59-ddb4cfa9a7a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:07684fdb-a09f-4388-aa5e-38b8fd316986>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/47eb63\",\"WARC-Payload-Digest\":\"sha1:A2XUF3RRQ2Z6QJIHP27APJDVNMTPMMMH\",\"WARC-Block-Digest\":\"sha1:IQAZY5IXR2GE72MSWLVYYE6I2STGA23M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400198287.23_warc_CC-MAIN-20200920161009-20200920191009-00535.warc.gz\"}"}
https://puzzling.stackexchange.com/questions/23847/professor-halfbrain-and-the-odd-perfect-number
[ "# Professor Halfbrain and the odd perfect number\n\nTwo hours ago, I received a phone call from professor Halfbrain. The professor told me that he has detected an odd perfect number. The professor was very excited, since the existence of odd perfect numbers is an outstanding open question in number theory. Let me explain this question to you.\n\nDefinition: An integer $N$ is perfect, if the sum of all divisors of $N$ below $N$ exactly equals $N$.\n\nFor instance $N=6$ is perfect, as $1+2+3=6$. And also the number $N=28$ is perfect, as $1+2+4+7+14=28$.\n\nTheorem: For a number $N$ with prime factorization $N=p_1^{e_1}p_2^{e_2}\\cdots p_k^{e_k}$ the sum $\\sigma(N)$ of all divisors (including the number $N$ itself) is given by the expression $$(1+p_1+p_1^2+\\cdots+p_1^{e_1})\\, (1+p_2+p_2^2+\\cdots+p_2^{e_2})\\, \\cdots (1+p_k+p_k^2+\\cdots+p_k^{e_k}).$$\n\nThe theorem implies that an integer $N$ is perfect, if and only if $\\sigma(N)=2N$ holds true. With the help of the theorem, one easily checks and verifies that $6$ and $28$ are perfect:\n\n• $6=2\\cdot3~~$ yields $~~\\sigma(6)=(1+2)(1+3)=12=2\\cdot6$\n• $28=2^2\\cdot7~~$ yields $~~\\sigma(28)=(1+2+4)(1+7)=56 =2\\cdot 28$\n\nTo the current day, mathematicians have been hunting for odd perfect numbers. Although thousands of hours of research time have been invested into this problem, all these hours were of no avail. The big breakthrough of Professor Halfbrain on 8 November 2015 might be a turning point in the history of mathematics.\n\nProfessor Halfbrain's odd perfect number theorem:\nFor $N=3^2\\cdot7^2\\cdot11^2\\cdot13^2\\cdot22021 ~=~ 198,585,576,189$, we have\n\\begin{eqnarray} \\sigma(N) &=& (1+3+3^2)(1+7+7^2)(1+11+11^2)(1+13+13^2)(1+22021) \\\\ &=& 397,171,152,378 ~~=~~ 2\\cdot 198,585,576,189 ~~=~~ 2N. \\end{eqnarray} Therefore $N=198,585,576,189$ constitutes an odd perfect number.\n\nQuestion: Has the professor indeed managed to detect an odd perfect number, or is there a mistake hidden somewhere in the above paragraphs?\n\n... has made a mistake. $22021$ is not prime, it's $19^2 \\cdot 61$.\n... If you factor the individual products, $1 + 3 + 3^2 = 13$, $1 + 7 + 7^2 = 57 = 3 \\cdot 19$, $1 + 11 + 11^2 = 133 = 7 \\cdot 19$, $1 + 13 + 13^2 = 183 = 3 \\cdot 61$, so there are a few factors not accounted for. It would seem that the two 19's and the 61 must be factors of 22021." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8577754,"math_prob":0.99986696,"size":1914,"snap":"2020-45-2020-50","text_gpt3_token_len":633,"char_repetition_ratio":0.1329843,"word_repetition_ratio":0.0076045627,"special_character_ratio":0.3719958,"punctuation_ratio":0.11166253,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999895,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-29T00:24:43Z\",\"WARC-Record-ID\":\"<urn:uuid:5e522bfc-7974-44f1-8d85-03db3759d0c8>\",\"Content-Length\":\"157843\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cec5be00-85a0-42a8-a3f4-f3d246fbeb30>\",\"WARC-Concurrent-To\":\"<urn:uuid:9c465ada-61b1-4509-97c9-2faa0ac3e902>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://puzzling.stackexchange.com/questions/23847/professor-halfbrain-and-the-odd-perfect-number\",\"WARC-Payload-Digest\":\"sha1:NNHY47R3KQJSGQ4ATYZCBC3HZ5ERRYKM\",\"WARC-Block-Digest\":\"sha1:7GJRNGMGGVTL5VZZHPRXDXBZJTO3EAJK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107902038.86_warc_CC-MAIN-20201028221148-20201029011148-00252.warc.gz\"}"}
https://primenumbers.info/109.htm
[ "# Prime Numbers\n\n## Is number 109 a prime number?\n\nNumber 109 is a prime number.\n\nA prime number is a natural number greater than 1 that is not a product of two smaller natural numbers. Therefore we consider 109 as a prime number, because:\n\n• 109 is a natural number greater than 1,\n• 109 is not a product of 2 smaller natural numbers (it can be divided only by 1 and itself)\n\n### Other properties of number 109\n\nNumber of factors: 2.\n\nList of factors/divisors: 1, 109.\n\nParity: 109 is an odd number.\n\nPerfect square: no (a square number or perfect square is an integer that is the square of an integer).\n\nPerfect number: no, because the sum of its proper divisors is 1 (perfect number is a positive integer that is equal to the sum of its proper divisors).\n\n Number:Prime number: 102no 103yes 104no 105no 106no 107yes 108no 109yes 110no 111no 112no 113yes 114no 115no 116no" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8774002,"math_prob":0.99748546,"size":675,"snap":"2023-40-2023-50","text_gpt3_token_len":208,"char_repetition_ratio":0.17734724,"word_repetition_ratio":0.015873017,"special_character_ratio":0.33925927,"punctuation_ratio":0.12080537,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9776375,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T14:04:26Z\",\"WARC-Record-ID\":\"<urn:uuid:82586ffd-7ddd-4b29-a0f9-6686559e8533>\",\"Content-Length\":\"9737\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:273665f3-b523-4d5d-9ff4-f92a6acf2451>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ab10dee-1445-46d2-84f7-c869629696d6>\",\"WARC-IP-Address\":\"149.28.234.134\",\"WARC-Target-URI\":\"https://primenumbers.info/109.htm\",\"WARC-Payload-Digest\":\"sha1:KK7TSBK4JXMBAQCZF322JF5DM2EXYDAO\",\"WARC-Block-Digest\":\"sha1:JCWWFEQIYVTHVLQIPJYJCCZN6F5SAB5F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100287.49_warc_CC-MAIN-20231201120231-20231201150231-00203.warc.gz\"}"}
http://www.csplib.org/Problems/prob056/
[ "Proposed by Peter Nightingale\n\nIn the SONET problem we are given a set of nodes, and for each pair of nodes we are given the demand (which is the number of channels required to carry network traffic between the two nodes). The demand may be zero, in which case the two nodes do not need to be connected.\n\nA SONET ring connects a set of nodes. A node is installed on a ring using a piece of equipment called an add-drop multiplexer (ADM). Each node may be installed on more than one ring. Network traffic can be transmitted from one node to another only if they are both installed on the same ring. Each ring has an upper limit on the number of nodes, and a limit on the number of channels. The demand of a pair of nodes may be split between multiple rings.\n\nThe objective is to minimise the total number of ADMs used while satisfying all demands.\n\n## The Unlimited Traffic Capacity Problem\n\nIn the unlimited traffic capacity problem, the magnitude of the demands is ignored. If a pair of nodes $n_1$ and $n_2$ has a non-zero demand, then there must exist a ring connecting $n_1$ and $n_2$. The upper limit on the number of channels per ring has no significance in this simplified problem. The objective function remains the same." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9139701,"math_prob":0.97332287,"size":1182,"snap":"2020-34-2020-40","text_gpt3_token_len":263,"char_repetition_ratio":0.1400679,"word_repetition_ratio":0.023364486,"special_character_ratio":0.21742809,"punctuation_ratio":0.075630255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9824358,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-26T15:39:33Z\",\"WARC-Record-ID\":\"<urn:uuid:1b1d9ca0-be7e-45ca-8519-6e25911b813d>\",\"Content-Length\":\"4626\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:beea90ec-92a5-4bcc-8d94-4f6db0ea209b>\",\"WARC-Concurrent-To\":\"<urn:uuid:347d5cf0-cc04-4e45-bfb5-b3c9775e5894>\",\"WARC-IP-Address\":\"138.251.206.16\",\"WARC-Target-URI\":\"http://www.csplib.org/Problems/prob056/\",\"WARC-Payload-Digest\":\"sha1:L3MKYWECAP7BSDD37JPRLJML62BUV2KZ\",\"WARC-Block-Digest\":\"sha1:B5O4XCOPFHY3PS7JYQ2FUB57Q3EGHWDE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400244231.61_warc_CC-MAIN-20200926134026-20200926164026-00674.warc.gz\"}"}
https://nixdoc.net/man-pages/NetBSD/page-72
[ "·  Home\n+   man pages\n -> Linux -> FreeBSD -> OpenBSD -> NetBSD -> Tru64 Unix -> HP-UX 11i -> IRIX\n·  Linux HOWTOs\n·  FreeBSD Tips\n·  *niX Forums\n\nman pages->NetBSD man pages\n Title\n Content\n Arch\n Section All Sections 1 - General Commands 2 - System Calls 3 - Subroutines 4 - Special Files 5 - File Formats 6 - Games 7 - Macros and Conventions 8 - Maintenance Commands 9 - Kernel Interface n - New Commands\n lockf(3) -- record locking on files The lockf() function allows sections of a file to be locked with advisory-mode locks. Calls to lockf() from other processes which attempt to lock the locked file section will either return an error va... log(3) -- exponential, logarithm, power functions The exp() function computes the exponential value of the given argument x. The expm1() function computes the value exp(x)-1 accurately even for tiny argument x. The log() function computes the value o...\nlog10(3) -- exponential, logarithm, power functions\nThe exp() function computes the exponential value of the given argument x. The expm1() function computes the value exp(x)-1 accurately even for tiny argument x. The log() function computes the value o...\nlog10f(3) -- exponential, logarithm, power functions\nThe exp() function computes the exponential value of the given argument x. The expm1() function computes the value exp(x)-1 accurately even for tiny argument x. The log() function computes the value o...\nlog1p(3) -- exponential, logarithm, power functions\nThe exp() function computes the exponential value of the given argument x. The expm1() function computes the value exp(x)-1 accurately even for tiny argument x. The log() function computes the value o...\nlog1pf(3) -- exponential, logarithm, power functions\nThe exp() function computes the exponential value of the given argument x. The expm1() function computes the value exp(x)-1 accurately even for tiny argument x. The log() function computes the value o...\nlogb(3) -- IEEE test functions\nThese functions allow users to test conformance to IEEE Std 754-1985. Their use is not otherwise recommended. logb(x) returns x's exponent n, a signed integer converted to double-precision floating-p...\nlogbf(3) -- IEEE test functions\nThese functions allow users to test conformance to IEEE Std 754-1985. Their use is not otherwise recommended. logb(x) returns x's exponent n, a signed integer converted to double-precision floating-p...\nlogf(3) -- exponential, logarithm, power functions\nThe exp() function computes the exponential value of the given argument x. The expm1() function computes the value exp(x)-1 accurately even for tiny argument x. The log() function computes the value o...\nThe login(), logout(), and logwtmp() functions operate on the database of current users in /var/run/utmp and on the logfile /var/log/wtmp of logins and logouts. The login() function updates the /var/r..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5207957,"math_prob":0.94305336,"size":3878,"snap":"2021-43-2021-49","text_gpt3_token_len":918,"char_repetition_ratio":0.1881776,"word_repetition_ratio":0.760274,"special_character_ratio":0.2555441,"punctuation_ratio":0.14325069,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99258316,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T23:46:29Z\",\"WARC-Record-ID\":\"<urn:uuid:87447d7a-ec0e-4737-9d48-1bac14df3e6b>\",\"Content-Length\":\"21137\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ebf22032-7070-4f65-8d40-9659cfee4f12>\",\"WARC-Concurrent-To\":\"<urn:uuid:a9ea9cc3-5167-41d8-a028-fe85f3ec7ab2>\",\"WARC-IP-Address\":\"176.9.204.182\",\"WARC-Target-URI\":\"https://nixdoc.net/man-pages/NetBSD/page-72\",\"WARC-Payload-Digest\":\"sha1:GBNV2DQAOTE453SQE3TIOYAMMP65IAOK\",\"WARC-Block-Digest\":\"sha1:IKEVDEDODMJXJBGJWXB2CJUMODTXPG3K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358847.80_warc_CC-MAIN-20211129225145-20211130015145-00371.warc.gz\"}"}
http://pycbc.org/pycbc/latest/html/_modules/pycbc/filter/autocorrelation.html
[ "# Source code for pycbc.filter.autocorrelation\n\n# Copyright (C) 2016 Christopher M. Biwer\n# This program is free software; you can redistribute it and/or modify it\n# Free Software Foundation; either version 3 of the License, or (at your\n# option) any later version.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General\n# Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along\n# with this program; if not, write to the Free Software Foundation, Inc.,\n# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\n#\n# =============================================================================\n#\n# Preamble\n#\n# =============================================================================\n#\n\"\"\"\nThis modules provides functions for calculating the autocorrelation function\nand length of a data series.\n\"\"\"\n\nimport numpy\nfrom math import isnan\nfrom pycbc.filter.matchedfilter import correlate\nfrom pycbc.types import FrequencySeries, TimeSeries, zeros\n\n[docs]def calculate_acf(data, delta_t=1.0, unbiased=False):\nr\"\"\"Calculates the one-sided autocorrelation function.\n\nCalculates the autocorrelation function (ACF) and returns the one-sided\nACF. The ACF is defined as the autocovariance divided by the variance. The\nACF can be estimated using\n\n.. math::\n\n\\hat{R}(k) = \\frac{1}{n \\sigma^{2}} \\sum_{t=1}^{n-k} \\left( X_{t} - \\mu \\right) \\left( X_{t+k} - \\mu \\right)\n\nWhere :math:\\hat{R}(k) is the ACF, :math:X_{t} is the data series at\ntime t, :math:\\mu is the mean of :math:X_{t}, and :math:\\sigma^{2} is\nthe variance of :math:X_{t}.\n\nParameters\n-----------\ndata : TimeSeries or numpy.array\nA TimeSeries or numpy.array of data.\ndelta_t : float\nThe time step of the data series if it is not a TimeSeries instance.\nunbiased : bool\nIf True the normalization of the autocovariance function is n-k\ninstead of n. This is called the unbiased estimation of the\nautocovariance. Note that this does not mean the ACF is unbiased.\n\nReturns\n-------\nacf : numpy.array\nIf data is a TimeSeries then acf will be a TimeSeries of the\none-sided ACF. Else acf is a numpy.array.\n\"\"\"\n\n# if given a TimeSeries instance then get numpy.array\nif isinstance(data, TimeSeries):\ny = data.numpy()\ndelta_t = data.delta_t\nelse:\ny = data\n\n# Zero mean\ny = y - y.mean()\nny_orig = len(y)\n\n# FFT data minus the mean\n\n# correlate\n# do not need to give the congjugate since correlate function does it\ncdata = FrequencySeries(zeros(len(fdata), dtype=fdata.dtype),\ndelta_f=fdata.delta_f, copy=False)\ncorrelate(fdata, fdata, cdata)\n\n# IFFT correlated data to get unnormalized autocovariance time series\nacf = cdata.to_timeseries()\nacf = acf[:ny_orig]\n\n# normalize the autocovariance\n# note that dividing by acf is the same as ( y.var() * len(acf) )\nif unbiased:\nacf /= ( y.var() * numpy.arange(len(acf), 0, -1) )\nelse:\nacf /= acf\n\n# return input datatype\nif isinstance(data, TimeSeries):\nreturn TimeSeries(acf, delta_t=delta_t)\nelse:\nreturn acf\n\n[docs]def calculate_acl(data, m=5, dtype=int):\nr\"\"\"Calculates the autocorrelation length (ACL).\n\nGiven a normalized autocorrelation function :math:\\rho[i] (by normalized,\nwe mean that :math:\\rho = 1), the ACL :math:\\tau is:\n\n.. 
math::\n\n\\tau = 1 + 2 \\sum_{i=1}^{K} \\rho[i].\n\nThe number of samples used :math:K is found by using the first point\nsuch that:\n\n.. math::\n\nm \\tau[K] \\leq K,\n\nwhere :math:m is a tuneable parameter (default = 5). If no such point\nexists, then the given data set it too short to estimate the ACL; in this\ncase inf is returned.\n\nThis algorithm for computing the ACL is taken from:\n\nN. Madras and A.D. Sokal, J. Stat. Phys. 50, 109 (1988).\n\nParameters\n-----------\ndata : TimeSeries or array\nA TimeSeries of data.\nm : int\nThe number of autocorrelation lengths to use for determining the window\nsize :math:K (see above).\ndtype : int or float\nThe datatype of the output. If the dtype was set to int, then the\nceiling is returned.\n\nReturns\n-------\nacl : int or float\nThe autocorrelation length. If the ACL cannot be estimated, returns\nnumpy.inf.\n\"\"\"\n\n# sanity check output data type\nif dtype not in [int, float]:\nraise ValueError(\"The dtype must be either int or float.\")\n\n# if we have only a single point, just return 1\nif len(data) < 2:\nreturn 1\n\n# calculate ACF that is normalized by the zero-lag value\nacf = calculate_acf(data)\n\ncacf = 2 * acf.numpy().cumsum() - 1\nwin = m * cacf <= numpy.arange(len(cacf))\nif win.any():\nacl = cacf[numpy.where(win)]\nif dtype == int:\nacl = int(numpy.ceil(acl))\nelse:\nacl = numpy.inf\nreturn acl" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6163404,"math_prob":0.97339416,"size":4855,"snap":"2020-34-2020-40","text_gpt3_token_len":1336,"char_repetition_ratio":0.15233973,"word_repetition_ratio":0.013297873,"special_character_ratio":0.3130793,"punctuation_ratio":0.18057021,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992355,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-20T01:06:25Z\",\"WARC-Record-ID\":\"<urn:uuid:091bb7fa-8fa9-42c0-a679-00780e0cca37>\",\"Content-Length\":\"25770\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4afe8d68-22be-4faf-929d-8714b969de4e>\",\"WARC-Concurrent-To\":\"<urn:uuid:f49f529a-1063-4038-bf89-330289611c5c>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"http://pycbc.org/pycbc/latest/html/_modules/pycbc/filter/autocorrelation.html\",\"WARC-Payload-Digest\":\"sha1:DLZO76IZPTFZDS266K2KHHSMWDB4DH2F\",\"WARC-Block-Digest\":\"sha1:AJTIL7JLCYA42DUH4UBKMFP3STRSVYBO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400193087.0_warc_CC-MAIN-20200920000137-20200920030137-00193.warc.gz\"}"}
https://os.mbed.com/docs/mbed-os/v5.14/apis/mbedcrc.html
[ "Report an issue in GitHub or email us\n\n# MbedCRC\n\nThe MbedCRC Class provides support for Cyclic Redundancy Check (CRC) algorithms. MbedCRC is a template class with polynomial value and polynomial width as arguments.\n\nYou can use the `compute` API to calculate CRC for the selected polynomial. If data is available in parts, you must call the `compute_partial_start`, `compute_partial` and `compute_partial_stop` APIs in the proper order to get the correct CRC value. You can use the `get_polynomial` and `get_width` APIs to learn the current object's polynomial and width values.\n\nROM polynomial tables are for supported 8/16-bit CCITT, 16-bit IBM and 32-bit ANSI polynomials. By default, ROM tables are used for CRC computation. If ROM tables are not available, then CRC is computed at runtime bit by bit for all data input.\n\nFor platforms that support hardware CRC, the MbedCRC API replaces the software implementation of CRC to take advantage of the hardware acceleration the platform provides.\n\n## MbedCRC examples\n\n### Example 1\n\nBelow is a CRC example to compute 32-bit CRC.\n\n``````#include \"mbed.h\"\n\nint main()\n{\nMbedCRC<POLY_32BIT_ANSI, 32> ct;\n\nchar test[] = \"123456789\";\nuint32_t crc = 0;\n\nprintf(\"\\nPolynomial = 0x%lx Width = %d \\n\", ct.get_polynomial(), ct.get_width());\n\nct.compute((void *)test, strlen((const char*)test), &crc);\nprintf(\"The CRC of data \\\"123456789\\\" is : 0x%lx\\n\", crc);\nreturn 0;\n}\n\n``````\n\n### Example 2\n\nBelow is a 32-bit CRC example using `compute_partial` APIs.\n\n``````#include \"mbed.h\"\n\nint main() {\nMbedCRC<POLY_32BIT_ANSI, 32> ct;\n\nchar test[] = \"123456789\";\nuint32_t crc;\n\nct.compute_partial_start(&crc);\nct.compute_partial((void *)&test, 4, &crc);\nct.compute_partial((void *)&test, 5, &crc);\nct.compute_partial_stop(&crc);\n\nprintf(\"The CRC of 0x%lx \\\"123456789\\\" is \\\"0xCBF43926\\\" Result: 0x%lx\\n\",\nct.get_polynomial(), crc);\n\nreturn 0;\n}\n\n``````\n\n### Example 3\n\nBelow is a CRC example for the SD driver.\n\n``````#include \"mbed.h\"\n\nint crc_sd_7bit()\n{\nMbedCRC<POLY_7BIT_SD, 7> ct;\nchar test;\nuint32_t crc;\n\ntest = 0x40;\ntest = 0x00;\ntest = 0x00;\ntest = 0x00;\ntest = 0x00;\n\nct.compute((void *)test, 5, &crc);\n// CRC 7-bit as 8-bit data\ncrc = (crc | 0x01) & 0xFF;\nprintf(\"The CRC of 0x%lx \\\"CMD0\\\" is \\\"0x95\\\" Result: 0x%lx\\n\",\nct.get_polynomial(), crc);\n\ntest = 0x48;\ntest = 0x00;\ntest = 0x00;\ntest = 0x01;\ntest = 0xAA;\n\nct.compute((void *)test, 5, &crc);\n// CRC 7-bit as 8-bit data\ncrc = (crc | 0x01) & 0xFF;\nprintf(\"The CRC of 0x%lx \\\"CMD8\\\" is \\\"0x87\\\" Result: 0x%lx\\n\",\nct.get_polynomial(), crc);\n\ntest = 0x51;\ntest = 0x00;\ntest = 0x00;\ntest = 0x00;\ntest = 0x00;\n\nct.compute((void *)test, 5, &crc);\n// CRC 7-bit as 8-bit data\ncrc = (crc | 0x01) & 0xFF;\nprintf(\"The CRC of 0x%lx \\\"CMD17\\\" is \\\"0x55\\\" Result: 0x%lx\\n\",\nct.get_polynomial(), crc);\n\nreturn 0;\n}\n\nint crc_sd_16bit()\n{\nchar test;\nuint32_t crc;\nMbedCRC<POLY_16BIT_CCITT, 16> sd(0, 0, false, false);\n\nmemset(test, 0xFF, 512);\n// 512 bytes with 0xFF data --> CRC16 = 0x7FA1\nsd.compute((void *)test, 512, &crc);\nprintf(\"16BIT SD CRC (512 bytes 0xFF) is \\\"0x7FA1\\\" Result: 0x%lx\\n\", crc);\nreturn 0;\n}\n\nint main()\n{\ncrc_sd_16bit();\ncrc_sd_7bit();\nreturn 0;\n}\n\n``````\n##### Important Information for this Arm website\n\nThis site uses cookies to store information on your computer. By continuing to use our site, you consent to our cookies. 
If you are not happy with the use of these cookies, please review our Cookie Policy to learn how they can be disabled. By disabling cookies, some features of the site will not work." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.60990596,"math_prob":0.9749763,"size":3099,"snap":"2019-43-2019-47","text_gpt3_token_len":1071,"char_repetition_ratio":0.14701131,"word_repetition_ratio":0.2510917,"special_character_ratio":0.3765731,"punctuation_ratio":0.20629922,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9791849,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T07:49:01Z\",\"WARC-Record-ID\":\"<urn:uuid:ca7189e9-fb23-46b0-b68c-b6dda415c8ff>\",\"Content-Length\":\"81186\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9ab54b5d-425d-4396-b655-df740e8fef13>\",\"WARC-Concurrent-To\":\"<urn:uuid:b4347e1e-c047-43ea-b084-aac3f7a096d4>\",\"WARC-IP-Address\":\"52.43.249.249\",\"WARC-Target-URI\":\"https://os.mbed.com/docs/mbed-os/v5.14/apis/mbedcrc.html\",\"WARC-Payload-Digest\":\"sha1:WUIOIK3DEU6YJXW4Q4H3NTUPZADEPXG7\",\"WARC-Block-Digest\":\"sha1:36LWJAZLYJMRJVAZLXB2MKRTX5V3RFAD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670743.44_warc_CC-MAIN-20191121074016-20191121102016-00522.warc.gz\"}"}
https://metanumbers.com/52723
[ "## 52723\n\n52,723 (fifty-two thousand seven hundred twenty-three) is an odd five-digits composite number following 52722 and preceding 52724. In scientific notation, it is written as 5.2723 × 104. The sum of its digits is 19. It has a total of 2 prime factors and 4 positive divisors. There are 47,920 positive integers (up to 52723) that are relatively prime to 52723.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 5\n• Sum of Digits 19\n• Digital Root 1\n\n## Name\n\nShort name 52 thousand 723 fifty-two thousand seven hundred twenty-three\n\n## Notation\n\nScientific notation 5.2723 × 104 52.723 × 103\n\n## Prime Factorization of 52723\n\nPrime Factorization 11 × 4793\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 2 Total number of prime factors rad(n) 52723 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 52,723 is 11 × 4793. Since it has a total of 2 prime factors, 52,723 is a composite number.\n\n## Divisors of 52723\n\n1, 11, 4793, 52723\n\n4 divisors\n\n Even divisors 0 4 2 2\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 4 Total number of the positive divisors of n σ(n) 57528 Sum of all the positive divisors of n s(n) 4805 Sum of the proper positive divisors of n A(n) 14382 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 229.615 Returns the nth root of the product of n divisors H(n) 3.6659 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 52,723 can be divided by 4 positive divisors (out of which 0 are even, and 4 are odd). The sum of these divisors (counting 52,723) is 57,528, the average is 14,382.\n\n## Other Arithmetic Functions (n = 52723)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 47920 Total number of positive integers not greater than n that are coprime to n λ(n) 23960 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 5380 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 47,920 positive integers (less than 52,723) that are coprime with 52,723. 
And there are approximately 5,380 prime numbers less than or equal to 52,723.\n\n## Divisibility of 52723\n\n m n mod m 2 3 4 5 6 7 8 9 1 1 3 3 1 6 3 1\n\n52,723 is not divisible by any number less than or equal to 9.\n\n## Classification of 52723\n\n• Arithmetic\n• Semiprime\n• Deficient\n\n• Polite\n\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n\n## Base conversion (52723)\n\nBase System Value\n2 Binary 1100110111110011\n3 Ternary 2200022201\n4 Quaternary 30313303\n5 Quinary 3141343\n6 Senary 1044031\n8 Octal 146763\n10 Decimal 52723\n12 Duodecimal 26617\n20 Vigesimal 6bg3\n36 Base36 14oj\n\n## Basic calculations (n = 52723)\n\n### Multiplication\n\nn×i\n n×2 105446 158169 210892 263615\n\n### Division\n\nni\n n⁄2 26361.5 17574.3 13180.8 10544.6\n\n### Exponentiation\n\nni\n n2 2779714729 146554899657067 7726813974619543441 407380813183866188839843\n\n### Nth Root\n\ni√n\n 2√n 229.615 37.4973 15.153 8.79833\n\n## 52723 as geometric shapes\n\n### Circle\n\n Diameter 105446 331268 8.73273e+09\n\n### Sphere\n\n Volume 6.13888e+14 3.49309e+10 331268\n\n### Square\n\nLength = n\n Perimeter 210892 2.77971e+09 74561.6\n\n### Cube\n\nLength = n\n Surface area 1.66783e+10 1.46555e+14 91318.9\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 158169 1.20365e+09 45659.5\n\n### Triangular Pyramid\n\nLength = n\n Surface area 4.81461e+09 1.72717e+13 43048.1\n\n## Cryptographic Hash Functions\n\nmd5 1455d8443f7d124bcbbd60278824ee58 e2977f963327ca11b431da9a67c713e55d272a35 d7fdfaf656c2f6e5b9496efb9798c661b90e58e80ced9680f2be9d6c52482b2f 8cb6175f546d13990e2d882e51898c6bc2eba23ede15a237c6e0f962e85b35820990a3fd6c2b208e619ff917fbbd44d348b0859fb9c1324a94800404287a9faf 50732ad5275ab09bbd0977aec5ed3c7d6b3d5b95" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6305082,"math_prob":0.98255265,"size":4573,"snap":"2020-34-2020-40","text_gpt3_token_len":1610,"char_repetition_ratio":0.117750056,"word_repetition_ratio":0.029498525,"special_character_ratio":0.45047015,"punctuation_ratio":0.07455013,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9956711,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-11T22:50:59Z\",\"WARC-Record-ID\":\"<urn:uuid:07cd14f4-c2d8-4c39-8766-1ecaaeee3d26>\",\"Content-Length\":\"48194\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:00064752-fd9f-46d1-b467-1189e1b61989>\",\"WARC-Concurrent-To\":\"<urn:uuid:46eb9bce-840e-4bb1-9d60-f0ea0d1beaa4>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/52723\",\"WARC-Payload-Digest\":\"sha1:GTYEYTNUZYSUPX7C6ONC2GIECMEBUPL5\",\"WARC-Block-Digest\":\"sha1:UCURRQYX2AOESZ2PLUTBSL66ZF25EKO7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738855.80_warc_CC-MAIN-20200811205740-20200811235740-00316.warc.gz\"}"}
https://www.electronicsassignments.com/oscillators-12769
[ "# OSCILLATORS Electronics Help\n\nAn oscillator is a device that generates a periodic, ac output signal without any form of input signal required. TIle term is generally used in the context of a sine-wave signal generator, while a square-wave generator is usually called a multioibrator. A function generator is a laboratory instrument that a user can set to produce sine, square, or triangular waves, with amplitudes and frequencies that can be adjusted at will. Desirable features of a sine-wave oscillator include the ability to produce a low distortion (“pure”) sinusoidal waveform, and, in many applications, the capability of being easily adjusted so that the user can vary the frequency over some reasonable range.\n\nOscillation is: a form of instability caused by feedback that regenerates, or reinforces, a signal that would otherwise die out due to energy losses. In order for the feedback to be regenerative. it must satisfy certain amplitude and phase relations that we will discuss shortly. Oscillation often plagues designers and users of high-gain amplifiers been. Use of unintentional feedback paths that create signal regeneration at one or more frequencies. By contras • an oscillator is designed to have a feedback path with known characteristics, so that a predictable oscillation will occur at a predetermined frequency\n\nThe Barkhausen Criterion\n\nWe have stated that an oscillator has no input per se, so the reader may wonder what we mean by “feedback”-feedback to where? In reality, it makes no difference where, because we have a closed loop with no summing junction at which any external mp t i added. Thus, we could start anywhere in the loop and call ‘that point both the “input” and the “output”; in other words, we could think of the “feedback” path as the entire path through which signal flows in going completely around the loop. However, it is customary and convenient to take the output of an amplifier as a reference point and to regard the Iecdbackjpath as that portion of the loop that lies between amplifier output and amplifier input. This viewpoint is illustrated in Figure 14-41, where we show an amplifier having gain A and a feedback I”ltB ~~.l1g gain . p. is t n .:’!~I feedback ratio that specifics the portion of amplifier output voltage fcrl hack to amplifier input. Every oscillator must have an amplifier, or equivalent device, that supplies energy (from the de supply) to replenish resistive losses and thus sustain oscillation.\n\nIn order for the system shown in Figure 14-41 to osc;~\\’.~’. l\\lc loop gain “/3 must satisfy the Barkhausen criterion, namely,\n\nImagine a small variation in signal level occurring at the input to the arn rlifier, perhaps due to noise. The essence of the Barkhausen criterion is that this variation will be reinforced and signal regeneration will occur only if the net gain around the loop. beginning and ending at the point where the variation occurred, is unity. It is important to realize that unity gain means not only a gain magnitude of I, hut also an in-phase signal reinforcement. Negative feedback causes signal cancellation because the feedback voltage is out of phase. By contrast, the unity loop-gain criterion for oscillation is often called positive feedback.\n\nTo understand and apply the Barkhausen criterion, we must regard both the gain and the phase shift of AJ3 asfunctions offrequency. 
Reactive elements, capacitance in particular, contained in the amplifier and/or feedback path cause the gain magnitude and phase shift to change with frequency. In general, there will be only one frequency at which the gain magnitude is unity and at which, simultaneously, the total phase shift is equivalent to 0 degrees (in phase, a multiple of 360°). The system will oscillate at the frequency that satisfies those conditions. Designing an oscillator amounts to selecting reactive components and incorporating them into circuitry in such a way that the conditions will be satisfied at a predetermined frequency.", null, "To show the dependence of the loop gain Aβ on frequency, we write Aβ(jω), a complex phasor that can be expressed in both polar and rectangular form. Example 14-15: The gain of a certain amplifier as a function of frequency is A(jω) = -16 × 10^5/(jω). A feedback path connected around it has β(jω) = 10^4/(2 × 10^3 + jω)^2. Will the system oscillate? If so, at what frequency?", null, "Example 14-15 illustrated an application of the polar form of the Barkhausen criterion, since we solved for the angle of Aβ and then determined the frequency at which that angle equals -360°. It is instructive to demonstrate how the same result can be obtained using the rectangular form of the criterion: Aβ = 1 + j0. Toward that end, we first expand the denominator:\n\nAβ = -16 × 10^9 / (-4 × 10^3 ω^2 + j(4 × 10^6 ω - ω^3))\n\nTo satisfy the Barkhausen criterion, this expression for Aβ must equal 1. We therefore set it equal to 1 and simplify:\n\n(16 × 10^9 - 4 × 10^3 ω^2) + j(4 × 10^6 ω - ω^3) = 0\n\nIn order for this expression to equal 0, both the real and imaginary parts must equal 0. Setting either part equal to 0 and solving for ω will give us the same result we obtained before.\n\nThis result was obtained with somewhat more algebraic effort than previously. In some applications, it is easier to work with the polar form than the rectangular form, and in others, the reverse is true.\n\nPosted on November 19, 2015 in Applications of Operational Amplifiers" ]
[ null, "https://www.electronicsassignments.com/wp-content/uploads/2015/11/Capture308.jpg", null, "https://www.electronicsassignments.com/wp-content/uploads/2015/11/Capture310-300x45.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9247704,"math_prob":0.9316724,"size":5184,"snap":"2021-31-2021-39","text_gpt3_token_len":1096,"char_repetition_ratio":0.12915058,"word_repetition_ratio":0.0,"special_character_ratio":0.20524691,"punctuation_ratio":0.09720786,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9624773,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T02:03:53Z\",\"WARC-Record-ID\":\"<urn:uuid:5b98335f-d76c-4dd6-aa83-fa618645a652>\",\"Content-Length\":\"57844\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8b923cdd-1ebd-4e59-866b-3a7745b44669>\",\"WARC-Concurrent-To\":\"<urn:uuid:6fd4fc05-fa0e-41ab-906c-86227a1e9451>\",\"WARC-IP-Address\":\"172.67.141.152\",\"WARC-Target-URI\":\"https://www.electronicsassignments.com/oscillators-12769\",\"WARC-Payload-Digest\":\"sha1:ECQ2IRN7ARI4B3B5QKFX2FJXK3HQFXN6\",\"WARC-Block-Digest\":\"sha1:K3K6KWXUI4O4RFDWAYP4BW5T75RAQQB2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057496.18_warc_CC-MAIN-20210924020020-20210924050020-00249.warc.gz\"}"}
http://cbat.coroilcontrappunto.it/calculating-wave-speed-frequency-and-wavelength-worksheet-answers.html
[ "Calculating Wave Speed Frequency And Wavelength Worksheet Answers\n_____ is the number of waves per unit of time. Wave Calculations Worksheet Lena dunham has opened up about grief infertility and what it means to quot make a family quot in a heartfelt post on mother s day dunham shared a photo of herself the night before she underwent her Students will identify the primary causes for ocean currents and waves students will explain how and why ocean currents vary with increasing latitude students will. The speed of any periodic wave is the product of its wavelength and frequency. c (m/s) = νx λ ν(Hz) = c ÷ λ λ(m) = c ÷ ν. Speed of light = wavelength x frequency. 8 meters, what is the frequency of the wave? (3) Knowns Unknowns Formula 9. You were given the frequency (8000 Hz) and the velocity of the wave (343 meters/ second). Only if changing the frequency, somehow changes the stiffness or inertia, of the medium. Repeat your measurements by increasing the water level in the tube. Find the frequency of a sound wave of speed 330 m/s and wavelength 11 m. Although you blow in through the mouth piece of a flute, the opening you're blowing into isn't at the end of the pipe, it's along the side of the flute. If you are talking about electromagnetic waves, speed is constant, it is 2. FREQUENCY OF OSCILLATION x WAVELENGTH = SPEED OF LIGHT. ★★★ Correct answer to the question: An elephant can hear sound with a frequency of 15 hz. 3 | Page Teaching Lesson 9 Lesson 9 ! 2. which is a faster wave?. What is its. In the relation speed = wavelength x frequency, if you have. 40 m, respectively. Wavelength = 100m. A photon has a frequency (() of 2. every 10 s, so the frequency is 0. Use a calculator and do the actual math – don’t just leave the answer as a fraction! Violet light has a wavelength of 4. What is it's frequency? 8. Calculate the wavelength and energy of light that has a frequency of 1. wavelength of 0. Dividing speed by frequency gives you the wavelength. wavelength = 2 x L = 65 x 2 = 130 cm = 1. We can use the formula for wave speed on a string: Calculate speed=12m/0. Which of the following statements is true? The wave with the longer wavelength has (a) higher speed. The amplitude of a wave is the maximum distance the wave is displaced. A sound wave in a steel rail has a frequency of 620 Hz and a wavelength of 10. If you know the speed and frequency of the wave, you can use the basic. Just plug in the wave's speed and frequency to solve for the wavelength. Its wavelength is 200 × 10-9 (or 200 nm). The frequency of the George Washington Bridge is 2. % As frequency increases, wavelength increases. Waves worksheet. Sound and Waves worksheet - Algonquin & Lakeshore. Like the speed of any object, the speed of a wave refers to the distance that a crest (or trough) of a wave travels per unit of time. 6262 x 10-34 J•s) 1. Planck's equation makes it possible to understand blackbody radiation and the photoelectric effect. The amplitude of a wave is the maximum distance the wave is displaced. For many waves it is important to remember these points: the frequency of the wave is set by whatever is driving the oscillation in the medium. In the relation speed = wavelength x frequency, if you have. Calculate the wavelength of radiation with a frequency of 8. Light is measured by its wavelength (in nanometers) or frequency (in Hertz). what is the wavelength of this wave if the speed of sound in air is 343 m/s? - edu-answer. 
If you draw a beam of light in the form of a wave (without worrying too much about what exactly is causing the wave!), the distance between two crests is called the wavelength of the light. Given: frequency. A wave with a higher frequency, or a longer wavelength, transmits more energy with each photon. 00 × 108m/s 1 m = 1 × 109nm 1 kJ = 1000 J. Wave Basics Wave speed = Frequency x Wavelength This Study Pack aims to cover: 1. a wavelength is the dimensions of the wave. A wave has a frequency of 900 Hz and a wavelength of 200 m. Calculate the wave speed for a wave with a wavelength of 2 m and a. SP9 Study Packs are prepared by Qualified Teachers and Specialists and are a complete range of. A signal's wavelength inside a waveguide is dependent on the medium inside the waveguide. 5 amplitude (cm) 0. A wave on a certain guitar string travels at a speed of 200m/s. Displaying top 8 worksheets found for - Amplitude Frequency And Wavelength. 5 MHz (FM 99. Given Rearranged Equation Work Final Answer 540hz 2. The wave equation relates the frequency, wavelength and speed (HS-PS4-1). 10 x 10-12 m. Calculate the frequency of a wave whose wavelength is 6. The answer from the tool will be given in the S. 4 m? Wave Speed and Frequency: The speed of a wave is defined as the distance traveled by. Waves travel at different speeds in different media. 0 F Wave 2 # of cycles _____5 frequency (Hz) ___5. Calculations of wavelength, frequency and energy ANSWERS General Chemistry Mr. How to calculate Wave speed, Wavelength and frequency using The Wave speed Equation. Frequency: 5. The same applies for higher order harmonics. (\\si{\\hertz}). 9 × 1014 = 4. But what factors affect the speed of a wave. This is a worksheet that asks students to calculate wave speed, then rearrange the equation and calculate frequency and wavelength. 00} the frequency of the wave produced: f s the speed of the wave: v = 5. wave speed _____ 5. The frequency is measured in Hertz and the Wavelength is measured in meters. wave-like properties b. Speed of Light [and all Electromagnetic Spectrum Waves] (c) = 3. We can use the formula for wave speed on a string: Calculate speed=12m/0. 0 Hz to have a wavelength of 0. A photon has a frequency (() of 2. By default, our Doppler effect calculator has this value set to 343. A single frequency wave will appear as a sine wave (sinusoid) in either case. Get Free Access See Review. Answer (b). If a sound wave travels at a frequency of 55 Hz, what would its wavelength be? 4. amplitude, speed. Demonstrate how to calculate frequency of the. Define the following terms. X Research source Calculating wavelength is dependent upon the information you are given. A wave with an amplitude of 1. If you're seeing this message, it means we're having trouble loading external resources on our website. A wave along a guitar string has a frequency of 540 Hz and a wavelength of 2. • Using the settings from the part 1, measure the wavelength of the wave. What is the frequency of a spectral line having the wavelength 46 x 10 7 m. Equation Rearranged Equation Work Final Answer The speed of sound in air is about 340 m/s. Play this game to review Atoms & Molecules. Calculations of wavelength, frequency and energy ANSWERS General Chemistry Mr. Calculate the frequency of yellow light with a wavelength of 580 x 10-9 m. How long does it take the sound waves to reach 5 meters? What is the speed of the wave now? Decrease the frequency to 200Hz. Its wavelength is 200 × 10-9 (or 200 nm). 
Speed of all Electromagnetic Spectrum Waves (c) = 3. 6 metres (m) and travel through space at a speed of 300 000 000 metres per second (m/s). penod, frequency f. Wavelength is the distance of 1 frequency wave peak to the other and is most commonly associated with the electromagnetic spectrum. Most people have heard the variation in frequency of sound from an ambulance as it speeds past. Frequency And Wavelength Of A Wave. If you know the speed and frequency of the wave, you can use the basic. Then enter a number value in one of the display boxes, and press the Calculate button, The corresponding conversions will appear in exponential form in the remaining boxes. 68 x 106 Hz. ★★★ Correct answer to the question: An elephant can hear sound with a frequency of 15 hz. (c) The wave speed decreases and the wavelength decreases. Some of the worksheets for this concept are Wave speed equation practice problems, Name key period speed frequency wavelength, Physics work b frequency period and wavespeed name, Wavelength frequency energy work, Wave speed frequency wavelength practice problems, Universal wave. amplitude requen High pitched sounds have relatively large and small nod. Different colors of light carry different amounts of energy. Chapter 5 Assessment pages 166–169 Section 5. Created Date: 12/9/2016 10:50:39 AM. the next higher harmonic has a frequency of 23. A ship uses a sonar system to detect underwater objects in the ocean. Check the speed calculator for more informations about speed and velocity. Wavelength, Frequency, Energy, Speed, Amplitude, Period Equations & Formulas - Chemistry & Physics This chemistry and physics video tutorial focuses on electromagnetic waves. Calculate its wavelength. #T# is the period of the wave in seconds #2. 5 Hz? Data Equations Math Answer. the wavelength of the wave Sally Sue sent: — 2 waves 20 = 5. By completing this activity, 8th and 9th grade science students will learn how to calculate wave speed. Many of our dipole antennas have adjustable element lengths and by entering the frequency of interest this calculator will calculate the wavelength (or element length) corresponding to the required frequency. 82 seconds after the scream. (Hint: Convert the wavelength to meters before calculating the frequency. Your answer should be less than 4 meters. 998 × 10 8 m/s:. v is the speed of sound and T is the temperature of the air. X-ray with wavelength of 1 x 10-11 m. Where, v = velocity of the wave, f = frequency of the wave, λ = wavelength. You need some information to get the wave speed. The speed a wave travels is the wavelength multiplied by this frequency. What are the frequency and wavelength of the hum. Frequency can be defined as the number of oscillations in a unit time. Calculate the wavelength and energy of light that has a frequency of 1. 4 m? Wave Speed and Frequency: The speed of a wave is defined as the distance traveled by. Wave Practice Answer Key. wave speed = frequency x wavelength. A wave along a guitar string has a frequency of 540 Hz and a wavelength of 2. The wavelength, λ = 406. 5 cm and a wavelength of 3 cm3. 74 × 10 Hz 7. What is the frequency of this wave? 10. Sound Wave Equations Calculator to calculate the wave velocity (V) from the given frequency (f) and wavelength (λ) Code to add this calci to your website Just copy and paste the below code to your webpage where you want to display this calculator. 
1Human eyes detect these orange “sea goldie” fish swimming over a coral reef in the blue waters of the Gulf of Eilat (Red Sea) using visible light. Humans can normally hear anywhere from. 99 x 10^-25. Substitute the value for the speed of light in meters per second into Equation $$\\ref{6. What is the speed of this wave? 12. and what is the SI unit for frequency? Sketch a diagram of a wave and label its wavelength and its amplitude. Calculating Frequency (F) and Wavelength (W) Show your work! Use a calculator and do the actual math - don't just leave the answer as a fraction! Violet light has a wavelength of 4. Given Rearranged Equation Work Final Answer 2. and what is the SI unit for frequency? Sketch a diagram of a wave and label its wavelength and its amplitude. Frequency and Period are inversely related f = 1 / T and T = 1 / f. How far apart are the wave crests? 1. What is the v if λ = 8 m and ƒ = 20 Hz? 2. So the wavelength is 35. Calculate the speed of a wave that has a frequency of 5000 Hz and a wavelength of 20 m 3. Calculate the λ given the ν of radiation is 5. ) Look at the EM spectrum below to answer this question. 75 Hz; period: 0. Calculate wavelength ofviolet light with a frequency of 750 x 1012 Hz. A wave with an amplitude of 1. Maxwell’s Equations: Electromagnetic Waves Predicted and Observed • Restate Maxwell’s. This worksheet has 25 detailed example problems that will assess your students' ability to calculate wave speed, frequency, wavelength and period. Frequency is how many complete waves go by per second. Calculate:- (a) the frequency, (b) the speed, and (c) the wavelength of the waves. • Using a general equation for speed of the front of the wave that is. Waveform C. v = ‚f: (5) 3. The amplitude is the size of the crest, which is 12. 5 540hz * 2. This equation is also called the 'wave equation' and applicable to all types of wave. Then report. Calculate the wavelength of light that has a frequency of 5. 5M H z f = 3 × 10 8 m e t e r s p e r s e c o n d 4 × 10 m e t e r s = 0. wavelength = 2*5 = 10 m. The wavelength will be twice the length of the string and this. From both together, the wave speed can be determined. 99x10^8 m/s, so you will always be looking for wavelength or frequency. Violet light has a wavelength of 4. Calculations of wavelength, frequency and energy ANSWERS General Chemistry Mr. Experiment : The wavelength and speed of a wave can be influenced by many factors. period _____ 4. WFNX broadcasts radio waves at a frequency of Hertz. Calculate its frequency. Wavelength D. the wave travels from air into water? Justify your answer. For the relationship to hold mathematically, if the speed. The speed of sound at the current temperature T=20°C is 343. A sound wave in a steel rail has a frequency of 620 Hz and a wavelength of 10. EM SPECTRUM, WAVELENGTH, FREQUENCY, AND ENERGY WORKSHEET 1. High School Physics Chapter 13 Section 2. Question 1: The chart below shows the range of hearing of various mammals: Use the chart and a value for the speed of sound of 340 ms-1 to answer the following questions:. Wavelength is the distance of 1 frequency wave peak to the other and is most commonly associated with the electromagnetic spectrum. Wave spread T3B04 How fast does a radio wave travel through free space? A. Since the fundamental frequency is proportional to the speed then the fundamental frequency will also increase. Explanation: The speed of a wave is given by the product of its frequency and wavelength. 
In question 5 of Sound Thinking, students derived a formula for the speed of sound (speed = wavelength x frequency). Calculate the speed of the wave. The speed, c, of a transverse wave in a string depends on the string’s density3, ⇢, and the tension, T. A photon has a frequency (n) of 2. A wave cycle consists of one complete wave - starting at the zero point, going up to a wave crest, going back down to a wave trough, and back to the zero point again. Waves travel at different speeds in different media. 5 x 1010 cycles/s. Multiply the the Planck constant, 6. Frequency wavelength in feet:. f = 3×108 meters per second 4 ×10 meters = 0. speed = (pi/2 m) *. f=1/T# where: #f# is the frequency of the wave in hertz. A wave on a certain guitar string travels at a speed of 200m/s. Calculate the speed of the wave. Calculate the wavelength of light that has a frequency of 5. 5 meters 540hz * 2. Find the speed of a 20 Hz wave that has a 5 meter wavelength. 4 MHz (megahertz; MHz = 10 6 s-1). A wave has a frequency of 900 Hz and a wavelength of 200 m. s Energy = h x (c ÷ wavelength) 9. The speed of microwaves is 3. Just click on a worksheet, print it out and get to work. c = speed of light (3. Calculate the frequency of a 2. Given the wavelength of the wave, the frequency is equal to the speed (of sound) in the medium divided by the wavelength. Wave Properties III (144-150) 1. A wave on the sea passes by an observer. Using this equation, calculate the de Broglie wave-length of a helium nucleus (mass=6. What is the speed of this wave? 12. Describe how to calculate each of wavelength speed and frequency if you know the other two factors What is the wavelength of a 25 hertz wave traveling at 35 cms? Wiki User 2009-08-27 16:49:31. 00005 second. Speed of sound in air is approx. wavelength = 2 x L = 65 x 2 = 130 cm = 1. By looking on the chart you may convert from wavelength to frequency and frequency to wavelength. What frequency do we need to tune our receiver to in order to hear the broadcast? Radio waves travel at the speed of light, so in this case v is equal to 299,792,458 metres per second (m/s). [and all Electromagnetic Spectrum Waves] (c) = 3. Just plug in the wave's speed and frequency to solve for the wavelength. One wavelength equals the distance between two successive wave crests or troughs. )The frequency of the radio wave emitted by a cordless telephone is 900 MHz. Since each wave source generates ( f ) wavelengths per second and each wavelength is ( λ ) units of length long; therefore the wave speed formula is: v = f λ. 5M H z f = 3 × 10 8 m e t e r s p e r s e c o n d 4 × 10 m e t e r s = 0. Unformatted text preview: WAVE SPEED, FREQUENCY AND WAVELENGTH PRACTICE PROBLEMS Sound waves in air travel at approximately 330m/s. 5 waves in 10 s. 283 x 10 14 s-1. Frequency d. Note that the speed of sound in solids is much greater than in gases like air (~340 m/s). Favorite Answer. As the frequency of a wave increases, the shorter its wavelength is. Wavelength (lambda) - Distance after which the wave begins to repeat (Units: metres). 01 x 1014 Hz. A periodic wave has a wavelength of 0. (observations on separate paper). Hertz is the SI unit for frequency. AND use S = D/T to find distance or time (using vs for S). Ans: E = 1. Which of the following statements is true? The wave with the longer wavelength has (a) higher speed. Asked for: wavelength. So solving for wavelength, we get (3 x 10^8 m/s)/(900 x 10^6 Hz) = 0. What is the speed of the. 
Answer: 1 MHzWavelength practice problem 2:What is the wavelength of sound which has the speed of 1. 3 = ~5200 m/s. v = ‚f: (5) 3. By completing this activity, 8th and 9th grade science students will learn how to calculate wave speed. 63 x 10^-34, by the wave's speed. 4 MHz (megahertz; MHz = 10 6 s-1). How many complete waves are. Question: What is the frequency of a sound wave in air at {eq}20^{\\circ}C {/eq} that has a wavelength of 0. 10-m wavelength when the speed of sound is 340 m/s? (OpenStax 17. 10 x 10-12 m. Calculate its wavelength. Wavelength to Frequency Formula Questions: 1) One of the violet lines of a Krypton laser is at 406. frequency/speed=wavelength. f = 1/T f=frequency, measured in Hz T= period, measured in s. Answer: speed = frequency x wavelength. The wavelength of this particle is of the same order. The!ratio!of!a!wave’s!height!to!wavelength!(H:L!ratio)!can!tell!us!some!information! about!the!wave,!for!example!if!it!is. The speed of any electromagnetic waves in free spaceis the speed of lightc = 3*108 m/s. Where, v = velocity of the wave, f = frequency of the wave, λ = wavelength. Frequency And Wavelength Of A Wave. Play this game to review Atoms & Molecules. 02 x 1020 Hz? 11. A wave along a guitar string has a frequency of 540 Hz and a wavelength of 2. Frequency (Hz) to Wavelength (m) Hz = 299800000 m λ = Wavelength in meters, C = Speed of the wave in meters per second, f = frequency of the wave in herz. EM SPECTRUM, WAVELENGTH, FREQUENCY, AND ENERGY WORKSHEET 1. What is the frequency of a spectral line having the wavelength 46 x 10 7 m. What is the frequency in hertz of blue light having a wavelength of 425 nm? (nano = 1 X 10-9). Download Light Worksheet Wavelength Frequency And Energy Answers - Wavelength, Frequency, Speed & Energy Worksheet c = λ ν ν = c / λ λ= c / ν E = hv E = h c/λ c = speed of light (30 x 10 8 m/s) λ = wavelength ν = frequency E = energy h = Planck’s constant (66262 x 10-34 J•s) 1 Calculate the λ given the ν of radiation is 510 x 10. This worksheet has 25 detailed example problems that will assess your students' ability to calculate wave speed, frequency, wavelength and period. Once all of the circles are in place, the child will also need to change one of the boxes. But what factors affect the speed of a wave. Frequency (f) - Number of waves passing a fixed point in one second (Units: Hertz). Question: What is the frequency of a sound wave in air at {eq}20^{\\circ}C {/eq} that has a wavelength of 0. Calculate speed and pitch using algebraic equations. Can be used for AQA - P1 - Waves - Measuring waves. Its unit is hertz or s⁻¹. 4 For a periodic wave, wavelength is the ratio of speed over frequency. Calculate its frequency. c = speed of light (3 x 108 m/sec) f = frequency. Calculate the wavelength of radiation with a frequency of 8. 84 m U = ? S = (342 m/s) / 50 Hz 12. e) Calculate the frequency of the waves. 2, we know that the product of the wavelength and the frequency is the speed of the wave, which for electromagnetic radiation is 2. This worksheet has 25 detailed example problems that will assess your students' ability to calculate wave speed, frequency, wavelength and period. But what factors affect the speed of a wave. (a) A laser used in eye surgery to fuse detached retinas produces radiation with a wavelength of 640. ) Look at the EM spectrum below to answer this question. The tension in the cable is 500. 
Wave Calculations Worksheet Lena dunham has opened up about grief infertility and what it means to quot make a family quot in a heartfelt post on mother s day dunham shared a photo of herself the night before she underwent her Students will identify the primary causes for ocean currents and waves students will explain how and why ocean currents vary with increasing latitude students will. Calculate the frequency of this radiation. 8 meters, what is the frequency of the wave? (3) Knowns Unknowns Formula 9. Each of these properties is described in more detail below. f = 1/T f=frequency, measured in Hz T= period, measured in s. In these assessments, you can expect to encounter questions that require you to calculate the wavelength, recall facts about electromagnetic waves and understand the. What is its frequency? 13. During the course of this unit, you should become very comfortable with the process of solving problems like the following. 5m-long sound wave. The user gave us a request - /4386/, where asked to create a calculator \"calculation of the wave height and intervals between waves (frequency)?\". 0 nm and the energy of a mole of these photons. Wave spread T3B04 How fast does a radio wave travel through free space? A. What is the frequency? 7. - find the frequency: c= l·u l. We can use the formula for wave speed on a string: Calculate speed=12m/0. b Determine the wavelength of the waves when the frequency of the dipper is doubled. If you are looking for the wavelength use the yellow equation above. s Energy = h x (c ÷ wavelength) 9. Wave velocity= frequencyx wavelength Wavelength = m =x10^m = x10^ft. 5 540hz * 2. by Ron Kurtus (revised 3 December 2012) The Doppler Effect is the change in the observed wavelength or frequency of a waveform, as compared with that emitted from the source, when the source and/or observer are moving with respect to the wave medium. Physical science waves sound. what is the wavelength of this wave if the speed of sound in air is 343 m/s? - edu-answer. Question: What is the frequency of a sound wave in air at {eq}20^{\\circ}C {/eq} that has a wavelength of 0. For sound, the frequency is measured in Hertz, abbreviated Hz, which means is period cycle, from the top of one wave to another, per second. A standing waves pattern is produced that has 4. 01 x 1014 Hz. Calculate its speed. Sound waves travel about one million times more slowly than light waves but their frequency and wavelength formulas are somewhat similar to light wave formulas. Sound and Waves worksheet - Algonquin & Lakeshore. The frequency of the George Washington Bridge is 2. Calculate the frequency of green light with a wavelength of 530 u10-9 m. Calculate the wavelength of any frequency. Wavelength, Frequency, Speed & Energy Worksheet c = λ ν ν = c / λ λ= c / ν E = hv E = h c/λ c = speed of light (3. Make waves with water, sound, and light and see how they are related. frequency/speed=wavelength. 626x10-34J∙s c = the speed of light in a vacuum, 3. Wavelength 1 meter Low Frequency 3 Hz High Frequency 12 Hz 1 second of time The equation for calculating the velocity of a wave is: Velocity = Wavelength x Frequency v = λ x f. The resource includes a PowerPoint presentation with worked solutions to all twelve calculations. Calculate the frequency of this radiation. 4670 × 10-3 m. 0mm, and travel at 60 cm s^-1. 
Louis de Broglie extended the idea of wave-particle du-ality to all of nature with his matter-wave equation: λ= h mv where λ is the particle’s wavelength, m is its mass, v is its velocity, and h is Planck’s constant. What is the wavelength of sound waves produced by a guitar string vibrating at 490 Hz? Equation Rearranged Equation Work Final Answer. Speed of a wave = wavelength x frequency v = If v= velocity (speed), measured in m/s wavelength, measured in m f= fre uency, measured in I-IZ (Hz = Vs) Calculate the ee f the wave. 5 540hz * 2. Calculate the wavelength of an “A” note sounding at 440Hz. Equation Rearranged Equation Work Final Answer. Insert the known values into the equations, and solve. 63 x 10^-34, by the wave's speed. Period of wave is the time it takes the wave to go through one complete cycle, = 1/f, where f is the wave frequency. Box your Answers. 8 × 10-7 m 5. 00 x 108 m/s h = 6. Question 13: You are given the transverse wave below: Draw the following: a. We are to: A. PHYSICS 11 WAVES WORKSHEET 1 Refer to your notes as well as Chapter 14 of the text to answer the following questions. Make waves with water, sound, and light and see how they are related. Energy / Frequency / Wavelength Energy (J) = h x Frequency h (Planck's Constant) = 6. quantum A quantum is the minimum amount of energy. Describing Waves using keywords - wavelength, amplitude & frequency 2. As the frequency of a wave increases, the shorter its wavelength is. The worksheet has to be short, crisp, basic and child friendly. find the amplitude, frequency(Hz), velocity(cm) and wavelength(cm) of the wave. You'll be expected to use this equation correctly or the upcoming chapter test, sound lab and TAKS test. For each wave answer the questions and measure parts of the wave. These problems are pe. Then: speed = wavelength x frequency. The speed of microwaves is 3. Wavelength is the distance of 1 frequency wave peak to the other and is most commonly associated with the electromagnetic spectrum. You need some information to get the wave speed. 91 × 10 7 m 6. Describe how to calculate each of wavelength speed and frequency if you know the other two factors What is the wavelength of a 25 hertz wave traveling at 35 cms? Wiki User 2009-08-27 16:49:31. Then enter a number value in one of the display boxes, and press the Calculate button, The corresponding conversions will appear in exponential form in the remaining boxes. period _____ 4. Calculate the wavelength of radiation with a frequency of 8. This has been designed to be suitable for both A5 and A4 printing. The blood cells reflect sound waves at a frequency of 1. Frequency is a familiar concept that we use every day, and it's not much different in physics! A wave's frequency f f f is the number of complete wavelengths that pass a point in a certain amount of time. [and all Electromagnetic Spectrum Waves] (c) = 3. Wave Worksheet. Calculating Energy and Frequency (f) Calculate the energy of a photon of radiation with a frequency of 8. Speed = 340 m/s speed = frequency × wavelength 340 = frequency × 0·25 frequency = 340 ÷ 0·25 = 1,360 Hz. Adjust the amplitude, frequency, tension, and density as described in the table below. Take the reciprocal and you have the frequency in Hz. Wavelength (λ) — the length of one cycle of the wave. Frequency is how many complete waves go by per second. 14Calculate the λ given the frequency of radiation is 5. The speed of wave B must be _____ the speed of wave A. Equation Rearranged Equation Work Final Answer. The wavelength, λ = 406. 
wave speed = frequency x wavelength. Sound waves in air travel at approximately 330m/s. 2, we know that the product of the wavelength and the frequency is the speed of the wave, which for electromagnetic radiation is 2. Frequency of a wave is given by the equations: #1. 5 x 1014 Hz. The frequency of a sound wave determines the pitch of the sound, i. 165 x 1014 Hz. 01 x 1014 Hz. Question: GOAL Perform Elementary Calculations Using Speed, Wavelength, And Frequency. 2 to calculate the wavelength in meters. Wave Worksheet. 5 wavelengths between the two poles. Can be used for AQA - P1 - Waves - Measuring waves. Calculate the speed of the wave. Speed of a wave wavelength x frequency. Violet light has a wavelength of 4. frequency/speed=wavelength. 1) Determine the fundamental node and wavelength for each tuning fork. If you know the speed and frequency of the wave, you can use the basic. 10 x 10-12 m. Frequency and wavelength are related to each other through the velocity of propagation and this relation is inversely proportional. What is the frequency of the waves? Write what you know: highlight/. Adjust the amplitude, frequency, tension, and density as described in the table below. Just click on a worksheet, print it out and get to work. The Electromagnetic Spectrum POGIL Activity. 0 × 106 meters per second. So, if you know either the frequency or the wavelength you can calculate the other value. Calculate: a. Because the cells are moving, the wavelength is Doppler shifted. Calculating wave speed. 10 x 1014 s-1 2. What is the frequency of microwaves with a wavelength of. In this worksheet, we will practice using the wave speed formula, s = fλ, to calculate the movement of waves of different frequencies and wavelengths. Record: The speed of a wave is the distance a wave pulse travels per second. Download Light Worksheet Wavelength Frequency And Energy Answers - Wavelength, Frequency, Speed & Energy Worksheet c = λ ν ν = c / λ λ= c / ν E = hv E = h c/λ c = speed of light (30 x 10 8 m/s) λ = wavelength ν = frequency E = energy h = Planck's constant (66262 x 10-34 J•s) 1 Calculate the λ given the ν of radiation is 510 x 10. A wave has a speed of 50 m/s and a frequency of 10 Hz. Math Practice On a separate sheet of paper, solve the following problems. A wave has a wavelength of 0. Calculate the frequency of a 2. Just click on a worksheet, print it out and get to work. Waveform C. The relationship between wavelength and frequency is λ=c/f. Find the wavelength Q/ a 200 H: sound. Consider an ocean wave with a wavelength of 3 meters and a frequency of 1 hertz. Question 13: You are given the transverse wave below: Draw the following: a. All light travels at the same speed, but each color has a different wavelength and frequency. Displaying all worksheets related to - Wave Speed Frequency And Wavelength. (Closed pipe) 2) Determine the fundamental and harmonics for 5 tuning forks (long pipe) Find the speed of sound Find the frequency of an unknown 3) Calculate beats for 2 different sets of resonant forks. A shorter wavelength has a greater. This file contains the Wave Speed Worksheet. The speed of sound in air is about 340 m/s. three times larger than 55. A wave has a wavelength of 125 meters is moving at a speed of 20 m/s. Brick's Web Page. The most common example of a loudspeaker that relies on a quarter wavelength acoustic standing wave is a transmission line enclosure. One light beam has wavelength, Il, and. what is the wavelength of this wave if the speed of sound in air is 343 m/s? - edu-answer. 
The second sheet is a completed teacher's answer key. Although you blow in through the mouth piece of a flute, the opening you're blowing into isn't at the end of the pipe, it's along the side of the flute. What is the period of a water wave with a frequency of 0. The unit \"Hz\" is short for hertz, named after the German physicist Heinrich Hertz (1857 - 94). If you know the speed and frequency of the wave, you can use the basic. You should find a gap of about 12 cm between melted marsh mallows!. This answer is nornally given in units of J. Wave Speed, Frequency, & Wavelength Practice Problems Use the above formulas and information to help you solve the following problems. Showing top 8 worksheets in the category - Frequency And Wavelength Of A Wave. The variable c is the speed of light. A sound wave in a steel rail has a frequency of 620 Hz and a wavelength of 10. 0 Hz travels through vulcanized rubber with a wavelength of 0. (e) Determine the maximum transverse speed of the string. This worksheet will prepare students to solve simple problems including wave speed calculations. 0 x 10 8 m/s) λ = wavelength ν = frequency E = energy h = Planck’s constant (6. 50 Hz and a speed of 4. v is the speed of sound and T is the temperature of the air. As the frequency of a wave increases, its wavelength remains the same. 68 x 106 Hz. Wavelength Frequency and Energy Worksheet Answer Key and Wave Review Worksheet Answers Image Collections Worksheet for Kids. Describing Waves using keywords - wavelength, amplitude & frequency 2. 10 x 1014 s-1 2. I measurement unit for that variable. 26 x 10 14 Hz. Waves can interfere with one another and be either constructive or destructive. 2014 Hz Calculate the wavelength of red light with a frequency of460x 1012 6. To get the answers, click here. Speed of light = wavelength x frequency. Calculate the wavelength of this radiation. The wavelength of a wave on a string is 4 meters and its speed is 23 m/sec. ★★★ Correct answer to the question: An elephant can hear sound with a frequency of 15 hz. Calculate the wavelength of radiation with a frequency of 8. Green light has a frequency of 6. 0 m/s and sub B has a speed of 8. The relationship of the speed of sound vw, its frequency f, and its wavelength λ is given by vwfλ , which is the same relationship given for all waves. Calculate: a. Question: What is the frequency of a sound wave in air at {eq}20^{\\circ}C {/eq} that has a wavelength of 0. Solution: From Equation 6. Some of the worksheets displayed are Name key period speed frequency wavelength, Wavelength frequency energy work, Wave speed equation practice problems, Plancks equation name chem work 5 2, Physics work b frequency period and wavespeed name, Em spectrum wavelength frequency and energy work, Em. The unit \"Hz\" is short for hertz, named after the German physicist Heinrich Hertz (1857 - 94). What is the wave speed? 2. Visible light of wavelength 400 nm. 00t) He said that x and y are expressed in cm, and time in seconds. 750 m? Answer: T= 43. If you are looking for the distance the wave travels use the purple equation above. A wave with a frequency of 14 Hz has a wavelength of 3 meters. E = energy. v = ‚f: (5) 3. A wave traveling at 230 msec has a wavelength of 21 meters. Using a setup. What is its speed? 4. 700 m? v = (0. A worksheet to practice using the wave speed equation, complete with answers. Calculating Energy and Frequency (f) Calculate the energy of a photon of radiation with a frequency of 8. Multiply the the Planck constant, 6. E = h × h = 6. 
Where: λ= wavelength in metres. For the relationship to hold mathematically, if the speed. 8 meters, what is the frequency of the wave? 81,2SHz 3. The speed of sound at the current temperature T=20°C is 343. Wavelength = 100m. 84 m U = ? S = (342 m/s) / 50 Hz 12. (a) Diver will hear the sound first because the speed of sound is more in water than in air. What is the wave speed? 2. For example, a sound wave with a frequency of 20 hertz would have a period of 0. Use the wavelength ‚ and the measured resonant frequency of the standing wave f to calculate the wave speed v. , is the 2 lowest frequency resulting in standing waves. Calculate the maximum kinetic energy of the emitted photoelectrons. This is a worksheet that asks students to calculate wave speed, then rearrange the equation and calculate frequency and wavelength. the wave's speed is 340m/s. The relationship of the speed of sound vw, its frequency f, and its wavelength λ is given by vwfλ , which is the same relationship given for all waves. The relationship between frequency (f) and wavelength (() of a wave is described by the equation Waves speed = frequency X wavelength OR c = (. WFNX broadcasts radio waves at a frequency of Hertz. Adjust the amplitude, frequency, tension, and density as described in the table below. and what is the SI unit for frequency? Sketch a diagram of a wave and label its wavelength and its amplitude. What are the frequency and wavelength of the hum. f = V / or f = 1/T 11. A periodic wave has a wavelength of 0. 8 meters, what is the frequency of the wave? 81,2SHz 3. The frequency f of the wave is f = ω/2π, ω is the angular frequency. C = λν E = hν C = 3. A wave with a frequency of 60. The resource includes a PowerPoint presentation with worked solutions to all twelve calculations. Created Date: 12/9/2016 10:50:39 AM. True / False: A higher frequency results when a wave source moves towards an observer. This Site Might Help You. 2 s 5 s 4 s; 7. Calculate Speedwave And Wavelength. Calculate the wavelength of any frequency. 8 meters, what is the frequency of the wave? (3) Knowns Unknowns Formula 9. A wave has a speed of 30 m/sec and a wavelength of 3 meters. 50 x 10-7m. The speed of any electromagnetic waves in free spaceis the speed of lightc = 3*108 m/s. 8 The graph shows the displacement of particles in a sound wave. Its speed is 3 × 10 8 m/s. The frequency of the George Washington Bridge is 2. One light beam has wavelength, O and frequency, f 1. The wavelength of the light could be measured within \\(S'$$ — for example, by using a mirror to set up standing waves and measuring the distance between nodes. (b) An FM radio station broadcasts electromagnetic radiation at a frequency of 103. Calculate the energy of a photon of radiation with a frequency of 8. Wavelength, Frequency, Speed & Energy Worksheet c = λ ν ν = c / λ λ= c / ν E = hv E = h c/λ c = speed of light (3. What frequency and period would be for ally and er cheerful, easant, hard-working partner to produce a standing wave with three nodes? xplai your reasomng by identifying your steps. Given Rearranged Equation Work Final Answer 540hz 2. State one piece of evidence that electromagnetic radiation has: a. Examples of Wave Calculations - Speed, Frequency and Wavelength. What is the wavelength? 3. In the mean time we talk related with Wave Calculations Worksheet Answers, scroll the page to see various variation of pictures to complete your references. 7 × 10-27 kg) moving with a speed of 2. Calculate its speed. 
speed = frequency x wavelength = 4000 x 1. In this quiz, based on AQA's Syllabus A, we help Year 10 and Year 11 pupils revise what they've learned about the wave equation, which describes the relationship between velocity, frequency and wavelength. Note that the results are presented in millimeters. 5 meters 540hz * 2. The speed of of light is 3. Wavelength is used to measure the length of sound waves while frequency is used to measure the recurrence of sound waves. 700 m? v = (0. Speed of Light [and all Electromagnetic Spectrum Waves] (c) = 3. 9 x 10-13m wave?. Physical science waves sound. Calculate the wavelength of radiation with a frequency of 8. speed = (pi/2 m) *. Electromagnetic Waves Example Problems What is the frequency green light that has a wavelength of 5. Using a setup. X Research source Calculating wavelength is dependent upon the information you are given. Or we can measure the height from highest. ULTRASOUND - ultrasonic sound. Calculations of wavelength, frequency and energy ANSWERS General Chemistry Mr. The amplitude of a wave is the maximum distance the wave is displaced. Standing waves on a piece of elastic shock cord, moving water waves in a ripple tank. Light moves with a speed. Example: Calculate the wavelength of a sound wave propagating in sea water from a transducer at a frequency of 50 kHz if the speed of sound in salt water is 1530 m/s. )The frequency of the radio wave emitted by a cordless telephone is 900 MHz. PROPERTIES OF LIGHT WORKSHEET Part 1 - Select the best answer 1. Since each wave source generates ( f ) wavelengths per second and each wavelength is ( λ ) units of length long; therefore the wave speed formula is: v = f λ. In this quiz, based on AQA's Syllabus A, we help Year 10 and Year 11 pupils revise what they've learned about the wave equation, which describes the relationship between velocity, frequency and wavelength. Calculating Frequency Of Waves. 2 Frequency, wavelength and the speed of sound The speed of sound has a joint relationship with both the wavelength and the frequency of the sound. The 'high notes' have a high frequency and a short wavelength. Doubling the frequency of a wave source doubles the speed of the waves. What is the frequency? 7. The worksheet is concise, but covers a variety of wave computational skills. Speed of Light [and all Electromagnetic Spectrum Waves] (c) = 3. All waves are 1 second long. A wave has a speed of 30 m/s and a wavelength of 3 meters. The wave passes with all its energy into medium B. Its wavelength is 200 × 10-9 (or 200 nm). Which has a lower frequency, radio waves or green light? 4. It has a wavelength of 0. The wavelength of this sound wave is 8 9 10 itch of a sound is directly related to the _Q_ of the sound wave. Calculations of wavelength, frequency and energy ANSWERS General Chemistry Mr. 5 cm and a wavelength of 3 cm3. AND use S = D/T to find distance or time (using vs for S). Just click on a worksheet, print it out and get to work." ]
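The arithmetic in these problems is mechanical enough to script. A minimal sketch in Python, assuming the constants used in the worksheet (c = 3.0 × 10⁸ m/s, h = 6.626 × 10⁻³⁴ J·s), reproduces a few of the worked answers:

```python
# Wave relations used throughout the worksheet:
# v = f * lambda, f = v / lambda, lambda = v / f, photon energy E = h * nu.
C = 3.0e8        # speed of light in free space, m/s
H = 6.626e-34    # Planck's constant, J*s

def wave_speed(frequency_hz, wavelength_m):
    return frequency_hz * wavelength_m

def frequency(speed_m_s, wavelength_m):
    return speed_m_s / wavelength_m

def wavelength(speed_m_s, frequency_hz):
    return speed_m_s / frequency_hz

def photon_energy(frequency_hz):
    return H * frequency_hz

print(wave_speed(540, 2.5))        # guitar-string wave: 1350.0 m/s
print(frequency(330, 11))          # sound wave: 30.0 Hz
print(frequency(340, 0.25))        # 1360.0 Hz
print(wavelength(C, 900e6))        # 900 MHz cordless phone: ~0.333 m
print(photon_energy(C / 530e-9))   # green light: ~3.75e-19 J per photon
```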
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9090831,"math_prob":0.996346,"size":41706,"snap":"2020-45-2020-50","text_gpt3_token_len":10467,"char_repetition_ratio":0.26459163,"word_repetition_ratio":0.33205754,"special_character_ratio":0.25768474,"punctuation_ratio":0.11887456,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9993129,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-28T02:19:54Z\",\"WARC-Record-ID\":\"<urn:uuid:258b18af-568e-4ced-8b2a-39ccf287f89f>\",\"Content-Length\":\"49832\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1ceb7e61-40c7-458f-bd08-4850f85c7577>\",\"WARC-Concurrent-To\":\"<urn:uuid:5318d22d-c80c-424e-bf20-6b4b1d0cb3ff>\",\"WARC-IP-Address\":\"104.28.17.231\",\"WARC-Target-URI\":\"http://cbat.coroilcontrappunto.it/calculating-wave-speed-frequency-and-wavelength-worksheet-answers.html\",\"WARC-Payload-Digest\":\"sha1:EW2PKEST4WRVIHDBOS2XR2A35MAJQUGH\",\"WARC-Block-Digest\":\"sha1:3I773F2KVIBYNIFL7MD6VSMNIBIMAQZI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141194982.45_warc_CC-MAIN-20201128011115-20201128041115-00476.warc.gz\"}"}
https://www.crazy-numbers.com/en/47147
[ "# Everything about number 47147\n\nDiscover a lot of information on the number 47147: properties, mathematical operations, how to write it, symbolism, numerology, representations and many other interesting things!\n\n## Mathematical properties of 47147\n\nQuestions and answers\nIs 47147 a prime number? Yes\nIs 47147 a perfect number? No\nNumber of divisors 2\nList of dividers 1, 47147\nSum of divisors 47148\nPrime factorization 47147\nPrime factors 47147\n\n## How to write / spell 47147 in letters?\n\nIn letters, the number 47147 is written as: Forty-seven thousand hundred and forty-seven. And in other languages? how does it spell?\n\n47147 in other languages\nWrite 47147 in english Forty-seven thousand hundred and forty-seven\nWrite 47147 in french Quarante-sept mille cent quarante-sept\nWrite 47147 in spanish Cuarenta y siete mil ciento cuarenta y siete\nWrite 47147 in portuguese Quarenta e sete mil cento quarenta e sete\n\n## Decomposition of the number 47147\n\nThe number 47147 is composed of:\n\n2 iterations of the number 4 : The number 4 (four) is the symbol of the square. It represents structuring, organization, work and construction.... Find out more about the number 4\n\n2 iterations of the number 7 : The number 7 (seven) represents faith, teaching. It symbolizes reflection, the spiritual life.... Find out more about the number 7\n\n1 iteration of the number 1 : The number 1 (one) represents the uniqueness, the unique, a starting point, a beginning.... Find out more about the number 1\n\n## Mathematical representations and links\n\nOther ways to write 47147\nIn letter Forty-seven thousand hundred and forty-seven\nIn roman numeral\nIn binary 1011100000101011\nIn octal 134053\nIn hexadecimal b82b\nIn US dollars USD 47,147.00 (\\$)\nIn euros 47 147,00 EUR (€)\nSome related numbers\nPrevious number 47146\nNext number 47148\nNext prime number 47149\n\n## Mathematical operations\n\nOperations and solutions\n47147*2 = 94294 The double of 47147 is 94294\n47147*3 = 141441 The triple of 47147 is 141441\n47147/2 = 23573.5 The half of 47147 is 23573.500000\n47147/3 = 15715.666666667 The third of 47147 is 15715.666667\n471472 = 2222839609 The square of 47147 is 2222839609.000000\n471473 = 104800219045523 The cube of 47147 is 104800219045523.000000\n√47147 = 217.13359942671 The square root of 47147 is 217.133599\nlog(47147) = 10.761025659314 The natural (Neperian) logarithm of 47147 is 10.761026\nlog10(47147) = 4.6734540634594 The decimal logarithm (base 10) of 47147 is 4.673454\nsin(47147) = -0.89968508043657 The sine of 47147 is -0.899685\ncos(47147) = -0.43653952402942 The cosine of 47147 is -0.436540\ntan(47147) = 2.0609475910271 The tangent of 47147 is 2.060948" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6466706,"math_prob":0.8812522,"size":2203,"snap":"2021-21-2021-25","text_gpt3_token_len":698,"char_repetition_ratio":0.17598909,"word_repetition_ratio":0.03021148,"special_character_ratio":0.4543804,"punctuation_ratio":0.14009662,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99455225,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-24T00:28:23Z\",\"WARC-Record-ID\":\"<urn:uuid:55836044-44ec-4ec9-af85-c3dcfbfa8a30>\",\"Content-Length\":\"27779\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d8ca2062-6fc9-4793-9581-95c1fd6db105>\",\"WARC-Concurrent-To\":\"<urn:uuid:71ea7a9b-c0a9-4662-837f-7770385033d4>\",\"WARC-IP-Address\":\"128.65.195.174\",\"WARC-Target-URI\":\"https://www.crazy-numbers.com/en/47147\",\"WARC-Payload-Digest\":\"sha1:KR5C56DV63MIO2LKSYVQXDBJDFKP6N33\",\"WARC-Block-Digest\":\"sha1:EKWFN7PGPY5JWMT3HFUKR2WR26E2E32K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488544264.91_warc_CC-MAIN-20210623225535-20210624015535-00096.warc.gz\"}"}
https://oak.go.kr/central/journallist/journaldetail.do?article_seq=10717
[ "", null, "", null, "PDF\nOA 학술지\nDirectional Emission from Photonic Crystal Waveguide Output by Terminating with CROW and Employing the PSO Algorithm\n•", null, "•", null, "ABSTRACT\n\nWe have designed two photonic crystal waveguide (PCW) structures with output focused beams in order to achieve more coupling between photonic devices and decrease the mismatch losses in photonic integrated circuits. PCW with coupled resonator optical waveguide (CROW) termination has been optimized by both one dimensional (1D) and seven dimensional (7D) particle swarm optimization (PSO) algorithms by evaluating the fitness function by the finite difference time domain (FDTD) method. The 1D and 7Doptimizations caused the factors of 2.79 and 3.875 improvements in intensity of the main lobe compared to the non-optimized structure, whereas the FWHM in 7D-optimized structure was increased, unlike the 1D case. It has also been shown that the increment of focusing causes decrement of the bandwidth.\n\nKEYWORD\nPhotonic crystal waveguide , CROW , FDTD , Resonator , PSO , (230.0230) Optical devices , (130.3120) Integrated optics devices , (060.4510) Optical communications , (140.4780) Optical resonators\n• ### I. INTRODUCTION\n\nPhotonic crystals (PCs) [1, 2] have attracted increasing attention in the past decade, due to their unique properties and potential applications in wavelength-scale photonic integrated circuits (PICs) [3-5]. These structures are artificial dielectric or metallic periodic structures in which the refractive index modulation causes stop bands for waves within a certain frequency band. These crystals have many applications because of their ability to control wave propagation.The greatest motivation behind these investigations has been the promise that they hold for miniaturizing photonic circuits .\n\nLocal defects in PCs introduce defect modes within the photonic band gaps (PBGs) . Thus, a point and a line defect can act as a micro-cavity and a waveguide, respectively.The resonant frequency of a defect mode is shifted by changing the size or the shape of the defect .\n\nThe diffraction limit is a basic principle in classical optics. The emission of light from a structure with a subwavelength feature size will spread into a wide angle range. This is an undesirable property in PICs, because it causes mismatch loss [8, 9]. There have been some methods to obtain beam shaping effects in PCWs [10-23]. One of the methods is to reduce the radii of the surface rods in a PCW in order to create non-radiative surface modes, and then adding a periodic modulated layer of the interface cylinders to support the leaky surface modes . Another method is the reshaping of the surface rods to achieve non-radiative surface modes, and then, for changing the modes to radiative ones, a PC grating layer with a row of rods with the same shape PC rods but different lattice constant is added. In this beam shaping mechanism, the added layers help to induce the electromagnetic waves at the surface to radiate their energy into a beam just a few degrees wide . A directional emitter can also be achieved by covering the termination of a PCW by a self-collimating PC, in which the interference of the multiple self-collimated beams excited by the waveguide reshape the output beam. Also, in another scheme, a multimode (MM) PCW is terminated by a waveguide array. The output of the MMPCW,regarded as a secondary source, splits into two beams to be launched to the waveguide array. 
As a result, many split light beams can be generated in the waveguide array by coupling among the waveguides. Hence the interference of these light beams after passing through the system leads to the desired directional emission.

Adding a CROW to the end of the PCW can collimate the PCW output lightwave over a wide bandwidth range. These CROWs produce resonant modes, which radiate from the PC structure. These resonant modes and the lightwave emitted from the PCW can be regarded as radiating sources that interfere in free space. The total output field is determined by the vector addition of the fields radiated by the individual sources. To provide very directional patterns, it is required that the fields from the elements of the array interfere constructively in the desired direction and interfere destructively in the remaining areas. The interference produces a directional emitted beam. The mechanism providing a large operational bandwidth is the lower Q factor of the CROWs near the termination surface, and hence the higher radiation of the resonators [19, 24].

In addition, researchers have introduced structures for achieving off-axis directional beaming by engineering the PC surface layer [28, 29]. They have also presented structures that split the output beam of a PC into more than one beam in various desired directions, for which the output beam properties such as angles, intensities and FWHMs are related to the engineering of the surface layer of the PCs [13, 30, 31]. These structures are very useful beam splitters in PICs for driving output lightwaves to be launched into photonic devices with more than one input, or into several devices.

However, due to the impedance mismatch and reflection at the termination of PCWs, some of the reported directional emissions are inefficient and are based on intuition and trial and error without any genuine optimization. Researchers have therefore tried to improve the efficiency of PCW directional emission [14-35].

In many of the proposed methods, a surface layer is formed to excite the non-radiative surface modes, and then another layer is added to convert these modes to radiative ones. The performance of the directional emission depends mainly on the coupling efficiency between the PCW and the surface modes [15, 33]. Also, the coupling efficiency is sensitive to surface layer parameters such as radius, refractive index, and lattice period. Further enhancement of the coupling efficiency is possible if one can optimize the period of the surface cylinders simultaneously with the other parameters.

Some optimization methods have been used to obtain the optimized parameters of the termination structure of PCWs and so achieve the most improved directional emission. A PCW with a grating-like surface, added for highly efficient directional emission, has been optimized by the genetic algorithm (GA) method. The interference of the lightwaves emitted from the output of the waveguide and the modes of the grating-like surface is believed to produce the directional beaming of these structures.

One of the powerful algorithms that can be employed for optimization of multidimensional problems of this kind, especially in the domain of computational electromagnetics, is the particle swarm optimization (PSO) method [37-47]. 
Recently, several different PC structures have been optimized by using the PSO algorithm to evaluate a fitness function [48, 49].

In this paper, we have used the PSO algorithm to optimize the parameters of a CROW at the output surface of the PCW structure to obtain a powerful directional beam. The paper is structured as follows. In section II, the details of the structure and the simulation space are described. In section III, the PSO algorithm is explained. In section IV, the beam shaping effects of the non-optimized, 1D-PSO optimized and 7D-PSO optimized CROW are demonstrated, and the FWHM and bandwidth of each structure are derived. The paper is concluded in section V.

### II. DETAILS OF STRUCTURE AND SIMULATION SPACE

We have considered a PC structure with a square lattice of dielectric rods in air, as shown in FIG. 1. The relative dielectric constant of the rods is 11.56, corresponding to that of InGaAsP-InP semiconductor material at 1.55 μm wavelength, and the rod cross-sectional diameter is chosen to be 0.36a, where a is the PC lattice constant. For TM polarization, with the electric field Ez parallel to the rods' axis and the magnetic field perpendicular to it with components Hx and Hy, the PBG of the PC structure, derived by either broadband Gaussian pulse excitation or the plane wave expansion (PWE) method, lies in the normalized frequency (a/λ = ωa/2πc) range of 0.306-0.439, where ω is the angular frequency and c is the speed of light in free space [19, 24]. All the results are presented for TM polarization and have been obtained with the 2D-FDTD method with perfectly matched layer boundary conditions. In the simulations we have used 31×11 dielectric rods in free space.", null, "We have considered the spatial step in the FDTD method to be Δx = d/10, where d is the cross-sectional diameter of the rods. We have created a waveguide by removing one row of rods along the x-direction, and a CROW by introducing defects at regular intervals of 2a at the PCW termination through changing the diameter of the central rod of each resonator. In this structure each resonator can radiate into free space. The total field of the array is determined by the vector summation of the radiated fields of the resonators, the result of which is the radiation pattern of the structure. The excitation point, the waveguide, the CROW with defect rod diameter d = 0, and the target plane (used in the PSO fitness function, through which the power flow has to be maximized) are depicted in FIG. 1. The target plane is defined at a distance of Da from the waveguide exit, where D is a constant number, within an angle of θ degrees, as shown in FIG. 1.

The incident lightwave signal is a modulated Gaussian,

s(t) = exp[−((t − T0)/σ)²] sin(2πf0t),

where f0 is the resonant frequency of the resonators with defect diameter d = 0, t is time, T0 is the time of the peak and σ is the width controller of the Gaussian pulse.

In this paper, the FDTD software was prepared in C++ and MATLAB and was executed on a Pentium IV quad-core CPU computer with a processing capacity of 2.84 GHz and 3.25 GB of RAM.

### III. AN OVERVIEW OF THE PARTICLE SWARM OPTIMIZATION METHOD

PSO is a stochastic evolutionary optimization method based on the movement and intelligence of swarms, proposed first by Kennedy and Eberhart.

In this method, a population of potential solutions to the problem under consideration is used to probe the search space. Each particle adjusts its movement according to its own and its companions' movement experiences. This process is continued until the best solution is achieved. 
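Before detailing the PSO mechanics, a minimal sketch of the modulated-Gaussian excitation defined in section II, assuming the envelope form exp[−((t − T0)/σ)²] given above (the precise equation is reconstructed from the symbol definitions in the text, and the numeric values below are purely illustrative, not taken from the paper):

```python
# Modulated-Gaussian source s(t) = exp(-((t - T0)/sigma)**2) * sin(2*pi*f0*t).
# f0, T0 and sigma are the quantities defined in section II; values are
# illustrative normalized units, not the paper's.
import math

def source(t, f0, t0, sigma):
    envelope = math.exp(-((t - t0) / sigma) ** 2)   # Gaussian envelope
    return envelope * math.sin(2.0 * math.pi * f0 * t)

f0, t0, sigma, dt = 0.41, 3.0, 1.0, 0.005           # assumed values
samples = [source(n * dt, f0, t0, sigma) for n in range(2000)]
print(max(samples), min(samples))                   # peak excursions of s(t)
```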
Both the genetic algorithm (GA) and PSO are similar in the sense that both are population-based search methods that search for the optimal solution by updating generations. Unlike GA, PSO has no evolution operators such as crossover and mutation. Also, GA and PSO employ different strategies and computational efforts. PSO has been demonstrated to be superior to the genetic algorithm for certain difficult optimization problems. Furthermore, compared to the genetic algorithm, PSO is easier to implement and has fewer parameters to be controlled. Moreover, researchers have achieved improved and simplified models of the PSO algorithm, for example local PSO and Boolean PSO. However, specialists are still trying to find more simplified models. PSO has already given some promising results in the domain of photonics in particular and electromagnetics in general [44-51].\n\nIn each PSO problem a “fitness function” is defined to guide the particles through the solution space to the position where the fitness function has its target value. Each particle is treated as a mass-less and volume-less point in a D-dimensional space. The ith particle is represented as xi = (xi1, xi2, …, xiD). A “best position” is a place where the fitness function has a value closest to its final desired value. The best previous position of the ith particle, the one which gives the best fitness value for that particle, is named the “personal best position” and is represented as pbesti = (pi1, pi2, …, piD). The best position among all the particles in the population is named the “global best position” and is represented by gbest = (g1, g2, …, gD); the global best position does not have the subscript identifying the particle. Velocity, the rate of position change of the ith particle, is represented as Vi = (Vi1, Vi2, …, ViD). At every iteration, the velocity and the position of each particle are updated by using the two best values according to the following equations:\n\nVid(k+1) = ω·Vid(k) + c1·rand1( )·(pid − xid(k)) + c2·rand2( )·(gd − xid(k))   (2)\n\nxid(k+1) = xid(k) + Vid(k+1)   (3)\n\nwhere k is the iteration number, d = 1, 2, …, D, i = 1, 2, …, N, and N is the size of the population (swarm). c1 and c2 are two positive values called acceleration constants; rand1( ) and rand2( ) are two independent random numbers, uniformly distributed between 0 and 1, that are used to stochastically vary the relative attraction of pbesti and gbest. ω is the inertial weight, a constant acting as the inertia of the particle, which determines how the velocities of the particles in the (k+1)th iteration are affected by the velocities in the kth iteration. The inertial weight improves the performance of the PSO algorithm [41, 49]. As the iteration count increases, each particle in the swarm is progressively guided to the position where the fitness function has its desired value. One of the most important issues encountered during the PSO implementation is the ability to control the search space of the swarm. Without any boundary or limit on the velocity, particles could essentially fly out of the physically meaningful solution space. One approach to solving this problem is to simply assign a maximum allowed velocity, Vmax. It has been found that without inertial weight (ω = 1), a Vmax of around 10-20% of the dynamic range of each dimension works best.
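To make the update rule concrete, here is a minimal sketch of one PSO iteration in Python. It only illustrates Eqs. (2) and (3); it is not the authors' C++/MATLAB implementation, and fitness is a placeholder for whatever evaluates a candidate position (in this paper, a full 2D-FDTD simulation of the structure):\n\n```python\nimport random\n\n# One PSO iteration following Eqs. (2) and (3), maximizing fitness,\n# with every velocity component clamped to [-vmax, +vmax].\ndef pso_step(positions, velocities, pbest, pbest_vals, gbest, fitness,\n             w=0.9, c1=1.5, c2=1.5, vmax=0.2):\n    for i, x in enumerate(positions):\n        for d in range(len(x)):\n            r1, r2 = random.random(), random.random()\n            v = (w * velocities[i][d]\n                 + c1 * r1 * (pbest[i][d] - x[d])        # pull toward personal best\n                 + c2 * r2 * (gbest[d] - x[d]))          # pull toward global best\n            velocities[i][d] = max(-vmax, min(vmax, v))  # clamped Eq. (2)\n            x[d] += velocities[i][d]                     # position update, Eq. (3)\n        val = fitness(x)\n        if val > pbest_vals[i]:                          # update the personal best\n            pbest_vals[i] = val\n            pbest[i] = x[:]\n    best = max(range(len(pbest)), key=lambda i: pbest_vals[i])\n    return pbest[best][:]                                # new global best position\n```\n\nIn the structures studied here the fitness evaluation dominates the total cost, since every call corresponds to a complete FDTD run; this is why the optimizations reported below take hundreds of hours.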
We hoped that the introduction of inertial weight would negate the need for Vmax; however, we noticed that the PSO performs better if Vmax in each dimension is set equal to the dynamic range of that dimension.\n\nSome experimental results have demonstrated that the global best model converges quickly on problem solutions but has a weakness for becoming trapped in local optima, while the local best model converges slowly on problem solutions but is able to “flow around” local optima, as the individuals explore different regions. The global best model is strongly recommended for single-modal objective functions, while a variable neighborhood model is recommended for multi-modal objective functions. For the local best model, Eqs. (2) and (3) do not alter significantly; the subscript g (global) is simply changed to l (local).\n\nWhen the solution space is discrete, Boolean PSO is preferred, and Eqs. (2) and (3) change by replacing additions and products with exclusive OR and AND operations, respectively. A full description of Boolean PSO and conventional PSO can be found in [44, 50].\n\n### IV. EFFECTS OF PHOTONIC CRYSTAL WAVEGUIDE SURFACE PARAMETERS ON OUTPUT POWER\n\nWe consider two PC waveguides with the parameters described above, without and with the CROW surface structure with defect rod diameter of d=0. We launch an impulse lightwave signal, a very narrow Gaussian source whose frequency spectrum covers the total band-gap of the PC, into the PCW. FIGs. 2a and 2b show the spectra of the total x-directed Poynting vector at the target plane of the structures at a distance of 35a (D=35) and the angle of θ = 4°, as depicted in FIG. 1. The spectra have been obtained by discrete Fourier transform (DFT) of the electric and magnetic fields and the Poynting vector. The spectra are depicted in the normalized frequency range of 0.3-0.44, coinciding with the band-gap of the PC. As we can see, and have explained previously, the CROW in the PCW surface has caused the beam shaping effect, and therefore more power is received by the target plane over the whole frequency range. As shown, the maximization of the x-directed Poynting vector by means of the CROW is most pronounced in a specific range of normalized frequency, say 0.4-0.42, because of the maximum coupling between the waveguide and the resonators and the minimum quality factor (Q) of the resonators in this range. The resonators’ lightwaves can radiate to free space, interfere with the waveguide output, cause the beam shaping effect and increase the power transmission to the target plane in this", null, "specified bandwidth. FIGs. 2c and 2d show the electric field distributions at the normalized frequencies of 0.334 and 0.410, for which the maximum power is received by the target plane from the PC waveguides without and with the CROW surface, respectively. As illustrated, the x-directed Poynting vector received by the target plane from the PC structure with the CROW surface is 3.044 times higher than that of the structure without the CROW.\n\n### 4.1. Optimization of the CROW Cavity Rods with the Same Diameter\n\nFurther improvement of the directional emission of the PCW with CROW structures is possible by varying the diameters of the defect rods. First, we assume the same defect rod diameter for all resonators and try to optimize this diameter with a 1D-PSO algorithm. This gives a one-dimensional solution space that can be searched for the optimum solution by using the PSO algorithm.
Since the purpose of the optimization process is to increase the PCW directional emission, we choose the fitness function as the power (x-directed Poynting vector) received by the target plane located at the distance 35a (D=35) and the angle of θ = 4° from the PCW output (FIG. 1). Maximization of the power received by the target plane is the aim of the optimization.\n\nIn this optimization process the same diameter of 0.21a has been derived for all the defect rods. FIG. 3a shows the spectrum of the total x-directed Poynting vector received by the target plane of this optimized structure. As we can see, the normalized frequency for which the maximum power is received by the target plane has shifted to 0.344. Since the frequencies are normalized to the lattice constant a, manufacturers can choose a suitable value of a to shift the maximum power wavelength to a desired wavelength, say 1.55 ㎛. The maximum power received by the target plane is 2.79 times higher than that of the previous non-optimized structure.\n\nIn exchange for the greater power received by the target plane and the more intensive beam, the normalized bandwidth of the beam shaping effect decreased from 0.014 to 0.0052, less than half its previous amount, which can be deduced by comparing FIGs. 2b and 3a. FIG. 3b demonstrates the electric field distribution at the normalized frequency of 0.344. The stronger beam focusing compared to that of the previous structure is obvious.\n\nOptimization of other parameters of the defect rods, such as the dielectric constant, was not carried out because of the probability of obtaining impractical results.\n\nIn this 1D-PSO algorithm we have chosen 80 particles for the global best algorithm, with acceleration constants c1 = c2 = 1.5, an inertial weight descending linearly from 0.95 to 0.2, Vmax = 0.2 and ε = 10⁻³. When the condition gbest(k) − gbest(k−1) < ε was satisfied, the PSO algorithm was stopped. Twenty PSO iterations were sufficient to meet this condition.\n\nThe execution time for the optimization was 148 hours with the computer described in section II.", null, "### 4.2. Optimization of the CROW Cavity Rods with Different Diameters\n\nOptimization of the combination of the diameters of the CROW rods can give better results in the beam focusing effect. Therefore, we have numbered the defect rods consecutively from the top of the waveguide, as shown in FIG. 4. Since this structure has seven resonators on each side of the waveguide and the structure is symmetric, we have seven defect rod diameters to be optimized. So, we have a seven-dimensional (7D) solution space that can be searched for the optimum solution by using a 7D-PSO algorithm. The fitness function is the same as that of the previous subsection. The optimization results for the diameters of the defect rods are given in TABLE 1. FIG. 5a depicts the spectrum of the total x-directed Poynting vector received by the target plane of this optimized structure. The maximum power is received at the normalized frequency of 0.344. The electric field distribution at this frequency is shown in FIG. 5b. The 7D-PSO optimized structure has produced 3.875 and 1.389 times improvement in the power received by the target plane compared to the previous non-optimized and 1D-PSO optimized structures, respectively.", null, "", null, "The angle θ in the fitness function depicted in FIG. 1 was chosen to be 4°. As can be intuitively deduced from the electric field distributions in FIGs.
3b and 5b, if we had chosen a higher θ, the improvement in the power received by the target plane would have been higher than 1.389 compared to the 1D-PSO case. In exchange for the higher power received by the target plane and the more intensive beam, the normalized frequency bandwidth of the beam shaping effect decreased from 0.0052 to 0.0023, less than half its previous amount. The higher directivity and intensity caused the lower bandwidth, because the directionality and intensity depend strongly on the resonators of the optimized structure. Comparison of FIGs. 2b, 3a and 5a confirms this effect.\n\nAlthough the global best model converges quickly on problem solutions, in this 7D-PSO algorithm it was not appropriate because of trapping in local optima. So, we divided the solution space of the PSO into six different exploration regions in order to use the local PSO algorithm. In each region we had 70 particles with acceleration constants c1 = c2 = 1.5, an inertial weight descending linearly from 0.95 to 0.4, Vmax = 2 and ε = 10⁻³. When the conditions lbest(k) − lbest(k−1) < ε and iteration count k > 25 were simultaneously satisfied, the PSO algorithm was stopped.\n\nIn both the 1D and 7D-PSO algorithms, boundary values were assigned for the defect rods' diameters and the particles’ velocities. The diameters of the defect rods must vary from 0 to 1.64a (2a − 0.36a = 1.64a) and the velocities of the particles must lie in the range [−Vmax, +Vmax].\n\nThe PSO with 25 iterations was sufficient for each of", null, "the explored regions, which were optimized simultaneously. The execution time for the optimization of each region was about 430 hours with the computer described in section II.\n\nFIG. 6 shows the polar diagram of the normalized electric field pattern in the azimuthal plane at a distance of 35a from the output of the PCW for the four analyzed and simulated structures. We can see the beam focusing improvement at each step of the above cases. In each step, the power at angles outside the specific narrow angle in front of the PCW decreases, and this power is transferred to the main lobe lying within this angle. So, the main lobe becomes very intensive.\n\nTo compare the FWHM of the output beams, we measure the Poynting vector in the radial direction of a cylindrical coordinate system, over the view angle range of −25° to +25° in free space shown in FIG. 7a; the results are illustrated in FIG. 7b. Each diagram in this figure has been plotted for the normalized frequency at which the maximum power is received by the target plane of FIG. 1 for each structure. The red (solid), green (dashed) and blue (dotted) lines describe the power patterns at the normalized frequencies of 0.344, 0.344 and 0.410 for the 7D-PSO optimized, 1D-PSO optimized and non-optimized CROW surface structures, respectively. The blue (dotted) diagram belongs to the structure without optimization, for which the defect rod diameter is d=0. Its FWHM is 10°. Implementation of the 1D-PSO algorithm has produced the green (dashed) diagram with", null, "", null, "an FWHM of 7°. This FWHM shows a 1.43-fold decrease compared to the non-optimized structure. The increased intensity and beam shaping effect are significant in this structure. For the 7D-PSO algorithm of the red (solid) diagram, all the minor lobes have been eliminated and the main lobe has become very intensive, but the FWHM has increased to 14°, i.e. 1.4 and 2 times larger than the FWHM of the non-optimized and 1D-PSO optimized structures, respectively.
Although the FWHM is higher than in the previous cases, the power distribution is very significant in the main lobe and very low (approximately zero) at angles outside the main lobe.\n\nTo study the intensity, FWHM and bandwidth", null, "of the beam shaping of the PCW output further, we launch similar pulses into the non-optimized, 1D-PSO optimized and 7D-PSO optimized CROW surface structures. FIGs. 8a-8c depict the spectra of the r-directed Poynting vector at points at various angles from 0° to 25° on the arc of FIG. 7a for the three cases of the CROW surface structure. The measured powers at the angle of 0° of the 1D-PSO optimized and 7D-PSO optimized CROW surface structures increase compared to the non-optimized one, which means that the radiated beams have become more focused around the 0° angle.\n\nThe ratios of the intensities at the angles of 5° and 0° for the non-optimized, 1D-PSO optimized and 7D-PSO optimized beams at their maximum power transfer frequencies are 0.5, 0.21 and 0.72, respectively, which confirms that the FWHMs of the 1D-PSO optimized and 7D-PSO optimized beams are the smallest and the largest, respectively.\n\nFIGs. 8a-8c also show the frequency bandwidth of the increased intensity at the 0° detector. They confirm the lower frequency bandwidth of the beam focusing for the more intensive output structure.\n\n### V. CONCLUSION\n\nIn this paper, we have designed two photonic crystal waveguide (PCW) structures with focused output beams in order to achieve more coupling between photonic devices and decrease the mismatch losses in PICs. We have used the particle swarm optimization (PSO) algorithm for a PCW terminated by a CROW structure. We could focus some of the power of the minor lobes into the main lobe by one-dimensional (1D) optimization of a common resonator rod diameter. In this optimized structure the intensity calculated by the fitness function of the PSO increases and the FWHM of the main lobe decreases, but some undesirable minor lobes remained that should be eliminated. So, we used a 7D-PSO algorithm to optimize the individual diameters of the seven resonators' rods. The power of the minor lobes became very low and most of the power was focused into the main lobe, so the radiated power of the main lobe became more intensive; however, the FWHM of the main lobe increased compared to the non-optimized and 1D-optimized structures. We have also shown that higher directivity and intensity cause lower bandwidth, because the directionality and intensity heavily depend on the resonators in the optimized structure. The method can be extended to the simulation and optimization of photonic crystal power splitters and multi/demultiplexers.\n\n• [ FIG. 1. ] Waveguide structure, CROW with defect rod diameter of d=0 at the surface, excitation point and the target plane used as the fitness function in PSO, through which the power flow has to be maximized.", null, "• [ FIG. 2. ] Spectra of the total x-directed Poynting vector received by the target plane at a distance of 35a (D=35) and the angle of θ = 4° depicted in FIG. 1, for (a) a PCW without CROW surface layer and (b) a PCW with non-optimized CROW surface layer with defect rod diameter of d=0. Electric field distributions at the frequency of maximum power transfer for (c) the first structure at the normalized frequency of 0.334 and (d) the second structure at the normalized frequency of 0.410. These two frequencies are obtained from the peaks of the spectra (a) and (b), respectively.", null, "• [ FIG. 3.
] (a) Spectrum of the total x-directed Poynting vector received by the target plane for a PCW with 1D-PSO optimized CROW surface layer, (b) electric field distribution of the structure at the normalized frequency of maximum power transfer of 0.344, which is obtained from the peak of the spectrum (a).", null, "• [ FIG. 4. ] The defect rods are consecutively numbered from the top of the waveguide as the 7D-PSO algorithm parameters.", null, "• [ TABLE 1. ] Optimized diameters of the defect rods of the structure of FIG. 4, obtained by the 7D-PSO algorithm.", null, "• [ FIG. 5. ] (a) Spectrum of the total x-directed Poynting vector received by the target plane for a PCW with 7D-PSO optimized CROW surface layer depicted in FIG. 4, (b) electric field distribution of the structure at the normalized frequency of maximum power transfer of 0.344, which is obtained from the peak of the spectrum (a).", null, "• [ FIG. 6. ] Polar diagram of the normalized electric field pattern in the azimuthal plane at a distance of 35a from the output of the PCW for the four waveguide structures: 7D-optimized, 1D-optimized, without optimization, and without CROW surface structure.", null, "• [ FIG. 7. ] (a) An arc with radius 35a (D=35) and angle of θ = 50° in front of the waveguide, (b) the r-directed Poynting vector over the arc for the 7D-PSO optimized (red solid line), 1D-PSO optimized (green dashed line) and non-optimized CROW surface structure (blue dotted line).", null, "• [ FIG. 8. ] Spectra of the r-directed Poynting vector obtained by the detectors at angles of 0° to 25° with angle intervals of 5° for (a) the non-optimized, (b) the 1D-PSO optimized, and (c) the 7D-PSO optimized CROW surface structure.", null ]
[ null, "https://oak.go.kr/central/images/n2021/sub-top-search.png", null, "https://oak.go.kr/central/images/n2021/sub-t-menu.png", null, "https://oak.go.kr/central/images/2015/cc_img.png", null, "https://oak.go.kr/central/images/2015/cc_img.png", null, "http://oak.go.kr//repository/journal/10717/E1OSAB_2011_v15n2_187_f001.jpg", null, "http://oak.go.kr//repository/journal/10717/E1OSAB_2011_v15n2_187_f002.jpg", null, "http://oak.go.kr//repository/journal/10717/E1OSAB_2011_v15n2_187_f003.jpg", null, "http://oak.go.kr//repository/journal/10717/E1OSAB_2011_v15n2_187_f004.jpg", null, "http://oak.go.kr//repository/journal/10717/E1OSAB_2011_v15n2_187_t001.jpg", null, "http://oak.go.kr//repository/journal/10717/E1OSAB_2011_v15n2_187_f005.jpg", null, "http://oak.go.kr//repository/journal/10717/E1OSAB_2011_v15n2_187_f006.jpg", null, "http://oak.go.kr//repository/journal/10717/E1OSAB_2011_v15n2_187_f007.jpg", null, "http://oak.go.kr//repository/journal/10717/E1OSAB_2011_v15n2_187_f008.jpg", null, "http://oak.go.kr/repository/journal/10717/E1OSAB_2011_v15n2_187_f001.jpg", null, "http://oak.go.kr/repository/journal/10717/E1OSAB_2011_v15n2_187_f002.jpg", null, "http://oak.go.kr/repository/journal/10717/E1OSAB_2011_v15n2_187_f003.jpg", null, "http://oak.go.kr/repository/journal/10717/E1OSAB_2011_v15n2_187_f004.jpg", null, "http://oak.go.kr/repository/journal/10717/E1OSAB_2011_v15n2_187_t001.jpg", null, "http://oak.go.kr/repository/journal/10717/E1OSAB_2011_v15n2_187_f005.jpg", null, "http://oak.go.kr/repository/journal/10717/E1OSAB_2011_v15n2_187_f006.jpg", null, "http://oak.go.kr/repository/journal/10717/E1OSAB_2011_v15n2_187_f007.jpg", null, "http://oak.go.kr/repository/journal/10717/E1OSAB_2011_v15n2_187_f008.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9242044,"math_prob":0.95010906,"size":21511,"snap":"2022-40-2023-06","text_gpt3_token_len":4792,"char_repetition_ratio":0.15836704,"word_repetition_ratio":0.063376166,"special_character_ratio":0.20640603,"punctuation_ratio":0.09242424,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9710589,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-07T23:24:57Z\",\"WARC-Record-ID\":\"<urn:uuid:266007bf-f8f4-4d68-bfa8-c2e2295de574>\",\"Content-Length\":\"237315\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c08218b7-360e-4c34-9a1d-540b993a72ea>\",\"WARC-Concurrent-To\":\"<urn:uuid:a7284083-d093-40f3-bf96-0e4b5962bda0>\",\"WARC-IP-Address\":\"124.137.58.137\",\"WARC-Target-URI\":\"https://oak.go.kr/central/journallist/journaldetail.do?article_seq=10717\",\"WARC-Payload-Digest\":\"sha1:BEMI2RQTMSBBEV7PG6ETK3VBCQDBRVHR\",\"WARC-Block-Digest\":\"sha1:6PBBS4TOP3ESAZMJAJPQUW4L74WC5OUY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030338280.51_warc_CC-MAIN-20221007210452-20221008000452-00164.warc.gz\"}"}
https://math.stackexchange.com/questions/3760779/increasing-convergence-of-sequence-bounded-below
[ "# Increasing convergence of sequence bounded below.\n\nAssume that you have a measurespace $$(A,\\mathcal{A},\\mu)$$. And you sequence of measurable functions $$f_n \\rightarrow \\mathbb{R}$$, that are increasing, and each function is bounded below by a common value $$-M$$.\n\nDo we then have that $$\\lim\\limits_{n \\rightarrow \\infty}\\int\\limits_{A}f_n(x)d\\mu=\\int\\limits_{A}\\lim\\limits_{n \\rightarrow \\infty}f_nn(x)d\\mu$$?\n\nI am able to prove this for a finite measure space by considering the non-negative and increasing sequence $$\\{f_n+M\\}$$ and using the monotone convergint theorem. But does it hold for a measure-space with infinite measure?\n\nThe reason I don't get it to work with a general measure space is that the integral of the constant function $$M$$ may not be finite, so I get in a situation where I can't cancel the parts.\n\nNot true. On the real line with Lebesgue measure let $$f_n(x)=-1$$ for $$x \\geq n$$ and $$0$$ for $$x . Then $$f_n \\geq -1, (f_n)$$ is increasing and $$\\lim \\int f_n=-\\infty \\neq 0 =\\int \\lim f_n$$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8625073,"math_prob":0.9999666,"size":764,"snap":"2021-43-2021-49","text_gpt3_token_len":205,"char_repetition_ratio":0.12368421,"word_repetition_ratio":0.0,"special_character_ratio":0.2486911,"punctuation_ratio":0.07333333,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999998,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T09:23:51Z\",\"WARC-Record-ID\":\"<urn:uuid:eeea1255-a617-4897-bef6-1e725b30f99e>\",\"Content-Length\":\"164194\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:67d3c255-8c04-4968-b57b-b72c2b802f6c>\",\"WARC-Concurrent-To\":\"<urn:uuid:6e035206-4afc-42a9-85ec-329495dc3e00>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3760779/increasing-convergence-of-sequence-bounded-below\",\"WARC-Payload-Digest\":\"sha1:OGQUY6PLH2TBRU3LWUOA4HWBXLWF24TW\",\"WARC-Block-Digest\":\"sha1:BXMSFVJSRVP5DK5TJ3FHXWI6J2K6I7JT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588282.80_warc_CC-MAIN-20211028065732-20211028095732-00382.warc.gz\"}"}
https://www.meta-religion.com/Mathematics/Biography/archimedes.htm
[ "# Archimedes\n\nBorn: 298 BC in Syracuse, Sicily\n\nDied: 212 BC in Syracuse, Sicily\n\nShort Biography", null, "Archimedes was a great mathematician of ancient times. His greatest contributions were in geometry. He also spent some time in Egypt, where he invented the machine now called Archimedes' screw, which was a mechanical water pump. Among his most famous works is Measurement of the Circle, where he determined the exact value of pi between the two fractions, 3 10/71 and 3 1/7. He got this information by inscribing and circumscribing a circle with a 96-sided regular polygon.\n\nArchimedes made many contributions to geometry in his work on the areas of plane figures and on the areas of area and volumes of curved surfaces. His methods started the idea for calculus which was \"invented\" 2,000 years later by Sir Isaac Newton and Gottfried Wilhelm von Leibniz. Archimedes proved that the volume of an inscribed sphere is two-thirds the volume of a circumscribed cylinder. He requested that this formula/diagram be inscribed on his tomb.", null, "His works (that survived) include:\n\n• Measurement of a Circle\n• On the Sphere and Cylinder\n• On Spirals\n• The Sand Reckoner\n\nThe Roman's highest numeral was a myriad (10,000). Archimedes was not content to use that as the biggest number, so he decided to conduct an experiment using large numbers. The question: How many grains of sand there are in the universe? He made up a system to measure the sand. While solving this problem, Archimedes discovered something called powers. The answer to Archimedes' question was one with 62 zeros after it (1 x 1062)..\n\nWhen numbers are multiplied by themselves, they are called powers.", null, "Some powers of two are:\n\n1 = 0 power=20\n\n2 = 1st power=21\n\n2 x 2 = 2nd power (squared)=22\n\n2 x 2 x 2= 3rd power (cubed)=23\n\n2 x 2 x 2 x 2= 4th power=24", null, "There are short ways to write exponents. For example, a short way to write 81 is 34.This is read as three to the fourth power.\n\n• On Plane Equilibriums\n• On Floating Bodies\n\nThis problem was after Archimedes had solved the problem of King Hiero's gold crown. He experimented with liquids. He discovered density and specific gravity." ]
[ null, "https://www.meta-religion.com/Mathematics/Images/archime.jpg", null, "http://tqjunior.thinkquest.org/4116/History/images/Italy.gif", null, "http://tqjunior.thinkquest.org/4116/History/images/power2.gif", null, "http://tqjunior.thinkquest.org/4116/History/images/power3.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9761263,"math_prob":0.85555124,"size":1981,"snap":"2021-43-2021-49","text_gpt3_token_len":490,"char_repetition_ratio":0.1047041,"word_repetition_ratio":0.00591716,"special_character_ratio":0.2392731,"punctuation_ratio":0.10204082,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9823581,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-01T00:16:57Z\",\"WARC-Record-ID\":\"<urn:uuid:fbbd2f09-70ca-47fa-a26f-4d26978783b1>\",\"Content-Length\":\"17379\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2c61e2f0-c730-4fb4-bef9-3beecb75571d>\",\"WARC-Concurrent-To\":\"<urn:uuid:2636fe06-e44a-40f2-9734-71af98a02027>\",\"WARC-IP-Address\":\"208.70.246.193\",\"WARC-Target-URI\":\"https://www.meta-religion.com/Mathematics/Biography/archimedes.htm\",\"WARC-Payload-Digest\":\"sha1:BYUCOSOJ7OMW576RCIUXOZLZ4WREBYDL\",\"WARC-Block-Digest\":\"sha1:MLDQID45WG3G3KK6WHLQL5BWCOHPAW2S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964359082.76_warc_CC-MAIN-20211130232232-20211201022232-00153.warc.gz\"}"}
https://www.electricaltechnology.org/2019/12/difference-between-emf-and-mmf.html
[ "# Difference Between EMF and MMF\n\n## Difference Between Electromotive Force and Magnetomotive Force\n\n### What is Magnetomotive Force (MMF)?\n\nThe pressure required to establish magnetic flux is a ferromagnetic material (material having permeabilities hundreds and thousands time greater than of free space) is known as magnetomotive force (MMF). It is measured in ampere-turns.\n\nWhen current flow through a conductor coil, a force has been produced which drives magnetic lines or flux which is know magnetomotive force or MMF. In other words, a pressure which drives the gametic flux from north pole to the south pole is called MMF (Magnetomotive force).\n\nIn short, a force which is responsible to drive flux in the magnetic circuit (same as electromotive (EMF) which drives electron in an electric circuit)  is known as magnetomotive force .\n\nThe SI unit of MMF is AT (Ampere-Turns) and G (Gilbert) is the CGS unit of magnetomotive force. It is also known as Ohm’s law for magnetic circuits which can be expressed as:\n\nℱ = ΦR\n\nor\n\nF = Hl\n\nor\n\nF = NI\n\nWhere:\n\n• F or ℱ = Magnetomotive force\n• Φ = Magnetic flux\n• R = Reluctance (magnetic resistance) of the circuit\n• H = Magnetizing force (strength of magnetizing field)\n• l = Mean length of solenoid\n• I = Current\n• N = Numbers of coil turns\n\nRelated Post: Difference Between Electric and Magnetic Circuit\n\n### What is Electromotive Force (EMF)?\n\nEMF is the cause and voltage is the effect. Electromotive force (EMF) produces and maintains potential difference or voltage inside an active cell. EMF supplies energy in joules to each unit of coulomb charge. The symbol of EMF is E or ε and the SI unit of electromotive force is V (Volt) same as for voltage.\n\nEMF can be expressed by the following equation:\n\nε or E = W/Q … in Volts\n\nWhere:\n\n• ε or E = Electromotive force in volts\n• W = Work done in joules\n• Q = Charge in Columbus\n\nRelated Post: Difference Between Voltage and EMF?", null, "Related Posts:" ]
[ null, "https://www.electricaltechnology.org/wp-content/ewww/lazy/placeholder-500x269.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91396296,"math_prob":0.98855597,"size":2205,"snap":"2020-45-2020-50","text_gpt3_token_len":536,"char_repetition_ratio":0.17582917,"word_repetition_ratio":0.01058201,"special_character_ratio":0.21088435,"punctuation_ratio":0.061007958,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99270356,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-29T07:38:32Z\",\"WARC-Record-ID\":\"<urn:uuid:fd2487ef-1850-48c9-901f-88cdda0dc205>\",\"Content-Length\":\"182141\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:969ec3d9-a311-4b9d-a872-3014b92b40b9>\",\"WARC-Concurrent-To\":\"<urn:uuid:bca60cf9-86e1-4e90-96cf-4a18cfadb380>\",\"WARC-IP-Address\":\"162.144.37.97\",\"WARC-Target-URI\":\"https://www.electricaltechnology.org/2019/12/difference-between-emf-and-mmf.html\",\"WARC-Payload-Digest\":\"sha1:IDGTH5JNYWVBV2EIYIZKYJU6CRYT2Y4A\",\"WARC-Block-Digest\":\"sha1:QIBHCZUAFHXBEC66LXCHZ2HNSYNGSYK3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107903419.77_warc_CC-MAIN-20201029065424-20201029095424-00298.warc.gz\"}"}
https://artofproblemsolving.com/wiki/index.php?title=2012_AIME_I_Problems/Problem_13&diff=45551&oldid=45539
[ "# Difference between revisions of \"2012 AIME I Problems/Problem 13\"\n\n## Problem 13\n\nThree concentric circles have radii", null, "$3,$", null, "$4,$ and", null, "$5.$ An equilateral triangle with one vertex on each circle has side length", null, "$s.$ The largest possible area of the triangle can be written as", null, "$a + \\frac{b}{c} \\sqrt{d},$ where", null, "$a,$", null, "$b,$", null, "$c,$ and", null, "$d$ are positive integers,", null, "$b$ and", null, "$c$ are relatively prime, and", null, "$d$ is not divisible by the square of any prime. Find", null, "$a+b+c+d.$" ]
[ null, "https://latex.artofproblemsolving.com/2/f/8/2f885f96d2fb7e65f800de4ff71d7dcd0a5d2db5.png ", null, "https://latex.artofproblemsolving.com/4/0/5/4054552638cf9f29e857a1306bd3ec31df3c19b6.png ", null, "https://latex.artofproblemsolving.com/b/8/3/b835f2c6c592edc4583ce996f86bcc0d07ca8da5.png ", null, "https://latex.artofproblemsolving.com/0/9/d/09d9c01a214955f96b174391be96b26abf545cac.png ", null, "https://latex.artofproblemsolving.com/0/0/6/006467ff2a87dff11f57cbba61c0e590c9b77a2a.png ", null, "https://latex.artofproblemsolving.com/7/c/8/7c8acfd7d5ee559262593701b8dbd02e43ad96e3.png ", null, "https://latex.artofproblemsolving.com/5/b/1/5b1d6265e67657b5886ce257671d45ff9c0282eb.png ", null, "https://latex.artofproblemsolving.com/4/2/1/421dbe5ac249bea2c9d25145a7eb9b73644c5c61.png ", null, "https://latex.artofproblemsolving.com/9/6/a/96ab646de7704969b91c76a214126b45f2b07b25.png ", null, "https://latex.artofproblemsolving.com/8/1/3/8136a7ef6a03334a7246df9097e5bcc31ba33fd2.png ", null, "https://latex.artofproblemsolving.com/3/3/7/3372c1cb6d68cf97c2d231acc0b47b95a9ed04cc.png ", null, "https://latex.artofproblemsolving.com/9/6/a/96ab646de7704969b91c76a214126b45f2b07b25.png ", null, "https://latex.artofproblemsolving.com/3/b/e/3bea7cbba4320251df071ca876c849037bd87617.png ", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7094159,"math_prob":0.99998176,"size":1337,"snap":"2020-45-2020-50","text_gpt3_token_len":408,"char_repetition_ratio":0.16729182,"word_repetition_ratio":0.20465116,"special_character_ratio":0.34629768,"punctuation_ratio":0.07539683,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99999464,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,5,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-24T03:45:57Z\",\"WARC-Record-ID\":\"<urn:uuid:f71b35fe-ee94-4a9c-b098-b294fce12e06>\",\"Content-Length\":\"42880\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:754dce1d-e924-4a80-9c38-3bf73ec8cca3>\",\"WARC-Concurrent-To\":\"<urn:uuid:50b7f343-ef72-44f3-9e77-ab6cafddabbc>\",\"WARC-IP-Address\":\"172.67.69.208\",\"WARC-Target-URI\":\"https://artofproblemsolving.com/wiki/index.php?title=2012_AIME_I_Problems/Problem_13&diff=45551&oldid=45539\",\"WARC-Payload-Digest\":\"sha1:C5KGHNC6VIY7DBQJPRQ23ZI7J2G4U57E\",\"WARC-Block-Digest\":\"sha1:E4OFSSU5DF2HW6RVEZHKNRNPGJLD5KZ5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107881640.29_warc_CC-MAIN-20201024022853-20201024052853-00641.warc.gz\"}"}
https://statsidea.com/the-way-to-carry-out-bivariate-research-in-r-with-examples/
[ "# The way to Carry out Bivariate Research in R (With Examples)\n\nThe time period bivariate research refers back to the research of 2 variables. You’ll take note this for the reason that prefix “bi” approach “two.”\n\nThe aim of bivariate research is to know the connection between two variables\n\nThere are 3 habitual tactics to accomplish bivariate research:\n\n1. Scatterplots\n\n2. Correlation Coefficients\n\n3. Easy Straight Regression\n\nRefer to instance presentations the way to carry out every of some of these bivariate research the use of please see dataset that incorporates details about two variables: (1) Hours spent finding out and (2) Examination rating won through 20 other scholars:\n\n```#manufacture information body\ndf <- information.body(hours=c(1, 1, 1, 2, 2, 2, 3, 3, 3, 3,\n3, 4, 4, 5, 5, 6, 6, 6, 7, 8),\nrating=c(75, 66, 68, 74, 78, 72, 85, 82, 90, 82,\n80, 88, 85, 90, 92, 94, 94, 88, 91, 96))\n\n#view first six rows of information body\n\nhours rating\n1 1 75\n2 1 66\n3 1 68\n4 2 74\n5 2 78\n6 2 72```\n\n### 1. Scatterplots\n\nWe will be able to importance please see syntax to manufacture a scatterplot of hours studied vs. examination rating in R:\n\n```#manufacture scatterplot of hours studied vs. examination rating\nplot(df\\$hours, df\\$rating, pch=16, col=\"steelblue\",\nmajor='Hours Studied vs. Examination Rating',\nxlab='Hours Studied', ylab='Examination Rating')\n```", null, "The x-axis presentations the hours studied and the y-axis presentations the examination rating won.\n\nFrom the plot we will be able to see that there’s a certain courting between the 2 variables: As hours studied will increase, examination rating has a tendency to extend as neatly.\n\n### 2. Correlation Coefficients\n\nA Pearson Correlation Coefficient is a approach to quantify the symmetrical courting between two variables.\n\nWe will be able to importance the cor() serve as in R to calculate the Pearson Correlation Coefficient between two variables:\n\n```#calculate correlation between hours studied and examination rating won\ncor(df\\$hours, df\\$rating)\n\n 0.891306\n```\n\nThe correlation coefficient seems to be 0.891.\n\nThis cost is alike to one, which signifies a powerful certain correlation between hours studied and examination rating won.\n\n### 3. Easy Straight Regression\n\nEasy symmetrical regression is a statistical mode we will be able to importance to search out the equation of the form that best possible “fits” a dataset, which we will be able to upcoming importance to know the precise courting between two variables.\n\nWe will be able to importance the lm() serve as in R to suit a easy symmetrical regression style for hours studied and examination rating won:\n\n```#are compatible easy symmetrical regression style\nare compatible <- lm(rating ~ hours, information=df)\n\n#view abstract of style\nabstract(are compatible)\n\nName:\nlm(system = rating ~ hours, information = df)\n\nResiduals:\nMin 1Q Median 3Q Max\n-6.920 -3.927 1.309 1.903 9.385\n\nCoefficients:\nEstimate Std. Error t cost Pr(>|t|)\n(Intercept) 69.0734 1.9651 35.15 < 2e-16 ***\nhours 3.8471 0.4613 8.34 1.35e-07 ***\n---\nSignif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1\n\nResidual usual error: 4.171 on 18 levels of liberty\nMore than one R-squared: 0.7944,\tAdjusted R-squared: 0.783\nF-statistic: 69.56 on 1 and 18 DF, p-value: 1.347e-07```\n\nThe fitted regression equation seems to be:\n\nExamination Rating = 69.0734 + 3.8471*(hours studied)\n\nThis tells us that every alternative pace studied is related to a median build up of 3.8471 in examination rating.\n\nWe will be able to additionally importance the fitted regression equation to expect the rating {that a} pupil will obtain in accordance with their general hours studied.\n\nFor instance, a pupil who research for three hours is anticipated to obtain a rating of 81.6147:\n\n• Examination Rating = 69.0734 + 3.8471*(hours studied)\n• Examination Rating = 69.0734 + 3.8471*(3)\n• Examination Rating = 81.6147\n\n### Backup Assets\n\nRefer to tutorials grant alternative details about bivariate research:" ]
[ null, "https://www.statology.org/wp-content/uploads/2021/11/biv1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8672693,"math_prob":0.90696794,"size":3982,"snap":"2023-14-2023-23","text_gpt3_token_len":1078,"char_repetition_ratio":0.144545,"word_repetition_ratio":0.070754714,"special_character_ratio":0.29256654,"punctuation_ratio":0.15590744,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98015875,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T21:18:59Z\",\"WARC-Record-ID\":\"<urn:uuid:0b9d5cc3-d93a-4a90-b613-9a54dfdbc308>\",\"Content-Length\":\"57160\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:efc976d5-4edf-487c-843e-fc31606ea8d7>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b918bcd-22e4-4bda-97bd-64a021323325>\",\"WARC-IP-Address\":\"103.157.97.104\",\"WARC-Target-URI\":\"https://statsidea.com/the-way-to-carry-out-bivariate-research-in-r-with-examples/\",\"WARC-Payload-Digest\":\"sha1:LU3GOEGSPD5SDS2CFOLBUH546P2VPB6M\",\"WARC-Block-Digest\":\"sha1:UUXNFVRJM3RXIF3AQP3FI4P7BHGPI5ZP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224656833.99_warc_CC-MAIN-20230609201549-20230609231549-00391.warc.gz\"}"}
http://dwarffortresswiki.org/index.php/40d:Citrine
[ "# 40d:Citrine\n\n `☼` `☼` `☼` `☼` `☼` `☼` `☼` `☼` `=` `=` `=` `☼` `☼` `☼` `☼` `☼` `=` `=` `=` `☼` `☼` `☼` `☼` `☼` `☼` `=` `=` `☼` `☼` `☼` `☼` `☼` `☼` `☼` `=`" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8336802,"math_prob":0.6040403,"size":589,"snap":"2019-43-2019-47","text_gpt3_token_len":155,"char_repetition_ratio":0.07008547,"word_repetition_ratio":0.0,"special_character_ratio":0.22241087,"punctuation_ratio":0.16513762,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999591,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-18T23:49:27Z\",\"WARC-Record-ID\":\"<urn:uuid:15e7df04-464d-48e2-908b-948ada9d5e94>\",\"Content-Length\":\"50390\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:42a26e60-4c96-41a4-ac1b-fe6187036bad>\",\"WARC-Concurrent-To\":\"<urn:uuid:12f985ee-4872-48fd-b4fe-e62f2868ad4e>\",\"WARC-IP-Address\":\"34.95.77.12\",\"WARC-Target-URI\":\"http://dwarffortresswiki.org/index.php/40d:Citrine\",\"WARC-Payload-Digest\":\"sha1:WRY3ZY5H24YSXPU7XUJ7YQVOOJM42QC7\",\"WARC-Block-Digest\":\"sha1:MMT23RVNQW76X5WJOJH5REP52PJKHG6V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496669868.3_warc_CC-MAIN-20191118232526-20191119020526-00525.warc.gz\"}"}
https://au.mathworks.com/matlabcentral/answers/724693-breaking-vector-into-subvectors
[ "# Breaking Vector into Subvectors\n\n5 views (last 30 days)\nJoshua Peters on 23 Jan 2021\nHow can I evenly split a vector into subvectors. My problem is, in my code I am using dde23 with some randomness involved. I am trying to obtain the coordinates of the numerical solution in intervals of 1 second. The randomness I include however changes the length of the time vector and hence the coordinates of the numerical solution everytime I rund the code. Is there a nice way where I can split the solution vector into subvectors that correspond to 1 second time intervals whilst including this randomness? My current code is as seen below.\nHere sol.y corresponds to the coordinates at each time which is in the sol.x vector. I am wanting to split the Car1, Car2 and Car3 vector into intervals corresponding to 1 second.\n%3 Cars\n%Defines the time delay for each x\nlags = [1 1 1];\n%Creates a vector of times\ntspan = [0 10];\nrng('shuffle')\nsol = dde23(@ddefun, lags, @history, tspan);\nhold on\nplot(sol.x,sol.y,'-')\ngrid on\nxlabel('Time (s)');\nylabel('Velocity (m/s)');\nlegend('Car 1','Car 2','Car 3','Location','NorthWest');\nCar1=sol.y(1,:);\nCar2=sol.y(2,:);\nCar3=sol.y(3,:);\nfunction dydt = ddefun(t,x,Z)\na = -1;\nb = 1;\nalpha =0.5;\ns=rng;\nR=rand(1,1);\nrng(s);\n%Generates a random number between -1 and 1\nomega= (b-a).*R + a;\n%Calculates a value slowing down/speeding up car 1\ngamma=-1/2+mod(t+pi*omega*t,1);\nylag1 = Z(:,1);\nylag2 = Z(:,2);\nylag3 = Z(:,3);\n%Specifies system of equations\ndydt = zeros(3,1);\ndydt=[gamma*x(1);alpha*(ylag1(1)-ylag2(2));alpha*(ylag2(2)-ylag3(3))];\nend\n%Gives initial velocity profiles\nfunction s = history(t)\ns = [20 20 20];\nend\nJoshua Peters on 23 Jan 2021\nNote: I have attempted using mat2cell but I am finding that this has issues when the vecors change length every time the code is run. This is because it is not guaranteed that the vectors can be split evenly.\n\nPrahlad Gowtham Katte on 15 Feb 2022\nHello,\nAs per my understanding of the query, you are trying to split the vector into sub vectors. For this you can first get the index array of vector where index (i) corresponds to indices in X axis where elements lie in between i-1 and i. 
Then using those indices you can create cell arrays for \"Car1\", \"Car2\" and \"Car3\".\nThe following modified code will help in achieving the desired functionality.\n%3 Cars\n%Defines the time delay for each x\nlags = [1 1 1];\n%Creates a vector of times\ntspan = [0 10];\nrng(\"shuffle\")\nsol = dde23(@ddefun, lags, @history, tspan);\nhold on\nplot(sol.x,sol.y,'-')\ngrid on\nxlabel(\"Time (s)\");\nylabel(\"Velocity (m/s)\");\nlegend(\"Car 1\",\"Car 2\",\"Car 3\",\"Location\",\"NorthWest\");\nCar1=sol.y(1,:);\nCar2=sol.y(2,:);\nCar3=sol.y(3,:);\n%%% Here is the code I've added\nmaximum=max(sol.x); %Find the maximum value of the x axis\nCar1_split={};\nCar2_split={};\nCar3_split={};\nfor j=1:maximum\nif j==1\nl=sol.x>=j-1; %Include 0 in the 1st subarray\nelse\nl=sol.x>j-1;\nend\nm=sol.x<=j;\nn=l&m;\nidx=find(n); %Find the indices of the elements satisfying the above conditions\n%Append the selected Car1/2/3 elements at those indices to the split cell arrays\nCar1_split=[Car1_split Car1(idx)];\nCar2_split=[Car2_split Car2(idx)];\nCar3_split=[Car3_split Car3(idx)];\nend\nfunction dydt = ddefun(t,x,Z)\na = -1;\nb = 1;\nalpha = 0.5;\ns = rng;\nR = rand(1,1);\nrng(s);\n%Generates a random number between -1 and 1\nomega = (b-a).*R + a;\n%Calculates a value slowing down/speeding up car 1\ngamma = -1/2 + mod(t+pi*omega*t,1);\nylag1 = Z(:,1);\nylag2 = Z(:,2);\nylag3 = Z(:,3);\n%Specifies the system of equations\ndydt = zeros(3,1);\ndydt = [gamma*x(1); alpha*(ylag1(1)-ylag2(2)); alpha*(ylag2(2)-ylag3(3))];\nend\n%Gives initial velocity profiles\nfunction s = history(t)\ns = [20 20 20];\nend\nFor a better understanding of the functions used, please refer to the following links.\nHope it helps." ]
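The same one-second binning idea can also be expressed outside MATLAB. As a rough cross-check, here is a sketch in Python/NumPy (illustrative only, with made-up sample data standing in for sol.x and the Car vectors):

```python
import numpy as np

# Irregular, random-length time grid standing in for sol.x,
# and made-up velocity samples standing in for Car1.
t = np.sort(np.random.uniform(0, 10, size=137))
y = 20 + 0.1 * np.cumsum(np.random.randn(t.size))

bins = np.arange(0, np.ceil(t.max()) + 1)    # bin edges [0, 1, 2, ...]
idx = np.digitize(t, bins)                   # bin index of every sample
subvectors = [y[idx == k] for k in range(1, len(bins))]

print([len(s) for s in subvectors])          # per-second counts vary run to run
```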
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7163528,"math_prob":0.9942173,"size":3935,"snap":"2022-27-2022-33","text_gpt3_token_len":1215,"char_repetition_ratio":0.097430676,"word_repetition_ratio":0.31986532,"special_character_ratio":0.31893265,"punctuation_ratio":0.18151447,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9976685,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-11T15:26:39Z\",\"WARC-Record-ID\":\"<urn:uuid:02c0c713-aa5f-42b2-9cc1-ce9d24c7584c>\",\"Content-Length\":\"138832\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5059edec-6b46-4e5c-b8b5-77daeed632e5>\",\"WARC-Concurrent-To\":\"<urn:uuid:0c9824e4-c29f-47d7-bc21-c75d94729109>\",\"WARC-IP-Address\":\"104.68.243.15\",\"WARC-Target-URI\":\"https://au.mathworks.com/matlabcentral/answers/724693-breaking-vector-into-subvectors\",\"WARC-Payload-Digest\":\"sha1:QMQTYOV2XO6D4A6WZZSWENMXPTDWHI6J\",\"WARC-Block-Digest\":\"sha1:6GKU44NIS6JSQEJQC67F3M7GY6ZOF3G7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571472.69_warc_CC-MAIN-20220811133823-20220811163823-00564.warc.gz\"}"}
http://forums.wolfram.com/mathgroup/archive/2008/Apr/msg00886.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Re: Column Product of a Matrix of zeros and 1's\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg88020] Re: Column Product of a Matrix of zeros and 1's\n• From: Bill Rowe <readnews at sbcglobal.net>\n• Date: Tue, 22 Apr 2008 06:26:53 -0400 (EDT)\n\n```On 4/21/08 at 3:26 AM, petervansummeren at gmail.com (P_ter) wrote:\n\n>I have a matrix of zero's and ones: myGlobalMatrix; also a set\n>called thisSet with the columns of myGlobalMatrix which have to be\n>multiplied (e.g. column 1,3,4). I would like to know the positions\n>of the 1's. I do this as follows:\n>Flatten[Position[Product[myGlobalMatrix[[All,i]],{i, thisSet}],1]]\n>But this is to general. It does not use that the matrix consists of\n>zeros and ones. Can this be taken into account? with friendly\n\nI am unclear as to what you are trying to accomplish. If\nmyGloblMatrix consists of just ones and zeros, then probably the\nfastest way to locate all of the ones would be\n\nMost@ArrayRules@SparseArray@myGlobalMatricx\n\nIt looks like the code you have above locates all rows of\nmyGlobalMatrix where the first thisSet columns are 1\n\nIf this is what you want, then one way would be\n\nPosition[Clip[Total/@(myGlobalMatrix[[All,;;thisSet]]),{thisSet,thisSet},{0=\n,1}],1]\n\nHere, I assume you are using verision 6.x\n\n```\n\n• Prev by Date: Defining output formats\n• Next by Date: Re: Converting Power terms to Times terms\n• Previous by thread: Column Product of a Matrix of zeros and 1's\n• Next by thread: Re: Column Product of a Matrix of zeros and 1's" ]
[ null, "http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif", null, "http://forums.wolfram.com/mathgroup/images/head_archive.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/2.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/8.gif", null, "http://forums.wolfram.com/mathgroup/images/search_archive.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8818376,"math_prob":0.7541515,"size":1242,"snap":"2020-24-2020-29","text_gpt3_token_len":366,"char_repetition_ratio":0.13408723,"word_repetition_ratio":0.062176164,"special_character_ratio":0.27616748,"punctuation_ratio":0.16666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9840888,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-06T18:40:01Z\",\"WARC-Record-ID\":\"<urn:uuid:f2a797cf-a317-4192-be59-3a720bbed449>\",\"Content-Length\":\"44519\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b7cc50be-7a33-4260-9d7a-327198f93c06>\",\"WARC-Concurrent-To\":\"<urn:uuid:6f0b2d0e-be45-4464-bdde-b19a58b9ff05>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2008/Apr/msg00886.html\",\"WARC-Payload-Digest\":\"sha1:FK3TRW7YMUOWEYNUJRA22YK7F2R7GT7B\",\"WARC-Block-Digest\":\"sha1:4RNKVLOUUAXKU2A677MVU3MMDF2YOJHR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348517506.81_warc_CC-MAIN-20200606155701-20200606185701-00351.warc.gz\"}"}
https://www.geeksforgeeks.org/dynamic-programming-vs-divide-and-conquer/?ref=rp
[ "Related Articles\n\n# Dynamic Programming vs Divide-and-Conquer\n\n• Difficulty Level : Easy\n• Last Updated : 04 Jul, 2021\n\nTL;DR\n\nIn this article I’m trying to explain the difference/similarities between dynamic programming and divide and conquer approaches based on two examples: binary search and minimum edit distance (Levenshtein distance).\nThe Problem\nWhen I started to learn algorithms it was hard for me to understand the main idea of dynamic programming (DP) and how it is different from divide-and-conquer (DC) approach. When it gets to comparing those two paradigms usually Fibonacci function comes to the rescue as great example. But when we’re trying to solve the same problem using both DP and DC approaches to explain each of them, it feels for me like we may lose valuable detail that might help to catch the difference faster. And these detail tells us that each technique serves best for different types of problems.\nI’m still in the process of understanding DP and DC difference and I can’t say that I’ve fully grasped the concepts so far. But I hope this article will shed some extra light and help you to do another step of learning such valuable algorithm paradigms as dynamic programming and divide-and-conquer.\nDynamic Programming and Divide-and-Conquer Similarities\nAs I see it for now I can say that dynamic programming is an extension of divide and conquer paradigm.\nI would not treat them as something completely different. Because they both work by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.\nSo why do we still have different paradigm names then and why I called dynamic programming an extension. It is because dynamic programming approach may be applied to the problem only if the problem has certain restrictions or prerequisites. And after that dynamic programming extends divide and conquer approach with memoization or tabulation technique.\nLet’s go step by step…\nDynamic Programming Prerequisites/Restrictions\nAs we’ve just discovered there are two key attributes that divide and conquer problem must have in order for dynamic programming to be applicable:\n\nOnce these two conditions are met we can say that this divide and conquer problem may be solved using dynamic programming approach.\nDynamic Programming Extension for Divide and Conquer\nDynamic programming approach extends divide and conquer approach with two techniques (memoization and tabulation) that both have a purpose of storing and re-using sub-problems solutions that may drastically improve performance. For example naive recursive implementation of Fibonacci function has time complexity of O(2^n) where DP solution doing the same with only O(n) time.\nMemoization (top-down cache filling) refers to the technique of caching and reusing previously computed results. The memoized fib function would thus look like this:\n\n```memFib(n) {\nif (mem[n] is undefined)\nif (n < 2) result = n\nelse result = memFib(n-2) + memFib(n-1)\nmem[n] = result\nreturn mem[n]\n}```\n\nTabulation (bottom-up cache filling) is similar but focuses on filling the entries of the cache. Computing the values in the cache is easiest done iteratively. 
The tabulation version of fib would look like this:\n\n```tabFib(n) {\n  mem[0] = 0\n  mem[1] = 1\n  for i = 2...n\n    mem[i] = mem[i-2] + mem[i-1]\n  return mem[n]\n}```\n\nThe main idea you should grasp here is that because our divide-and-conquer problem has overlapping sub-problems, the caching of sub-problem solutions becomes possible, and thus memoization/tabulation steps onto the scene.\nSo What Is the Difference Between DP and DC After All?\nSince we're now familiar with the DP prerequisites and its methodologies, we're ready to put all that was mentioned above into one picture.\n\nDynamic programming and divide-and-conquer paradigms dependency\n\nLet's try to solve some problems using the DP and DC approaches to make this illustration clearer.\nDivide and Conquer Example: Binary Search\nThe binary search algorithm, also known as half-interval search, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array; if they are unequal, the half in which the target cannot lie is eliminated and the search continues on the remaining half until the target value is found. If the search ends with the remaining half being empty, the target is not in the array.\nExample\nHere is a visualization of the binary search algorithm where 4 is the target value.\n\nBinary search algorithm logic\n\nLet's draw the same logic, but in the form of a decision tree.\n\nBinary search algorithm decision tree\n\nYou may clearly see here the divide-and-conquer principle of solving the problem. We iteratively break the original array into sub-arrays and try to find the required element in there.\nCan we apply dynamic programming to it? No. It is because there are no overlapping sub-problems. Every time we split the array into completely independent parts. And according to the dynamic programming prerequisites/restrictions, the sub-problems must overlap somehow.\nNormally, every time you draw a decision tree and it is actually a tree (and not a decision graph), it means that you don't have overlapping sub-problems and this is not a dynamic programming problem.\nThe Code\nHere you may find the complete source code of the binary search function with test cases and explanations.\n\n```function binarySearch(sortedArray, seekElement) {\n  let startIndex = 0;\n  let endIndex = sortedArray.length - 1;\n  while (startIndex <= endIndex) {\n    const middleIndex = startIndex + Math.floor((endIndex - startIndex) / 2);\n    // If we've found the element just return its position.\n    if (sortedArray[middleIndex] === seekElement) {\n      return middleIndex;\n    }\n    // Decide which half to choose: left or right one.\n    if (sortedArray[middleIndex] < seekElement) {\n      // Go to the right half of the array.\n      startIndex = middleIndex + 1;\n    } else {\n      // Go to the left half of the array.\n      endIndex = middleIndex - 1;\n    }\n  }\n  return -1;\n}```\n\nDynamic Programming Example: Minimum Edit Distance\nNormally when it comes to dynamic programming examples, the Fibonacci number algorithm is taken by default. But let's take a slightly more complex algorithm to have some variety, which should help us grasp the concept.\nMinimum Edit Distance (or Levenshtein Distance) is a string metric for measuring the difference between two sequences.
Informally, the Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other.\nExample\nFor example, the Levenshtein distance between “kitten” and “sitting” is 3, since the following three edits change one into the other, and there is no way to do it with fewer than three edits:\n\n1. kitten → sitten (substitution of “s” for “k”)\n2. sitten → sittin (substitution of “i” for “e”)\n3. sittin → sitting (insertion of “g” at the end)\n\nApplications\nThis has a wide range of applications, for instance, spell checkers, correction systems for optical character recognition, fuzzy string searching, and software to assist natural language translation based on translation memory.\nMathematical Definition\nMathematically, the Levenshtein distance between two strings a, b (of length |a| and |b| respectively) is given by the function lev(|a|, |b|) where\n\nlev(i, j) = max(i, j), if min(i, j) = 0;\nlev(i, j) = min( lev(i-1, j) + 1, lev(i, j-1) + 1, lev(i-1, j-1) + cost ) otherwise, where cost = 0 if the i-th symbol of a equals the j-th symbol of b, and 1 otherwise.\n\nNote that the first element in the minimum corresponds to deletion (from a to b), the second to insertion and the third to match or mismatch, depending on whether the respective symbols are the same.\nExplanation\nOk, let's try to figure out what that formula is talking about. Let's take a simple example of finding the minimum edit distance between the strings ME and MY. Intuitively you already know that the minimum edit distance here is 1 operation, and this operation is “replace E with Y”. But let's try to formalize it in the form of an algorithm in order to be able to do more complex examples like transforming Saturday into Sunday.\nTo apply the formula to the ME→MY transformation we need to know the minimum edit distances of the ME→M, M→MY and M→M transformations beforehand. Then we will need to pick the minimum one and add +1 operation to transform the last letters E→Y.\nSo we can already see here the recursive nature of the solution: the minimum edit distance of the ME→MY transformation is calculated based on three previously possible transformations. Thus we may say that this is a divide and conquer algorithm.\nTo explain this further, let's draw the following matrix.\n\nSimple example of finding minimum edit distance between ME and MY strings\n\nCell (0, 1) contains red number 1. It means that we need 1 operation to transform M to an empty string: delete M. This is why this number is red.\nCell (0, 2) contains red number 2. It means that we need 2 operations to transform ME to an empty string: delete E, delete M.\nCell (1, 0) contains green number 1. It means that we need 1 operation to transform an empty string to M: insert M. This is why this number is green.\nCell (2, 0) contains green number 2. It means that we need 2 operations to transform an empty string to MY: insert Y, insert M.\nCell (1, 1) contains number 0. It means that it costs nothing to transform M to M.\nCell (1, 2) contains red number 1. It means that we need 1 operation to transform ME to M: delete E.\nAnd so on…\nThis looks easy for such a small matrix as ours (it is only 3×3). But how could we calculate all those numbers for bigger matrices (let's say a 9×7 one, for the Saturday→Sunday transformation)?\nThe good news is that according to the formula you only need the three adjacent cells (i-1, j), (i-1, j-1), and (i, j-1) to calculate the number for the current cell (i, j). All we need to do is to find the minimum of those three cells and then add +1 in case we have different letters in the i-th row and j-th column.\nSo once again you may clearly see the recursive nature of the problem.\n\nRecursive nature of minimum edit distance problem\n\nOk, we've just found out that we're dealing with a divide and conquer problem here. But can we apply the dynamic programming approach to it?
Does this problem satisfy our overlapping sub-problems and optimal substructure restrictions? Yes. Let's see it in the decision graph.\n\nDecision graph for minimum edit distance with overlapping sub-problems\n\nFirst of all, this is not a decision tree. It is a decision graph. You may see a number of overlapping sub-problems in the picture that are marked in red. Also, there is no way to reduce the number of operations and make it less than the minimum of those three adjacent cells from the formula.\nAlso you may notice that each cell number in the matrix is calculated based on previous ones. Thus the tabulation technique (filling the cache in the bottom-up direction) is applied here. You'll see it in the code example below.\nApplying these principles further, we may solve more complicated cases like the Saturday → Sunday transformation.\n\nMinimum edit distance to convert Saturday to Sunday\n\nThe Code\nHere you may find the complete source code of the minimum edit distance function with test cases and explanations.\n\n```function levenshteinDistance(a, b) {\n  const distanceMatrix = Array(b.length + 1)\n    .fill(null)\n    .map(\n      () => Array(a.length + 1).fill(null)\n    );\n\n  for (let i = 0; i <= a.length; i += 1) {\n    distanceMatrix[0][i] = i;\n  }\n\n  for (let j = 0; j <= b.length; j += 1) {\n    distanceMatrix[j][0] = j;\n  }\n\n  for (let j = 1; j <= b.length; j += 1) {\n    for (let i = 1; i <= a.length; i += 1) {\n      const indicator = a[i - 1] === b[j - 1] ? 0 : 1;\n\n      distanceMatrix[j][i] = Math.min(\n        distanceMatrix[j][i - 1] + 1, // deletion\n        distanceMatrix[j - 1][i] + 1, // insertion\n        distanceMatrix[j - 1][i - 1] + indicator, // substitution\n      );\n    }\n  }\n\n  return distanceMatrix[b.length][a.length];\n}```\n\nConclusion\nIn this article we have compared two algorithmic approaches: dynamic programming and divide-and-conquer. We've found out that dynamic programming is based on the divide-and-conquer principle and may be applied only if the problem has overlapping sub-problems and optimal substructure (as in the Levenshtein distance case). Dynamic programming then uses the memoization or tabulation technique to store solutions of overlapping sub-problems for later use.\nI hope this article hasn't brought you more confusion but rather shed some light on these two important algorithmic concepts! 🙂\nYou may find more examples of divide and conquer and dynamic programming problems with explanations, comments and test cases in the JavaScript Algorithms and Data Structures repository.\nHappy coding!" ]
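As a quick sanity check of the function above (my own addition, not part of the scraped article), it can be exercised on the article's own examples:

```
// Expected distances from the examples discussed in the article.
console.log(levenshteinDistance('ME', 'MY'));           // 1 (replace E with Y)
console.log(levenshteinDistance('kitten', 'sitting'));  // 3
console.log(levenshteinDistance('Saturday', 'Sunday')); // 3
```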
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8952763,"math_prob":0.9412089,"size":12922,"snap":"2021-31-2021-39","text_gpt3_token_len":2803,"char_repetition_ratio":0.12935439,"word_repetition_ratio":0.050352942,"special_character_ratio":0.21451788,"punctuation_ratio":0.08550969,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9913268,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-31T10:17:31Z\",\"WARC-Record-ID\":\"<urn:uuid:4317d9b6-7b3d-4b36-a359-0943131e89ca>\",\"Content-Length\":\"114458\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aac57369-6b5b-4414-be85-232748ce3d95>\",\"WARC-Concurrent-To\":\"<urn:uuid:d9a30f3a-dba0-4862-a0c7-dd23759cb65d>\",\"WARC-IP-Address\":\"23.222.5.151\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/dynamic-programming-vs-divide-and-conquer/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:DDMVTKHUTEP2SQ6DXZTEJCSMGISF3RQ7\",\"WARC-Block-Digest\":\"sha1:Q4M5JKHAYCXKQN3UE3YWHO42C3OMUCNL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154085.58_warc_CC-MAIN-20210731074335-20210731104335-00692.warc.gz\"}"}
https://chemistrycalculatorpro.com/empirical-calculator/
[ "# Empirical Calculator\n\nThe Online Empirical Calculator provides the empirical formula of chemical composition. It just takes the chemical composition of the substance and generates precise results.\n\nEmpirical Formula Calculator: There are several steps involved in calculating the empirical formula for chemical compounds. You can get the results quickly by using our user-friendly Empirical Formula Calculator. In the sections below, you'll find full instructions for determining the empirical formula as well as answers to the problems.\n\n## Empirical Rule Formula\n\nIn chemistry, an empirical formula in a given chemical compound yields the simplest positive integer ratio of the chemical compound's atoms. It does not provide complete information about the absolute number of atoms present in a single molecule of a chemical compound, unlike the molecular formula. If a compound's molecular formula cannot be reduced any further, the chemical compound's empirical formula is the same as the molecular formula.\n\n### How to Determine Empirical Formula?\n\nExamine the simple procedure for obtaining the empirical of a chemical compound.\n\n• Step 1: Understand a chemical compound's chemical composition.\n• Step 2: Calculate each component's molar mass.\n• Step 3: Convert each component's molar mass to moles.\n• Step 4: Find the mole value that is the smallest.\n• Step 5: Subtract the smallest mole value from all of the components.\n• Step 6: Divide each mole value by the fractional component.\n• Step 7: Round your results to the nearest whole number.\n• Step 8: To get the empirical formula, combine the components and numbers.\n\n### How Do I Use the Empirical Formula Calculator?\n\nThe following is the procedure how to use the empirical calculator:\n\n• Step 1: Fill in the appropriate input field with the chemical composition.\n• Step 2: To acquire the result, click the \"Calculate Empirical Formula\" button.\n• Step 3: Finally, in the output field, the empirical formula for the supplied chemical composition will be presented.\n\n### FAQ on Empirical Formula\n\n1. What is the importance of determining the empirical formula?\n\nIn chemistry, empirical formulas are useful because they show the link between the number of atoms in each element in a molecule.\n\n2. What is an example of empirical formula?\n\nThe empirical formula of a chemical compound is the simplest whole-number ratio of atoms present in the substance, as defined in chemistry. The empirical formula of sulphur monoxide, or SO, as well as the empirical formula of disulfur dioxide, S2O2, are basic examples of this concept.\n\n3. What method do you use to understand empirical formulas?\n\nA formula that shows the elements in a compound in their lowest whole-number ratio is known as an empirical formula. Glucose is a simple sugar that serves as the primary source of energy for cells. C6H12O6 is its molecular formula. The empirical formula for glucose is CH2O because each of the subscripts is divisible by 6.\n\n4. Who came up with the empirical formula?\n\nWhen the skewness is minor, the empirical formula (mean - mode) = 3(mean - median) found by Karl Pearson seems to work. In reality, under some circumstances, it can be demonstrated to be roughly correct." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9227387,"math_prob":0.9809892,"size":2263,"snap":"2022-27-2022-33","text_gpt3_token_len":434,"char_repetition_ratio":0.23328906,"word_repetition_ratio":0.0,"special_character_ratio":0.1882457,"punctuation_ratio":0.10224439,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986365,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T19:35:38Z\",\"WARC-Record-ID\":\"<urn:uuid:a9fe2e9c-0db0-4e41-a966-998371e1764e>\",\"Content-Length\":\"22827\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:64d05f26-1041-4bd2-a566-e393e2835001>\",\"WARC-Concurrent-To\":\"<urn:uuid:682135d4-fb54-4f44-b2c3-e155804a58ce>\",\"WARC-IP-Address\":\"134.209.152.22\",\"WARC-Target-URI\":\"https://chemistrycalculatorpro.com/empirical-calculator/\",\"WARC-Payload-Digest\":\"sha1:BQ7SUHPFNAJG24Y4WL65LHBXH5MIAQK7\",\"WARC-Block-Digest\":\"sha1:BQEXDTUXYAG4CV6PGLJIDPAHFCKOKBCS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104676086.90_warc_CC-MAIN-20220706182237-20220706212237-00369.warc.gz\"}"}
http://taja.dynip.com/June2009.htm
[ "Daily report for June 2009\n```Averages\\Extremes for day :01\n------------------------------------------------------------\n\nAverage temperature = 59.8°F\nAverage humidity = 62%\nAverage dewpoint = 46.1°F\nAverage barometer = 29.9 in.\nAverage windspeed = 1.6 mph\nAverage gustspeed = 2.8 mph\nAverage direction = 108° (ESE)\nRainfall for month = 0.04 in.\nRainfall for year = 5.47 in.\nRainfall for day = 0.04 in.\nMaximum rain per minute = 0.04 in. on day 01 at time 14:26\nMaximum temperature = 70.9°F on day 01 at time 11:18\nMinimum temperature = 51.8°F on day 01 at time 04:40\nMaximum humidity = 89% on day 01 at time 14:52\nMinimum humidity = 34% on day 01 at time 11:30\nMaximum pressure = 30.140 in. on day 01 at time 00:00\nMinimum pressure = 29.845 in. on day 01 at time 01:44\nMaximum windspeed = 6.9 mph on day 01 at time 18:00\nMaximum gust speed = 12 mph from 135 °( SE) on day 01 at time 17:39\nMaximum heat index = 79.0°F on day 01 at time 07:44\n\nAverages\\Extremes for day :02\n------------------------------------------------------------\n\nAverage temperature = 50.8°F\nAverage humidity = 71%\nAverage dewpoint = 41.4°F\nAverage barometer = 30.2 in.\nAverage windspeed = 0.9 mph\nAverage gustspeed = 2.1 mph\nAverage direction = 81° ( E )\nRainfall for month = 0.20 in.\nRainfall for year = 5.63 in.\nRainfall for day = 0.16 in.\nMaximum rain per minute = 0.04 in. on day 02 at time 22:55\nMaximum temperature = 55.9°F on day 02 at time 16:10\nMinimum temperature = 46.8°F on day 02 at time 23:31\nMaximum humidity = 96% on day 02 at time 23:57\nMinimum humidity = 48% on day 02 at time 16:11\nMaximum pressure = 30.288 in. on day 02 at time 14:55\nMinimum pressure = 30.140 in. on day 02 at time 00:40\nMaximum windspeed = 4.6 mph on day 02 at time 18:43\nMaximum gust speed = 7 mph from 090 °( E ) on day 02 at time 16:43\nMaximum heat index = 55.9°F on day 02 at time 16:10\n\nAverages\\Extremes for day :03\n------------------------------------------------------------\n\nAverage temperature = 47.9°F\nAverage humidity = 89%\nAverage dewpoint = 44.8°F\nAverage barometer = 30.2 in.\nAverage windspeed = 2.0 mph\nAverage gustspeed = 3.2 mph\nAverage direction = 101° ( E )\nRainfall for month = 0.39 in.\nRainfall for year = 5.83 in.\nRainfall for day = 0.20 in.\nMaximum rain per minute = 0.04 in. on day 03 at time 12:13\nMaximum temperature = 52.2°F on day 03 at time 18:30\nMinimum temperature = 44.2°F on day 03 at time 06:21\nMaximum humidity = 100% on day 03 at time 07:07\nMinimum humidity = 76% on day 03 at time 19:16\nMaximum pressure = 30.229 in. on day 03 at time 02:10\nMinimum pressure = 30.111 in. on day 03 at time 18:40\nMaximum windspeed = 5.8 mph on day 03 at time 12:05\nMaximum gust speed = 10 mph from 158 °(SSE) on day 03 at time 06:26\nMaximum heat index = 52.2°F on day 03 at time 18:30\n\nAverages\\Extremes for day :04\n------------------------------------------------------------\n\nAverage temperature = 57.7°F\nAverage humidity = 61%\nAverage dewpoint = 42.8°F\nAverage barometer = 30.1 in.\nAverage windspeed = 1.2 mph\nAverage gustspeed = 2.1 mph\nAverage direction = 324° ( NW)\nRainfall for month = 0.39 in.\nRainfall for year = 5.83 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 04 at time 23:57\nMaximum temperature = 68.4°F on day 04 at time 17:05\nMinimum temperature = 44.2°F on day 04 at time 05:47\nMaximum humidity = 96% on day 04 at time 00:47\nMinimum humidity = 35% on day 04 at time 17:18\nMaximum pressure = 30.210 in. 
on day 04 at time 02:09\nMinimum pressure = 30.003 in. on day 04 at time 18:54\nMaximum windspeed = 8.1 mph on day 04 at time 23:35\nMaximum gust speed = 10 mph from 225 °( SW) on day 04 at time 23:38\nMaximum heat index = 79.1°F on day 04 at time 12:31\n\nAverages\\Extremes for day :05\n------------------------------------------------------------\n\nAverage temperature = 54.9°F\nAverage humidity = 57%\nAverage dewpoint = 39.8°F\nAverage barometer = 30.0 in.\nAverage windspeed = 4.8 mph\nAverage gustspeed = 7.5 mph\nAverage direction = 105° (ESE)\nRainfall for month = 0.39 in.\nRainfall for year = 5.83 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 05 at time 23:57\nMaximum temperature = 63.1°F on day 05 at time 13:11\nMinimum temperature = 48.9°F on day 05 at time 06:10\nMaximum humidity = 78% on day 05 at time 06:16\nMinimum humidity = 45% on day 05 at time 13:25\nMaximum pressure = 30.062 in. on day 05 at time 11:21\nMinimum pressure = 29.804 in. on day 05 at time 23:40\nMaximum windspeed = 11.5 mph on day 05 at time 22:34\nMaximum gust speed = 22 mph from 113 °(ESE) on day 05 at time 22:33\nMaximum heat index = 63.1°F on day 05 at time 13:11\n\nAverages\\Extremes for day :06\n------------------------------------------------------------\n\nAverage temperature = 46.6°F\nAverage humidity = 86%\nAverage dewpoint = 42.4°F\nAverage barometer = 29.8 in.\nAverage windspeed = 2.3 mph\nAverage gustspeed = 4.0 mph\nAverage direction = 97° ( E )\nRainfall for month = 0.39 in.\nRainfall for year = 5.83 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 06 at time 23:57\nMaximum temperature = 55.2°F on day 06 at time 15:17\nMinimum temperature = 41.5°F on day 06 at time 05:51\nMaximum humidity = 97% on day 06 at time 23:57\nMinimum humidity = 61% on day 06 at time 00:19\nMaximum pressure = 29.892 in. on day 06 at time 23:57\nMinimum pressure = 29.715 in. on day 06 at time 15:40\nMaximum windspeed = 9.2 mph on day 06 at time 03:52\nMaximum gust speed = 14 mph from 113 °(ESE) on day 06 at time 03:51\nMaximum heat index = 55.2°F on day 06 at time 15:17\n\nAverages\\Extremes for day :07\n------------------------------------------------------------\n\nAverage temperature = 43.8°F\nAverage humidity = 97%\nAverage dewpoint = 43.1°F\nAverage barometer = 29.9 in.\nAverage windspeed = 0.6 mph\nAverage gustspeed = 1.5 mph\nAverage direction = 115° (ESE)\nRainfall for month = 0.71 in.\nRainfall for year = 6.14 in.\nRainfall for day = 0.31 in.\nMaximum rain per minute = 0.08 in. on day 07 at time 18:24\nMaximum temperature = 48.4°F on day 07 at time 14:03\nMinimum temperature = 38.4°F on day 07 at time 22:03\nMaximum humidity = 100% on day 07 at time 20:24\nMinimum humidity = 92% on day 07 at time 23:57\nMaximum pressure = 30.010 in. on day 07 at time 23:39\nMinimum pressure = 29.833 in. on day 07 at time 06:09\nMaximum windspeed = 4.6 mph on day 07 at time 03:43\nMaximum gust speed = 6 mph from 113 °(ESE) on day 07 at time 22:52\nMaximum heat index = 48.4°F on day 07 at time 14:03\n\nAverages\\Extremes for day :08\n------------------------------------------------------------\n\nAverage temperature = 47.4°F\nAverage humidity = 69%\nAverage dewpoint = 37.0°F\nAverage barometer = 30.0 in.\nAverage windspeed = 1.7 mph\nAverage gustspeed = 3.1 mph\nAverage direction = 75° (ENE)\nRainfall for month = 0.71 in.\nRainfall for year = 6.14 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. 
on day 08 at time 23:57\nMaximum temperature = 56.7°F on day 08 at time 15:35\nMinimum temperature = 38.7°F on day 08 at time 01:06\nMaximum humidity = 93% on day 08 at time 00:01\nMinimum humidity = 47% on day 08 at time 15:39\nMaximum pressure = 30.010 in. on day 08 at time 10:25\nMinimum pressure = 29.922 in. on day 08 at time 23:57\nMaximum windspeed = 6.9 mph on day 08 at time 17:46\nMaximum gust speed = 10 mph from 090 °( E ) on day 08 at time 17:46\nMaximum heat index = 56.7°F on day 08 at time 15:35\n\nAverages\\Extremes for day :09\n------------------------------------------------------------\n\nAverage temperature = 51.4°F\nAverage humidity = 84%\nAverage dewpoint = 46.4°F\nAverage barometer = 29.9 in.\nAverage windspeed = 1.5 mph\nAverage gustspeed = 2.7 mph\nAverage direction = 99° ( E )\nRainfall for month = 1.18 in.\nRainfall for year = 6.61 in.\nRainfall for day = 0.47 in.\nMaximum rain per minute = 0.04 in. on day 09 at time 14:40\nMaximum temperature = 59.7°F on day 09 at time 13:15\nMinimum temperature = 46.9°F on day 09 at time 05:49\nMaximum humidity = 100% on day 09 at time 15:09\nMinimum humidity = 66% on day 09 at time 19:47\nMaximum pressure = 30.010 in. on day 09 at time 23:57\nMinimum pressure = 29.892 in. on day 09 at time 04:49\nMaximum windspeed = 6.9 mph on day 09 at time 11:30\nMaximum gust speed = 12 mph from 113 °(ESE) on day 09 at time 11:03\nMaximum heat index = 59.7°F on day 09 at time 13:15\n\nAverages\\Extremes for day :10\n------------------------------------------------------------\n\nAverage temperature = 47.7°F\nAverage humidity = 96%\nAverage dewpoint = 46.7°F\nAverage barometer = 30.0 in.\nAverage windspeed = 1.2 mph\nAverage gustspeed = 2.4 mph\nAverage direction = 96° ( E )\nRainfall for month = 1.61 in.\nRainfall for year = 7.05 in.\nRainfall for day = 0.43 in.\nMaximum rain per minute = 0.08 in. on day 10 at time 08:18\nMaximum temperature = 51.2°F on day 10 at time 13:48\nMinimum temperature = 44.6°F on day 10 at time 05:56\nMaximum humidity = 100% on day 10 at time 13:08\nMinimum humidity = 85% on day 10 at time 00:28\nMaximum pressure = 30.010 in. on day 10 at time 23:57\nMinimum pressure = 29.951 in. on day 10 at time 17:55\nMaximum windspeed = 6.9 mph on day 10 at time 17:41\nMaximum gust speed = 10 mph from 113 °(ESE) on day 10 at time 17:07\nMaximum heat index = 51.2°F on day 10 at time 13:48\n\nAverages\\Extremes for day :11\n------------------------------------------------------------\n\nAverage temperature = 50.7°F\nAverage humidity = 87%\nAverage dewpoint = 46.6°F\nAverage barometer = 30.0 in.\nAverage windspeed = 0.5 mph\nAverage gustspeed = 1.0 mph\nAverage direction = 182° ( S )\nRainfall for month = 1.65 in.\nRainfall for year = 7.09 in.\nRainfall for day = 0.04 in.\nMaximum rain per minute = 0.04 in. on day 11 at time 14:27\nMaximum temperature = 58.1°F on day 11 at time 18:02\nMinimum temperature = 42.3°F on day 11 at time 05:06\nMaximum humidity = 100% on day 11 at time 06:11\nMinimum humidity = 64% on day 11 at time 18:03\nMaximum pressure = 30.010 in. on day 11 at time 03:10\nMinimum pressure = 29.922 in. 
on day 11 at time 19:10\nMaximum windspeed = 5.8 mph on day 11 at time 23:41\nMaximum gust speed = 8 mph from 225 °( SW) on day 11 at time 23:40\nMaximum heat index = 58.1°F on day 11 at time 18:02\n\nAverages\\Extremes for day :12\n------------------------------------------------------------\n\nAverage temperature = 55.9°F\nAverage humidity = 61%\nAverage dewpoint = 41.2°F\nAverage barometer = 30.0 in.\nAverage windspeed = 1.0 mph\nAverage gustspeed = 2.0 mph\nAverage direction = 206° (SSW)\nRainfall for month = 1.65 in.\nRainfall for year = 7.09 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 12 at time 23:57\nMaximum temperature = 65.4°F on day 12 at time 15:44\nMinimum temperature = 44.2°F on day 12 at time 05:28\nMaximum humidity = 92% on day 12 at time 00:59\nMinimum humidity = 34% on day 12 at time 16:10\nMaximum pressure = 30.040 in. on day 12 at time 23:57\nMinimum pressure = 29.951 in. on day 12 at time 05:10\nMaximum windspeed = 6.9 mph on day 12 at time 01:49\nMaximum gust speed = 9 mph from 225 °( SW) on day 12 at time 01:47\nMaximum heat index = 78.7°F on day 12 at time 14:21\n\nAverages\\Extremes for day :13\n------------------------------------------------------------\n\nAverage temperature = 63.1°F\nAverage humidity = 52%\nAverage dewpoint = 44.3°F\nAverage barometer = 30.0 in.\nAverage windspeed = 2.8 mph\nAverage gustspeed = 4.5 mph\nAverage direction = 117° (ESE)\nRainfall for month = 1.65 in.\nRainfall for year = 7.09 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 13 at time 23:57\nMaximum temperature = 74.8°F on day 13 at time 15:18\nMinimum temperature = 47.8°F on day 13 at time 04:05\nMaximum humidity = 75% on day 13 at time 23:51\nMinimum humidity = 32% on day 13 at time 12:21\nMaximum pressure = 30.040 in. on day 13 at time 09:10\nMinimum pressure = 29.892 in. on day 13 at time 23:55\nMaximum windspeed = 8.1 mph on day 13 at time 15:57\nMaximum gust speed = 12 mph from 090 °( E ) on day 13 at time 18:27\nMaximum heat index = 78.8°F on day 13 at time 08:10\n\nAverages\\Extremes for day :14\n------------------------------------------------------------\n\nAverage temperature = 63.7°F\nAverage humidity = 66%\nAverage dewpoint = 51.9°F\nAverage barometer = 29.9 in.\nAverage windspeed = 3.2 mph\nAverage gustspeed = 4.9 mph\nAverage direction = 109° (ESE)\nRainfall for month = 1.69 in.\nRainfall for year = 7.13 in.\nRainfall for day = 0.04 in.\nMaximum rain per minute = 0.04 in. on day 14 at time 12:50\nMaximum temperature = 73.7°F on day 14 at time 12:30\nMinimum temperature = 56.7°F on day 14 at time 06:06\nMaximum humidity = 90% on day 14 at time 22:56\nMinimum humidity = 43% on day 14 at time 10:36\nMaximum pressure = 29.981 in. on day 14 at time 01:24\nMinimum pressure = 29.833 in. on day 14 at time 16:52\nMaximum windspeed = 11.5 mph on day 14 at time 16:37\nMaximum gust speed = 18 mph from 113 °(ESE) on day 14 at time 02:29\nMaximum heat index = 77.7°F on day 14 at time 08:30\n\nAverages\\Extremes for day :15\n------------------------------------------------------------\n\nAverage temperature = 61.3°F\nAverage humidity = 72%\nAverage dewpoint = 51.9°F\nAverage barometer = 29.8 in.\nAverage windspeed = 1.2 mph\nAverage gustspeed = 2.2 mph\nAverage direction = 233° ( SW)\nRainfall for month = 1.93 in.\nRainfall for year = 7.36 in.\nRainfall for day = 0.24 in.\nMaximum rain per minute = 0.04 in. 
on day 15 at time 14:45\nMaximum temperature = 74.7°F on day 15 at time 13:40\nMinimum temperature = 53.2°F on day 15 at time 03:25\nMaximum humidity = 94% on day 15 at time 15:48\nMinimum humidity = 42% on day 15 at time 13:40\nMaximum pressure = 29.863 in. on day 15 at time 01:25\nMinimum pressure = 29.774 in. on day 15 at time 18:40\nMaximum windspeed = 9.2 mph on day 15 at time 03:53\nMaximum gust speed = 13 mph from 225 °( SW) on day 15 at time 03:50\nMaximum heat index = 78.0°F on day 15 at time 18:19\n\nAverages\\Extremes for day :16\n------------------------------------------------------------\n\nAverage temperature = 62.8°F\nAverage humidity = 67%\nAverage dewpoint = 51.0°F\nAverage barometer = 29.9 in.\nAverage windspeed = 2.9 mph\nAverage gustspeed = 4.3 mph\nAverage direction = 247° (WSW)\nRainfall for month = 2.01 in.\nRainfall for year = 7.44 in.\nRainfall for day = 0.08 in.\nMaximum rain per minute = 0.04 in. on day 16 at time 17:09\nMaximum temperature = 72.7°F on day 16 at time 13:17\nMinimum temperature = 55.4°F on day 16 at time 03:57\nMaximum humidity = 88% on day 16 at time 04:00\nMinimum humidity = 41% on day 16 at time 13:21\nMaximum pressure = 29.922 in. on day 16 at time 23:57\nMinimum pressure = 29.833 in. on day 16 at time 06:29\nMaximum windspeed = 8.1 mph on day 16 at time 23:54\nMaximum gust speed = 16 mph from 225 °( SW) on day 16 at time 10:44\nMaximum heat index = 78.4°F on day 16 at time 08:57\n\nAverages\\Extremes for day :17\n------------------------------------------------------------\n\nAverage temperature = 63.6°F\nAverage humidity = 71%\nAverage dewpoint = 53.4°F\nAverage barometer = 29.9 in.\nAverage windspeed = 2.3 mph\nAverage gustspeed = 3.5 mph\nAverage direction = 162° (SSE)\nRainfall for month = 2.32 in.\nRainfall for year = 7.76 in.\nRainfall for day = 0.31 in.\nMaximum rain per minute = 0.12 in. on day 17 at time 15:29\nMaximum temperature = 76.1°F on day 17 at time 13:47\nMinimum temperature = 55.8°F on day 17 at time 02:53\nMaximum humidity = 94% on day 17 at time 21:37\nMinimum humidity = 42% on day 17 at time 09:20\nMaximum pressure = 29.922 in. on day 17 at time 00:25\nMinimum pressure = 29.745 in. on day 17 at time 23:36\nMaximum windspeed = 9.2 mph on day 17 at time 23:26\nMaximum gust speed = 13 mph from 225 °( SW) on day 17 at time 23:24\nMaximum heat index = 78.1°F on day 17 at time 13:46\n\nAverages\\Extremes for day :18\n------------------------------------------------------------\n\nAverage temperature = 62.3°F\nAverage humidity = 76%\nAverage dewpoint = 54.0°F\nAverage barometer = 29.8 in.\nAverage windspeed = 1.5 mph\nAverage gustspeed = 2.6 mph\nAverage direction = 255° (WSW)\nRainfall for month = 2.72 in.\nRainfall for year = 8.15 in.\nRainfall for day = 0.39 in.\nMaximum rain per minute = 0.08 in. on day 18 at time 17:18\nMaximum temperature = 72.1°F on day 18 at time 12:54\nMinimum temperature = 54.5°F on day 18 at time 03:51\nMaximum humidity = 94% on day 18 at time 23:45\nMinimum humidity = 48% on day 18 at time 12:42\nMaximum pressure = 29.863 in. on day 18 at time 23:57\nMinimum pressure = 29.715 in. 
on day 18 at time 05:41\nMaximum windspeed = 8.1 mph on day 18 at time 02:49\nMaximum gust speed = 10 mph from 225 °( SW) on day 18 at time 02:52\nMaximum heat index = 77.1°F on day 18 at time 11:04\n\nAverages\\Extremes for day :19\n------------------------------------------------------------\n\nAverage temperature = 65.5°F\nAverage humidity = 57%\nAverage dewpoint = 48.5°F\nAverage barometer = 29.9 in.\nAverage windspeed = 1.9 mph\nAverage gustspeed = 3.3 mph\nAverage direction = 229° ( SW)\nRainfall for month = 2.72 in.\nRainfall for year = 8.15 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 19 at time 00:00\nMaximum temperature = 73.7°F on day 19 at time 16:52\nMinimum temperature = 55.5°F on day 19 at time 02:48\nMaximum humidity = 92% on day 19 at time 00:07\nMinimum humidity = 34% on day 19 at time 16:55\nMaximum pressure = 29.892 in. on day 19 at time 12:25\nMinimum pressure = 29.833 in. on day 19 at time 04:40\nMaximum windspeed = 5.8 mph on day 19 at time 19:49\nMaximum gust speed = 9 mph from 090 °( E ) on day 19 at time 04:24\nMaximum heat index = 79.1°F on day 19 at time 08:03\n\nAverages\\Extremes for day :20\n------------------------------------------------------------\n\nAverage temperature = 74.3°F\nAverage humidity = 43%\nAverage dewpoint = 49.6°F\nAverage barometer = 29.7 in.\nAverage windspeed = 5.4 mph\nAverage gustspeed = 8.1 mph\nAverage direction = 177° ( S )\nRainfall for month = 2.72 in.\nRainfall for year = 8.15 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 20 at time 23:57\nMaximum temperature = 84.2°F on day 20 at time 11:15\nMinimum temperature = 60.6°F on day 20 at time 00:00\nMaximum humidity = 72% on day 20 at time 19:31\nMinimum humidity = 30% on day 20 at time 13:50\nMaximum pressure = 29.863 in. on day 20 at time 00:05\nMinimum pressure = 29.656 in. on day 20 at time 23:57\nMaximum windspeed = 16.1 mph on day 20 at time 07:11\nMaximum gust speed = 23 mph from 225 °( SW) on day 20 at time 07:00\nMaximum heat index = 82.4°F on day 20 at time 11:10\n\nAverages\\Extremes for day :21\n------------------------------------------------------------\n\nAverage temperature = 72.3°F\nAverage humidity = 44%\nAverage dewpoint = 48.4°F\nAverage barometer = 29.7 in.\nAverage windspeed = 3.9 mph\nAverage gustspeed = 5.9 mph\nAverage direction = 229° ( SW)\nRainfall for month = 2.72 in.\nRainfall for year = 8.15 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 21 at time 23:57\nMaximum temperature = 81.1°F on day 21 at time 13:28\nMinimum temperature = 63.0°F on day 21 at time 23:22\nMaximum humidity = 75% on day 21 at time 23:19\nMinimum humidity = 29% on day 21 at time 13:40\nMaximum pressure = 29.774 in. on day 21 at time 23:57\nMinimum pressure = 29.656 in. on day 21 at time 06:55\nMaximum windspeed = 19.6 mph on day 21 at time 08:34\nMaximum gust speed = 26 mph from 225 °( SW) on day 21 at time 08:33\nMaximum heat index = 80.0°F on day 21 at time 12:52\n\nAverages\\Extremes for day :22\n------------------------------------------------------------\n\nAverage temperature = 72.0°F\nAverage humidity = 38%\nAverage dewpoint = 42.8°F\nAverage barometer = 29.8 in.\nAverage windspeed = 4.4 mph\nAverage gustspeed = 6.5 mph\nAverage direction = 236° (WSW)\nRainfall for month = 2.72 in.\nRainfall for year = 8.15 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. 
on day 22 at time 23:57\nMaximum temperature = 84.0°F on day 22 at time 14:00\nMinimum temperature = 58.5°F on day 22 at time 06:00\nMaximum humidity = 70% on day 22 at time 00:32\nMinimum humidity = 19% on day 22 at time 17:43\nMaximum pressure = 29.922 in. on day 22 at time 23:57\nMinimum pressure = 29.745 in. on day 22 at time 04:40\nMaximum windspeed = 15.0 mph on day 22 at time 14:23\nMaximum gust speed = 25 mph from 225 °( SW) on day 22 at time 14:21\nMaximum heat index = 81.3°F on day 22 at time 14:00\n\nAverages\\Extremes for day :23\n------------------------------------------------------------\n\nAverage temperature = 65.8°F\nAverage humidity = 43%\nAverage dewpoint = 41.8°F\nAverage barometer = 30.0 in.\nAverage windspeed = 2.0 mph\nAverage gustspeed = 3.5 mph\nAverage direction = 275° ( W )\nRainfall for month = 2.72 in.\nRainfall for year = 8.15 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 23 at time 23:57\nMaximum temperature = 73.6°F on day 23 at time 16:52\nMinimum temperature = 56.1°F on day 23 at time 01:16\nMaximum humidity = 73% on day 23 at time 23:25\nMinimum humidity = 28% on day 23 at time 04:01\nMaximum pressure = 30.070 in. on day 23 at time 14:09\nMinimum pressure = 29.892 in. on day 23 at time 02:54\nMaximum windspeed = 10.4 mph on day 23 at time 03:47\nMaximum gust speed = 13 mph from 225 °( SW) on day 23 at time 03:46\nMaximum heat index = 78.3°F on day 23 at time 08:57\n\nAverages\\Extremes for day :24\n------------------------------------------------------------\n\nAverage temperature = 70.6°F\nAverage humidity = 55%\nAverage dewpoint = 51.7°F\nAverage barometer = 30.0 in.\nAverage windspeed = 1.0 mph\nAverage gustspeed = 2.1 mph\nAverage direction = 11° ( N )\nRainfall for month = 2.72 in.\nRainfall for year = 8.15 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 24 at time 00:00\nMaximum temperature = 83.1°F on day 24 at time 16:51\nMinimum temperature = 56.5°F on day 24 at time 05:35\nMaximum humidity = 84% on day 24 at time 06:03\nMinimum humidity = 23% on day 24 at time 16:48\nMaximum pressure = 30.040 in. on day 24 at time 00:00\nMinimum pressure = 29.951 in. on day 24 at time 10:10\nMaximum windspeed = 5.8 mph on day 24 at time 23:41\nMaximum gust speed = 8 mph from 113 °(ESE) on day 24 at time 01:19\nMaximum heat index = 82.0°F on day 24 at time 14:11\n\nAverages\\Extremes for day :25\n------------------------------------------------------------\n\nAverage temperature = 75.3°F\nAverage humidity = 49%\nAverage dewpoint = 54.0°F\nAverage barometer = 30.0 in.\nAverage windspeed = 1.5 mph\nAverage gustspeed = 2.6 mph\nAverage direction = 151° (SSE)\nRainfall for month = 2.72 in.\nRainfall for year = 8.15 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 25 at time 00:00\nMaximum temperature = 90.9°F on day 25 at time 15:42\nMinimum temperature = 58.3°F on day 25 at time 05:57\nMaximum humidity = 69% on day 25 at time 06:26\nMinimum humidity = 30% on day 25 at time 15:12\nMaximum pressure = 30.040 in. on day 25 at time 03:10\nMinimum pressure = 29.863 in. 
on day 25 at time 23:14\nMaximum windspeed = 11.5 mph on day 25 at time 23:54\nMaximum gust speed = 17 mph from 225 °( SW) on day 25 at time 23:53\nMaximum heat index = 90.9°F on day 25 at time 15:42\n\nAverages\\Extremes for day :26\n------------------------------------------------------------\n\nAverage temperature = 71.5°F\nAverage humidity = 59%\nAverage dewpoint = 54.8°F\nAverage barometer = 29.8 in.\nAverage windspeed = 2.9 mph\nAverage gustspeed = 4.6 mph\nAverage direction = 233° ( SW)\nRainfall for month = 3.07 in.\nRainfall for year = 8.50 in.\nRainfall for day = 0.35 in.\nMaximum rain per minute = 0.08 in. on day 26 at time 13:07\nMaximum temperature = 85.1°F on day 26 at time 12:12\nMinimum temperature = 60.6°F on day 26 at time 23:57\nMaximum humidity = 92% on day 26 at time 13:41\nMinimum humidity = 32% on day 26 at time 01:09\nMaximum pressure = 29.951 in. on day 26 at time 23:57\nMinimum pressure = 29.804 in. on day 26 at time 04:00\nMaximum windspeed = 15.0 mph on day 26 at time 02:27\nMaximum gust speed = 22 mph from 203 °(SSW) on day 26 at time 03:08\nMaximum heat index = 84.3°F on day 26 at time 12:10\n\nAverages\\Extremes for day :27\n------------------------------------------------------------\n\nAverage temperature = 64.1°F\nAverage humidity = 51%\nAverage dewpoint = 43.3°F\nAverage barometer = 30.1 in.\nAverage windspeed = 2.3 mph\nAverage gustspeed = 3.6 mph\nAverage direction = 285° (WNW)\nRainfall for month = 3.07 in.\nRainfall for year = 8.50 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 27 at time 23:57\nMaximum temperature = 71.6°F on day 27 at time 14:24\nMinimum temperature = 54.4°F on day 27 at time 23:56\nMaximum humidity = 91% on day 27 at time 05:59\nMinimum humidity = 26% on day 27 at time 18:54\nMaximum pressure = 30.158 in. on day 27 at time 23:57\nMinimum pressure = 29.951 in. on day 27 at time 04:24\nMaximum windspeed = 6.9 mph on day 27 at time 23:35\nMaximum gust speed = 10 mph from 225 °( SW) on day 27 at time 13:26\nMaximum heat index = 78.7°F on day 27 at time 07:26\n\nAverages\\Extremes for day :28\n------------------------------------------------------------\n\nAverage temperature = 70.7°F\nAverage humidity = 32%\nAverage dewpoint = 38.2°F\nAverage barometer = 30.1 in.\nAverage windspeed = 3.4 mph\nAverage gustspeed = 4.8 mph\nAverage direction = 229° ( SW)\nRainfall for month = 3.07 in.\nRainfall for year = 8.50 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 28 at time 23:50\nMaximum temperature = 82.4°F on day 28 at time 15:02\nMinimum temperature = 52.5°F on day 28 at time 00:51\nMaximum humidity = 54% on day 28 at time 00:45\nMinimum humidity = 21% on day 28 at time 16:20\nMaximum pressure = 30.158 in. on day 28 at time 01:10\nMinimum pressure = 29.981 in. on day 28 at time 18:55\nMaximum windspeed = 9.2 mph on day 28 at time 05:15\nMaximum gust speed = 13 mph from 225 °( SW) on day 28 at time 09:32\nMaximum heat index = 80.2°F on day 28 at time 15:02\n\nAverages\\Extremes for day :29\n------------------------------------------------------------\n\nAverage temperature = 76.5°F\nAverage humidity = 36%\nAverage dewpoint = 46.4°F\nAverage barometer = 29.9 in.\nAverage windspeed = 3.2 mph\nAverage gustspeed = 4.9 mph\nAverage direction = 218° ( SW)\nRainfall for month = 3.07 in.\nRainfall for year = 8.50 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. 
on day 29 at time 23:50\nMaximum temperature = 91.2°F on day 29 at time 12:38\nMinimum temperature = 61.4°F on day 29 at time 01:26\nMaximum humidity = 54% on day 29 at time 23:30\nMinimum humidity = 18% on day 29 at time 13:12\nMaximum pressure = 29.981 in. on day 29 at time 23:50\nMinimum pressure = 29.922 in. on day 29 at time 18:25\nMaximum windspeed = 10.4 mph on day 29 at time 04:20\nMaximum gust speed = 14 mph from 203 °(SSW) on day 29 at time 08:42\nMaximum heat index = 87.3°F on day 29 at time 12:38\n\nAverages\\Extremes for day :30\n------------------------------------------------------------\n\nAverage temperature = 71.3°F\nAverage humidity = 50%\nAverage dewpoint = 50.7°F\nAverage barometer = 30.0 in.\nAverage windspeed = 2.1 mph\nAverage gustspeed = 3.3 mph\nAverage direction = 187° ( S )\nRainfall for month = 3.07 in.\nRainfall for year = 8.50 in.\nRainfall for day = 0.00 in.\nMaximum rain per minute = 0.00 in. on day 30 at time 23:50\nMaximum temperature = 87.4°F on day 30 at time 14:45\nMinimum temperature = 57.9°F on day 30 at time 03:35\nMaximum humidity = 74% on day 30 at time 05:07\nMinimum humidity = 29% on day 30 at time 17:36\nMaximum pressure = 30.040 in. on day 30 at time 23:50\nMinimum pressure = 29.892 in. on day 30 at time 21:40\nMaximum windspeed = 12.7 mph on day 30 at time 20:54\nMaximum gust speed = 15 mph from 225 °( SW) on day 30 at time 21:33\nMaximum heat index = 85.9°F on day 30 at time 14:52\n\n---------------------------------------------------------------------------------------------\nAverages\\Extremes for the month of June 2009\n\n---------------------------------------------------------------------------------------------\nAverage temperature = 61.3°F\nAverage humidity = 63%\nAverage dewpoint = 46.5°F\nAverage barometer = 29.940 in.\nAverage windspeed = 2.2 mph\nAverage gustspeed = 3.6 mph\nAverage direction = 183° ( S )\nRainfall for month = 3.071 in.\nRainfall for year = 8.504 in.\nMaximum rain per minute = 0.118 in on day 17 at time 15:29\nMaximum temperature = 91.2°F on day 29 at time 12:38\nMinimum temperature = 38.4°F on day 07 at time 22:03\nMaximum humidity = 100% on day 11 at time 06:11\nMinimum humidity = 18% on day 29 at time 13:12\nMaximum pressure = 30.29 in. on day 02 at time 14:55\nMinimum pressure = 29.66 in. on day 21 at time 06:55\nMaximum windspeed = 19.6 mph from 225°( SW) on day 21 at time 08:34\nMaximum gust speed = 26.5 mph from 225°( SW) on day 21 at time 08:33\nMaximum heat index = 90.9°F on day 25 at time 15:42\nAvg daily max temp :71.2°F\nAvg daily min temp :51.7°F\nTotal windrun = 1571.7miles\n-----------------------------------\nDaily rain totals\n-----------------------------------\n00.04 in. on day 1\n00.16 in. on day 2\n00.20 in. on day 3\n00.31 in. on day 7\n00.47 in. on day 9\n00.43 in. on day 10\n00.04 in. on day 11\n00.04 in. on day 14\n00.24 in. on day 15\n00.08 in. on day 16\n00.31 in. on day 17\n00.39 in. on day 18\n00.35 in. on day 26\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7565368,"math_prob":0.9989859,"size":30322,"snap":"2021-43-2021-49","text_gpt3_token_len":10293,"char_repetition_ratio":0.3639422,"word_repetition_ratio":0.41239315,"special_character_ratio":0.46428335,"punctuation_ratio":0.16154608,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.976924,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T08:09:53Z\",\"WARC-Record-ID\":\"<urn:uuid:f46b8e5d-04e8-443f-859a-96133d68676a>\",\"Content-Length\":\"44141\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:849723eb-cfcc-40ca-8906-067b0f83c4b7>\",\"WARC-Concurrent-To\":\"<urn:uuid:c0455fe3-adaa-4f4b-aeaa-6d80e55b8708>\",\"WARC-IP-Address\":\"64.179.134.118\",\"WARC-Target-URI\":\"http://taja.dynip.com/June2009.htm\",\"WARC-Payload-Digest\":\"sha1:TBDCWVC3XZJRDLHYZNGKI3KIPTSBNHMS\",\"WARC-Block-Digest\":\"sha1:4DDV547REWS253FPQEZ6P53NDZ5RFD6D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585653.49_warc_CC-MAIN-20211023064718-20211023094718-00257.warc.gz\"}"}
https://mathhelpboards.com/threads/proving-continuity-with-sequences.4857/
[ "# Proving continuity with sequences\n\n#### Carla1985\n\n##### Member\n$Prove\\ using\\ the\\ sequence\\ definition\\ that\\ f(x)=10x^2\\ is\\ continuous\\ at\\ x_0=0\\\\ I\\ have:\\ take\\ any\\ sequence\\ x_n\\ converging\\ to\\ 0.\\ Then\\ f(x_n)=10x_n^2\\ converges\\ to\\ f(x_0)=10*0^2=0\\ so\\ it\\ is\\ continuous.\\\\ is\\ that\\ sufficient\\ for\\ the\\ proof?$\nThankyou\n\n#### Prove It\n\n##### Well-known member\nMHB Math Helper\n$Prove\\ using\\ the\\ sequence\\ definition\\ that\\ f(x)=10x^2\\ is\\ continuous\\ at\\ x_0=0\\\\ I\\ have:\\ take\\ any\\ sequence\\ x_n\\ converging\\ to\\ 0.\\ Then\\ f(x_n)=10x_n^2\\ converges\\ to\\ f(x_0)=10*0^2=0\\ so\\ it\\ is\\ continuous.\\\\ is\\ that\\ sufficient\\ for\\ the\\ proof?$\nThankyou\nI'm not sure what you mean by the sequence definition (I have not done Real Analysis in a while) but to prove continuity here I would simply show that \\displaystyle \\displaystyle \\begin{align*} |x - 0 | < \\delta \\implies \\left| 10x^2 - 0 \\right| < \\epsilon \\end{align*}.\n\nWorking on the second inequality we find\n\n\\displaystyle \\displaystyle \\begin{align*} \\left| 10x^2 - 0 \\right| &< \\epsilon \\\\ \\left| 10x^2 \\right| &< \\epsilon \\\\ 10 |x| ^2 &< \\epsilon \\\\ |x| ^2 &< \\frac{\\epsilon}{10} \\\\ |x| &< \\sqrt{ \\frac{\\epsilon}{10} } \\\\ |x - 0| &< \\sqrt{ \\frac{\\epsilon}{10} } \\end{align*}\n\nSo let \\displaystyle \\displaystyle \\begin{align*} \\delta = \\sqrt{ \\frac{\\epsilon}{10} } \\end{align*} and reverse the process to complete your proof.\n\n#### Carla1985\n\n##### Member\nOur continuity definition has two parts:\n\nA function f : D → R, D ⊂ R is continuous at x ∈ D\nif either of the following equivalent conditions holds:\n(i) for every ε > 0 there exists δ = δ(ε) > 0 such that for y ∈ D\ny−x|<δ implies |f(y)−f(x)|<ε;\n(ii) for every sequence (xn)n∈N, xn ∈ D, converging to x ∈ D it follows that (f (xn ))n∈N converges to f (x), i.e. limn→∞ xn = x implies limn→∞ f(xn) = f(x)\n\nI think for this question we have to use the second part as I've already done the ones using the first part", null, "#### chisigma\n\n##### Well-known member\nOur continuity definition has two parts:\n\nA function f : D → R, D ⊂ R is continuous at x ∈ D\nif either of the following equivalent conditions holds:\n(i) for every ε > 0 there exists δ = δ(ε) > 0 such that for y ∈ D\ny−x|<δ implies |f(y)−f(x)|<ε;\n(ii) for every sequence (xn)n∈N, xn ∈ D, converging to x ∈ D it follows that (f (xn ))n∈N converges to f (x), i.e. limn→∞ xn = x implies limn→∞ f(xn) = f(x)\n\nI think for this question we have to use the second part as I've already done the ones using the first part", null, "A well known theorem on sequences extablishes that $\\lim_{n \\rightarrow \\infty} f(x_{n})= f(x_{0})$ if and only if $\\lim_{ n \\rightarrow \\infty} x_{n}=x_{0}$ and f(x) is continous in $x=x_{0}$...\n\nKind regards\n\n$\\chi$ $\\sigma$\n\n#### Fantini\n\nMHB Math Helper\n$Prove\\ using\\ the\\ sequence\\ definition\\ that\\ f(x)=10x^2\\ is\\ continuous\\ at\\ x_0=0\\\\ I\\ have:\\ take\\ any\\ sequence\\ x_n\\ converging\\ to\\ 0.\\ Then\\ f(x_n)=10x_n^2\\ converges\\ to\\ f(x_0)=10*0^2=0\\ so\\ it\\ is\\ continuous.\\\\ is\\ that\\ sufficient\\ for\\ the\\ proof?$\nThankyou\nHello Carla! Let us try. We want to prove that the sequence $(f(x_n))$ converges to $f(0)$ for any sequence $(x_n)$ converging to $0$. Let $\\varepsilon >0$. 
From the convergence of $(x_n)$ we know that there is an $N_0 \in \mathbb{N}$ such that for all $n \geq N_0$ we have $|x_n - 0| < \varepsilon$.\n\nUsing our definitions, we know that $f(x_n) = 10x_n^2$ and $f(0) = 0$, therefore we have $|f(x_n) - f(0)| = |10 x_n^2|$. We want to conclude that this is less than $\varepsilon$, so it is desirable to have $|x_n^2| < \frac{\varepsilon}{10}$, i.e. $|x_n| < \sqrt{\frac{\varepsilon}{10}}$.\n\nThis is where we use the convergence of the sequence $(x_n)$. Since it converges, we can take an $N \in \mathbb{N}$ such that $|x_n| < \sqrt{ \frac{\varepsilon}{10} }$ for all $n \geq N$. This argument works because the convergence holds for every positive tolerance, in particular this one.", null, "Putting it all together: using the convergence of the sequence $(x_n)$, take $N \in \mathbb{N}$ such that $|x_n| < \sqrt{ \frac{\varepsilon}{10} }$ for all $n \geq N$. It follows that $$|f(x_n) - f(0)| = |10x_n^2| < 10 \cdot \left( \sqrt{ \frac{\varepsilon}{10} } \right)^2 = \varepsilon,$$ therefore the sequence $(f(x_n))$ converges to $f(0)$ and the function $f$ is continuous at $0$.", null, "I hope this helps.\n\nRegards.", null, "#### Opalg\n\n##### MHB Oldtimer\nStaff member\nYour proof is correct, apart from needing some justification for the assertion that $f(x_n)=10x_n^2$ converges to $f(x_0)=10\cdot 0^2=0$. If you are allowed to quote theorems about limits of products then you should say that is what you are doing here. Otherwise you will need to use something like Fantini's argument in the previous comment.\n\n#### solakis\n\n##### Active member\n(Replying to chisigma's claimed theorem above.) Such a theorem does not exist\n\n#### Fantini\n\nYes it does. The conditions that $f$ is continuous at $x_0$, that for every convergent sequence $x_n \to x_0$ we have $f(x_n) \to f(x_0)$, and that for all $\varepsilon >0$ there exists $\delta >0$ such that $0 < |x -x_0| < \delta \implies |f(x) - f(x_0)| < \varepsilon$, are all equivalent in $\mathbb{R}$, or more generally, in metric spaces." ]
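As a small numerical illustration of the sequence definition discussed in this thread (my own addition, not one of the original forum posts), one can watch f(x_n) = 10x_n^2 collapse to f(0) = 0 along the sample sequence x_n = 1/n:

```
// Sequence-definition check: x_n = 1/n -> 0 implies f(x_n) = 10*x_n^2 -> 0.
const f = (x) => 10 * x * x;
for (const n of [1, 10, 100, 1000, 10000]) {
  const xn = 1 / n;
  console.log(`n=${n}  x_n=${xn}  f(x_n)=${f(xn)}`);
}
// f(x_n) = 10/n^2, which drops below any epsilon once n > sqrt(10/epsilon).
```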
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "https://mathhelpboards.com/.smileys/Skype Smilies/wave.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6804264,"math_prob":0.9997615,"size":645,"snap":"2021-31-2021-39","text_gpt3_token_len":248,"char_repetition_ratio":0.12324493,"word_repetition_ratio":0.77272725,"special_character_ratio":0.39534885,"punctuation_ratio":0.06837607,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999958,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-26T22:33:54Z\",\"WARC-Record-ID\":\"<urn:uuid:0f50bbb5-2018-4058-9bdb-4221aa4d9e42>\",\"Content-Length\":\"89729\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:896847c3-c437-4a3e-ac2f-6da1a2c34259>\",\"WARC-Concurrent-To\":\"<urn:uuid:10741592-d054-4acd-b8f9-554fd2a43695>\",\"WARC-IP-Address\":\"50.31.99.218\",\"WARC-Target-URI\":\"https://mathhelpboards.com/threads/proving-continuity-with-sequences.4857/\",\"WARC-Payload-Digest\":\"sha1:SJCWD557LV5HW7DM43AXLQJDRIRFSSCW\",\"WARC-Block-Digest\":\"sha1:F3FDFZ2UCVNZKR3KRSO5QHFM7RAP2WDP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057973.90_warc_CC-MAIN-20210926205414-20210926235414-00467.warc.gz\"}"}
https://cs.stackexchange.com/questions/16226/what-is-the-fastest-algorithm-for-multiplication-of-two-n-digit-numbers/16244
[ "# What is the fastest algorithm for multiplication of two n-digit numbers?\n\nI want to know which algorithm is fastest for multiplication of two n-digit numbers? Space complexity can be relaxed here!\n\n• Are you interested in the theoretical question or in the practical question? – Yuval Filmus Oct 19 '13 at 20:03\n• Both, but more inclined towards practical one! – Andy Oct 20 '13 at 6:49\n• For the practical question, I recommend using GMP. If you're curious what they use, look at the documentation or the source code. – Yuval Filmus Oct 20 '13 at 8:04\n• Nobody knows. We haven't found it yet. – JeffE Oct 21 '13 at 2:55\n• It depends. If you are satisfied with an algorithm that can multiply only a very specific class of numbers, look at this algorithm that can multiply two $n$-bit numbers in $O(kn)$, where $k$ related to the Collatz problem. – DaBler May 20 at 12:53\n\nAs of now Fürer's algorithm by Martin Fürer has a time complexity of $n \\log(n)2^{Θ(log*(n))}$ which uses Fourier transforms over complex numbers. His algorithm is actually based on Schönhage and Strassen's algorithm which has a time complexity of $Θ(n\\log(n)\\log(\\log(n)))$\n\nOther algorithms which are faster than Grade School Multiplication algorithm are Karatsuba multiplication which has a time complexity of $O(n^{\\log_{2}3})$ ≈ $O(n^{1.585})$ and Toom 3 algorithm which has a time complexity of $Θ(n^{1.465})$\n\nNote that these are the fast algorithms. Finding fastest algorithm for multiplication is an open problem in Computer Science.\n\nReferences :\n\n• Note the recent paper by D. Harvey and J. van der Hoeven (March 2019) describing an algorithm with $O(n\\ln n)$ complexity. – hardmath Apr 28 '19 at 19:32\n• Karatsuba is really easy mathematically, just one simple formula. Also nice to distribute to multiple processors and to vectorise. – gnasher729 Feb 19 at 10:53\n• @hardmath do you want to move that to an answer to get upvotes :-) – Ciro Santilli 郝海东冠状病六四事件法轮功 Jul 22 at 19:05\n• @Ciro: There's a Question about the practical effects of this at MatterModeling.SE (a beta site I was unaware of) and one of the Answers is quite a good explanation of how large the numbers have to be to get an improvement. – hardmath Jul 22 at 19:26\n• @hardmath OMG, that site is so obscure, should at most be a tag on chemistry or physics. In any case, I still recommend dumping the link to the paper and quick summary. Doesn't matter if useless in practice, paper itself says authors didn't care about being useful in practice. This is Computer Science SE, doesn't have to be useful :-) – Ciro Santilli 郝海东冠状病六四事件法轮功 Jul 22 at 19:32\n\nNote that the FFT algorithms listed by avi add a large constant, making them impractical for numbers less than thousands+ bits.\n\nIn addition to that list, there are some other interesting algorithms, and open questions:\n\n• Linear time multiplication on a RAM model (with precomputation)\n• Multiplication by a Constant is Sublinear (PDF) - this means a sublinear number of additions which gets for a total of $\\mathcal{O}\\left(\\frac {n^2} {\\log n} \\right)$ bit complexity. This is essentially equivalent to long multiplication (where you shift/add based on the number of $1$s in the lower number), which is $\\mathcal{O}\\left({n^2} \\right)$, but with an $\\mathcal{O}\\left(\\log n\\right)$ speedup.\n• Residue number system and other representations of numbers; multiplication is almost linear time. 
The downside is, the multiplication is modular and {overflow detection, parity, magnitude comparison} are all as hard or almost as hard as converting the number back to binary or a similar representation and doing the traditional comparison; this conversion is at least as bad as traditional multiplication (at the moment, AFAIK).\n• Other Representations:\n• Logarithmic representation: multiplication is addition of the logarithmic representation. Example: $$16 \times 32 = 2^{\log_2 16 + \log_2 32} = 2^{4+5} = 2^{9}$$\n• Downside is that conversion to and from the logarithmic representation can be as hard as multiplication or harder, the representation can also be fractional/irrational/approximate etc. Other operations (addition?) are likely more difficult.\n• Canonical representation: represent the numbers as the exponents of the prime factorization. Multiplication is addition of the exponents. Example: $$36 \times 48 = (2^{2}\cdot 3^{2})\times (2^{4}\cdot 3^{1}) = 2^{6}\cdot 3^{3}$$\n• Downside is that it requires the factors, or factorization, a much harder problem than multiplication. Other operations such as addition are likely very difficult.\n• I believe a residue/Chinese Remainder Theorem-based approach with the right moduli can lead to speedups over traditional multiplication even with the conversion back; at some point this was in chapter 4 of TAOCP, at least as a footnote. (It still doesn't get near the FFT-based methods, but it's an interesting historical note) – Steven Stadnicki Oct 20 '13 at 17:15\n• @StevenStadnicki oh cool, I need to look at that then; do you happen to know the complexity? – Realz Slaw Oct 20 '13 at 23:54\n\nIf space and the amount of hardware are no concern, then you can do what most CPUs do: For two n-bit numbers, use n^2 AND gates to produce n^2 zeroes and ones, then use n^2 half adders to reduce the number of values by 1/3, and do that again until you can get the final result with one set of full adders.\n\nTime = O(log n), hardware cost = O(n^2). Could realistically be done today for n = 256, but there isn't that much demand." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90274036,"math_prob":0.9841752,"size":5895,"snap":"2020-34-2020-40","text_gpt3_token_len":1574,"char_repetition_ratio":0.1453064,"word_repetition_ratio":0.06702127,"special_character_ratio":0.26836303,"punctuation_ratio":0.09825528,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99860436,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T20:48:27Z\",\"WARC-Record-ID\":\"<urn:uuid:f4b8f30f-d4d9-496c-babe-d6d1f75a7163>\",\"Content-Length\":\"180966\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f03f40f4-d016-41fb-8768-f12144d6c363>\",\"WARC-Concurrent-To\":\"<urn:uuid:c0f6fc9f-0c18-4b8f-a7f0-c344076ce462>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/16226/what-is-the-fastest-algorithm-for-multiplication-of-two-n-digit-numbers/16244\",\"WARC-Payload-Digest\":\"sha1:EAMBCEYPVVQEDCGRP34LBBQNICVZFESJ\",\"WARC-Block-Digest\":\"sha1:YANXAB4THCJ3D2CM632KG7ZGEEVBWSYH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401578485.67_warc_CC-MAIN-20200927183616-20200927213616-00351.warc.gz\"}"}
https://convertoctopus.com/149-5-ounces-to-kilograms
[ "## Conversion formula\n\nThe conversion factor from ounces to kilograms is 0.028349523125, which means that 1 ounce is equal to 0.028349523125 kilograms:\n\n1 oz = 0.028349523125 kg\n\nTo convert 149.5 ounces into kilograms we have to multiply 149.5 by the conversion factor in order to get the mass amount from ounces to kilograms. We can also form a simple proportion to calculate the result:\n\n1 oz → 0.028349523125 kg\n\n149.5 oz → M(kg)\n\nSolve the above proportion to obtain the mass M in kilograms:\n\nM(kg) = 149.5 oz × 0.028349523125 kg\n\nM(kg) = 4.2382537071875 kg\n\nThe final result is:\n\n149.5 oz → 4.2382537071875 kg\n\nWe conclude that 149.5 ounces is equivalent to 4.2382537071875 kilograms:\n\n149.5 ounces = 4.2382537071875 kilograms\n\n## Alternative conversion\n\nWe can also convert by utilizing the inverse value of the conversion factor. In this case 1 kilogram is equal to 0.23594623377646 × 149.5 ounces.\n\nAnother way is saying that 149.5 ounces is equal to 1 ÷ 0.23594623377646 kilograms.\n\n## Approximate result\n\nFor practical purposes we can round our final result to an approximate numerical value. We can say that one hundred forty-nine point five ounces is approximately four point two three eight kilograms:\n\n149.5 oz ≅ 4.238 kg\n\nAn alternative is also that one kilogram is approximately zero point two three six times one hundred forty-nine point five ounces.\n\n## Conversion table\n\n### ounces to kilograms chart\n\nFor quick reference purposes, below is the conversion table you can use to convert from ounces to kilograms\n\nounces (oz) kilograms (kg)\n150.5 ounces 4.267 kilograms\n151.5 ounces 4.295 kilograms\n152.5 ounces 4.323 kilograms\n153.5 ounces 4.352 kilograms\n154.5 ounces 4.38 kilograms\n155.5 ounces 4.408 kilograms\n156.5 ounces 4.437 kilograms\n157.5 ounces 4.465 kilograms\n158.5 ounces 4.493 kilograms\n159.5 ounces 4.522 kilograms" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73737466,"math_prob":0.9956712,"size":1843,"snap":"2023-14-2023-23","text_gpt3_token_len":524,"char_repetition_ratio":0.22838499,"word_repetition_ratio":0.007017544,"special_character_ratio":0.36190993,"punctuation_ratio":0.14698163,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99288416,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T18:16:32Z\",\"WARC-Record-ID\":\"<urn:uuid:bea8ee58-255b-42f4-a220-7579b1c95022>\",\"Content-Length\":\"26072\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aef96557-6a06-4764-9140-e6be82e38e0f>\",\"WARC-Concurrent-To\":\"<urn:uuid:bc9275ae-5ce0-4d4e-9262-6d9dc7ac7b54>\",\"WARC-IP-Address\":\"104.21.29.10\",\"WARC-Target-URI\":\"https://convertoctopus.com/149-5-ounces-to-kilograms\",\"WARC-Payload-Digest\":\"sha1:RECP5L4XQ5FYTQC3EWQISZZ37CHSSIRW\",\"WARC-Block-Digest\":\"sha1:BOY6R4NOLPT7XO3HJZNDEP7GK2DU2XF5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224656788.77_warc_CC-MAIN-20230609164851-20230609194851-00062.warc.gz\"}"}
https://metanumbers.com/50857
[ "## 50857\n\n50,857 (fifty thousand eight hundred fifty-seven) is an odd five-digits prime number following 50856 and preceding 50858. In scientific notation, it is written as 5.0857 × 104. The sum of its digits is 25. It has a total of 1 prime factor and 2 positive divisors. There are 50,856 positive integers (up to 50857) that are relatively prime to 50857.\n\n## Basic properties\n\n• Is Prime? Yes\n• Number parity Odd\n• Number length 5\n• Sum of Digits 25\n• Digital Root 7\n\n## Name\n\nShort name 50 thousand 857 fifty thousand eight hundred fifty-seven\n\n## Notation\n\nScientific notation 5.0857 × 104 50.857 × 103\n\n## Prime Factorization of 50857\n\nPrime Factorization 50857\n\nPrime number\nDistinct Factors Total Factors Radical ω(n) 1 Total number of distinct prime factors Ω(n) 1 Total number of prime factors rad(n) 50857 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) -1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 10.8368 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 50,857 is 50857. Since it has a total of 1 prime factor, 50,857 is a prime number.\n\n## Divisors of 50857\n\n2 divisors\n\n Even divisors 0 2 2 0\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 2 Total number of the positive divisors of n σ(n) 50858 Sum of all the positive divisors of n s(n) 1 Sum of the proper positive divisors of n A(n) 25429 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 225.515 Returns the nth root of the product of n divisors H(n) 1.99996 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 50,857 can be divided by 2 positive divisors (out of which 0 are even, and 2 are odd). The sum of these divisors (counting 50,857) is 50,858, the average is 25,429.\n\n## Other Arithmetic Functions (n = 50857)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 50856 Total number of positive integers not greater than n that are coprime to n λ(n) 50856 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 5209 Total number of primes less than or equal to n r2(n) 8 The number of ways n can be represented as the sum of 2 squares\n\nThere are 50,856 positive integers (less than 50,857) that are coprime with 50,857. 
And there are approximately 5,209 prime numbers less than or equal to 50,857.\n\n## Divisibility of 50857\n\n m n mod m 2 3 4 5 6 7 8 9 1 1 1 2 1 2 1 7\n\n50,857 is not divisible by any number less than or equal to 9.\n\n• Arithmetic\n• Prime\n• Deficient\n\n• Polite\n\n• Prime Power\n• Square Free\n\n## Base conversion (50857)\n\nBase System Value\n2 Binary 1100011010101001\n3 Ternary 2120202121\n4 Quaternary 30122221\n5 Quinary 3111412\n6 Senary 1031241\n8 Octal 143251\n10 Decimal 50857\n12 Duodecimal 25521\n20 Vigesimal 672h\n36 Base36 138p\n\n## Basic calculations (n = 50857)\n\n### Multiplication\n\nn×i\n n×2 101714 152571 203428 254285\n\n### Division\n\nni\n n⁄2 25428.5 16952.3 12714.2 10171.4\n\n### Exponentiation\n\nni\n n2 2586434449 131538296772793 6689643158973933601 340215182135937341146057\n\n### Nth Root\n\ni√n\n 2√n 225.515 37.0496 15.0172 8.73515\n\n## 50857 as geometric shapes\n\n### Circle\n\n Diameter 101714 319544 8.12552e+09\n\n### Sphere\n\n Volume 5.50986e+14 3.25021e+10 319544\n\n### Square\n\nLength = n\n Perimeter 203428 2.58643e+09 71922.7\n\n### Cube\n\nLength = n\n Surface area 1.55186e+10 1.31538e+14 88086.9\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 152571 1.11996e+09 44043.5\n\n### Triangular Pyramid\n\nLength = n\n Surface area 4.47984e+09 1.55019e+13 41524.6\n\n## Cryptographic Hash Functions\n\nmd5 64d28904d3ab5462b1a8af44857c151b c8d7376968629a0c613394fa3a50fc5056363611 13ccea035314af840bf2eab30f32c6a868a91162caf74a71cc15e987b3276c77 1bb6dcf20a808a78cf4a0399cd15a83d231e690a96fb44dd9d2f7b64b9aed5f8d46000511fd96bed0e764656da6643cb342f428acc610d3d051623a14a969062 57c4c9901b15aa5a12abe82c30b918c6186f25cc" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6308013,"math_prob":0.9836672,"size":4529,"snap":"2020-34-2020-40","text_gpt3_token_len":1591,"char_repetition_ratio":0.12198895,"word_repetition_ratio":0.029717682,"special_character_ratio":0.45617133,"punctuation_ratio":0.076129034,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9967536,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-13T08:43:37Z\",\"WARC-Record-ID\":\"<urn:uuid:6f0a8fd2-bf30-4ed8-973f-133172474545>\",\"Content-Length\":\"47797\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9ac8f98e-97a6-4be9-a41f-74c5803a5592>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ade3810-a484-450c-a3ae-0f538c4808f1>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/50857\",\"WARC-Payload-Digest\":\"sha1:RNOSBOZHIMEZ6I6A5PBEL7OOCWPI5REB\",\"WARC-Block-Digest\":\"sha1:WI2J5QZPQQSYTC4NN2XHWSIB2QNT3PMW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738964.20_warc_CC-MAIN-20200813073451-20200813103451-00592.warc.gz\"}"}
https://allquizanswer.xyz/physics-mcqs-1/22/
[ "# Physics MCQs\n\nPhysics Mcqs for Test Preparation from Basic to Advance. Physics Mcqs are from the different sections of Physics. Here you will find Mcqs of Physics and quantum physics subject from Basic to Advance. Which will help you to get higher marks in Physics subject. These Mcqs are useful for students and job seekers i.e MCAT ECAT ETEA test preparation, PPSC Test, FPSC Test, SPSC Test, KPPSC Test, BPSC Test, PTS, OTS, GTS, JTS, CTS, NTS        .\n\nThe band theory of solids explains satisfactorily the nature of_________________?\n\nA. Electrical insulators alone\nB. Electrical conductors alone\nC. Electrical semi conductors alone\nD. All of the above\n\nA completely filled band is called__________________?\n\nA. Conduction band\nB. Valence band\nC. Forbidden band\nD. Core band\n\nWhich one has the greatest energy gap ?\n\nA. Semi conductor\nB. Conductor\nC. Metals\nD. Non metals\n\nWith increase in temperature the electrical conductivity of intrinsic semi conductor___________________?\n\nA. Decreases\nB. Increases\nC. Remains same\nD. First increases then decreases\n\nOn the basis of band theory of solids the semiconductors have _____________________?\n\nA. A party filled valence band and totally empty conduction band\nB. A completely filled valence band a totally empty conduction band and a very wide forbidden band\nC. A completely filled valence band a partially filled conduction band and a narrow forbidden band\nD. A partly filled valence band a totally empty conduction band and a wide forbidden band\n\nVery weak magnetic fields are detected by___________________?\n\nA. Squids\nB. Magnetic resonance imaging (MRI)\nC. Magnetometer\nD. Oscilloscope\n\nEnergy needed to magnetize and demagnetize is represented by _________________?\n\nA. Hysteresis curve\nB. Hysteresis loop area\nC. Hysteresis loop\nD. Straight line\n\nWhat is the SI unit of modulus of elasticity of substance ?\n\nA. Nm-2\nB. Jm-2\nC. Nm-1\nD. Being a number it has no unit.\n\nA rubber cord of cross-sectional area 2cm2 has a length of 1m. When a tensile force of 10N is applied the length of the cord increases by 1cm. What is the youngs modulus of rubber ?\n\nA. 2×108 Nm-2\nB. 5×106 Nm-2\nC. 0.5×10-6 Nm-2\nD. 0.2×10-6Nm-2\n\nA uniform steel wire of length 4m and area of cross-section 3×10-6m2 is extended by 1mm by the application of a force. If the youngs modulus of steel is 2×1011 Nm-2 the energy stored in the wire is___________________?\n\nA. 0.025J\nB. 0.50J\nC. 0.75J\nD. 0.100J" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77875125,"math_prob":0.9107048,"size":2398,"snap":"2021-43-2021-49","text_gpt3_token_len":664,"char_repetition_ratio":0.16457811,"word_repetition_ratio":0.04071247,"special_character_ratio":0.28440368,"punctuation_ratio":0.15524194,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9680474,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-15T21:26:37Z\",\"WARC-Record-ID\":\"<urn:uuid:1c789a99-25a8-4013-82d2-489d468967bb>\",\"Content-Length\":\"39249\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d517ea4f-812a-4f4f-9477-fbb976318b50>\",\"WARC-Concurrent-To\":\"<urn:uuid:d6231104-3da5-49dd-a7e4-3ae1802a99b9>\",\"WARC-IP-Address\":\"198.37.123.126\",\"WARC-Target-URI\":\"https://allquizanswer.xyz/physics-mcqs-1/22/\",\"WARC-Payload-Digest\":\"sha1:36Q4B25XDVZJ6MBCZJADAYD6PQ6B3S72\",\"WARC-Block-Digest\":\"sha1:6KCWZTMATAQWMCFPWDENYHJZQKPXCHKM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323583083.92_warc_CC-MAIN-20211015192439-20211015222439-00114.warc.gz\"}"}
https://riverware.org/HelpSystem/8.3-Help/Objects/AppendixTableInterpolation.33.4.html
[ "", null, "Objects and Methods : Table Interpolation : Three-dimensional Table Interpolation\nThree-dimensional Table Interpolation\nFor three-dimensional interpolation, the z values define blocks: each block has a constant z value and increasing x values, and the blocks are arranged in order of increasing z value. In other words, the three-dimensional surface is represented by multiple slices or contours in the x-y plane, each of which may be represented by any arbitrary number of data points, just as with ordinary two-dimensional curves. Table B.2 is an example of the proper way to formulate a table for three-dimensional interpolation.\n\nTable B.2  Plant power table for a power reservoir\nTurbine Release (cfs)\nPower (kW)\n100\n0\n0\n100\n10\n2000\n100\n20\n3000\n100\n30\n4000\n200\n0\n0\n200\n10\n2500\n200\n20\n3500\n200\n25\n3800\n200\n30\n4500\n300\n0\n0\n300\n10\n3000\n300\n25\n5000\nFor three-dimensional functions, the algorithm for interpolation has two basic cases, as follows:\n• If the z value being interpolated is equal to the z value for one of the blocks in the table, then we just perform a two-dimensional interpolation along the curve represented by that block.\n• When the z value is not exactly equal to any of the z values found in the table, RiverWare first identifies the constant z-blocks whose values bound the z value being interpolated and performs a two-dimensional interpolation along these curves. This yields two points, one on each bounding constant z-curve, and the final answer is computed by a linear interpolation between these two points. Figure B.2 illustrates this case. We denote a particular approximation using the table by an asterisk: y* = f(x*, z*)\nFigure B.2  Three-dimensional linear interpolation", null, "There is one special case in which the interpolation behavior is slightly different: when the x value being interpolated is within the domain of one of the bracketing constant z curves but not the other. In this case, we interpolate between the encompassing curve and the extrapolation of the other (shorter) curve. We extrapolate this curve with either the slope of its last segment or the slope of the corresponding segment of the encompassing curve, as appropriate for the particular table. To avoid overambitious extrapolation, RiverWare requires that the answer lie in the region bounded by the constant-z curves (that is, their convex hull). Figure B.3 illustrates this case, where the short curve is extrapolated with the slope of its last segment.\nFigure B.3  Three-dimensional linear interpolation", null, "Three-dimensional Table Interpolation Errors\nThe following types of errors may be reported during three-dimensional table interpolation:\n• Invalid value (data error): an x, y, or z value is invalid (xi = NaN, yi = NaN, or zi = NaN for some i).\n• Non-increasing z (data error): the z values are not increasing for one block to another (zi >= zi-1, for some i).\n• Non-increasing x (data error): the x values are not increasing (xi >= xi-1, for some i).\n• z value out of range (interpolation error): the z value being interpolated is out of the range of the table (z* < zmin or z* > zmax).\n• x value out of range (interpolation error): the x value being interpolated is out of the domain of both of the two bounding constant z-curves.\n\nRevised: 08/02/2021" ]
[ null, "https://riverware.org/HelpSystem/8.3-Help/Objects/images/blank.gif", null, "https://riverware.org/HelpSystem/8.3-Help/Objects/images/ObjectsE_0018.png", null, "https://riverware.org/HelpSystem/8.3-Help/Objects/images/ObjectsE_0019.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8806011,"math_prob":0.98969764,"size":3177,"snap":"2021-43-2021-49","text_gpt3_token_len":726,"char_repetition_ratio":0.17869525,"word_repetition_ratio":0.046728972,"special_character_ratio":0.24268177,"punctuation_ratio":0.092257,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99496883,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-17T06:03:04Z\",\"WARC-Record-ID\":\"<urn:uuid:d92ab608-bb07-4ccf-aa0c-18771c427469>\",\"Content-Length\":\"30266\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:61038085-5bc3-4ba3-bbf9-854fdab6effa>\",\"WARC-Concurrent-To\":\"<urn:uuid:2a921f71-8bd9-4215-a702-a7ad48ac07e0>\",\"WARC-IP-Address\":\"128.138.184.11\",\"WARC-Target-URI\":\"https://riverware.org/HelpSystem/8.3-Help/Objects/AppendixTableInterpolation.33.4.html\",\"WARC-Payload-Digest\":\"sha1:AJKU3IZ3AENJDIRBRUBTJJ4KK67TYLE2\",\"WARC-Block-Digest\":\"sha1:BCL4X5G7F6ZDZZN6ZODLVMG5GVYV5Z52\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585121.30_warc_CC-MAIN-20211017052025-20211017082025-00014.warc.gz\"}"}
https://tex.stackexchange.com/questions/482032/how-to-get-two-align-point-with-split-equations/482036
[ "# how to get two align point with split equations\n\nI have this equation:\n\n\\begin{equation}\n\\begin{split}\n\\alpha &= \\frac{1}{100} S \\sqrt{2g} = 2.2444e^{-05} \\ [m^\\frac52/s]\\\\\n\\beta &= \\pi r^2 = 0.0079 \\ [m^2]\\\\\n\\gamma &= \\frac{2 \\pi r}{tan(\\theta)} = 0.1814 \\ [m] \\\\\n\\delta &= \\frac{\\pi}{(tan(\\theta))^2} = 1.0472\n\\end{split}\n\\end{equation}\n\n\nand would like to add a second align point on the second '=' symbol. Is there a way to do that?\n\nThank\n\n• Welcome to TeX.SE! Can you please complete your given code snippet to be compilable? Then we do not have to guess what you are doing and we can see, if you use math related packages like amsmath etc. – Mensch Mar 29 at 2:55\n• Split only supports a single & per line. Use aligned instead, or alignat/alignedat as mentioned below. I tend to always use aligned in situations like this, and will only switch to split when I need the specific features it provides. – daleif Mar 29 at 10:23\n• Sorry, it was my first post, I will write all the code on the next one. – Leonardo Garberoglio Mar 30 at 3:15\n\nYou can use alignat for this:\n\n\\documentclass{article}\n\\usepackage{amsmath}\n\n\\begin{document}\n\n\\begin{alignat*}{2}\n\\alpha&=\\frac{1}{100} S \\sqrt{2g} &&=2.2444e^{-05} \\ [m^\\frac52/s]\\\\\n\\beta&=\\pi r^2&&=0.0079 \\ [m^2]\\\\\n\\gamma&=\\frac{2 \\pi r}{tan(\\theta)}&&=0.1814 \\ [m]\\\\\n\\delta&=\\frac{\\pi}{(tan(\\theta))^2}&&=1.0472\n\\end{alignat*}\n\n\\end{document}", null, "• tan should use a backslash to be typeset upright. – Bernard Mar 29 at 9:52\n\nThere are multiple questions on the same topic. I am taking the answer of Werner from the question Multiple alignment\n\nMultiple alignment points with no gap between expressions is obtained using the alignat environment from amsmath.\n\nWith that, the code changes to:\n\n\\documentclass{article}\n\\usepackage{amsmath}\n\\begin{document}\n\\begin{equation}\n\\begin{split}\n\\alpha &= \\frac{1}{100} S \\sqrt{2g} = 2.2444e^{-05} \\ [m^\\frac52/s]\\\\\n\\beta &= \\pi r^2 = 0.0079 \\ [m^2]\\\\\n\\gamma &= \\frac{2 \\pi r}{tan(\\theta)} = 0.1814 \\ [m] \\\\\n\\delta &= \\frac{\\pi}{(tan(\\theta))^2} = 1.0472\n\\end{split}\n\\end{equation}\n\n\\begin{alignat}{2}\n\\alpha &= \\frac{1}{100} S \\sqrt{2g} &&= 2.2444e^{-05} \\ [m^\\frac52/s] \\notag\\\\\n\\beta &= \\pi r^2 &&= 0.0079 \\ [m^2]\\\\\n\\gamma &= \\frac{2 \\pi r}{tan(\\theta)} &&= 0.1814 \\ [m] \\notag\\\\\n\\delta &= \\frac{\\pi}{(tan(\\theta))^2} &&= 1.0472 \\notag\n\\end{alignat}\n\n\\end{document}", null, "• Why to use {3} in \\begin{alignat}{3}? I reckon {2} alignments should be also fine. – Majid Abdolshah Mar 29 at 3:50\n• @MajidAbdolshah - Yes you are right. I have updated the answer. – subham soni Mar 29 at 4:17" ]
[ null, "https://i.stack.imgur.com/7mdfG.png", null, "https://i.stack.imgur.com/if9cu.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7088076,"math_prob":0.9980272,"size":652,"snap":"2019-51-2020-05","text_gpt3_token_len":234,"char_repetition_ratio":0.106481485,"word_repetition_ratio":0.0,"special_character_ratio":0.38803682,"punctuation_ratio":0.1119403,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998379,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T13:21:09Z\",\"WARC-Record-ID\":\"<urn:uuid:11ec3633-850e-47be-a34d-ae9f4f71e348>\",\"Content-Length\":\"143671\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c0eefdea-0c6c-4e0d-9e9d-a28e7043a04f>\",\"WARC-Concurrent-To\":\"<urn:uuid:8f43b7b0-0a3c-4b5f-842f-eef6d1d1412a>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/482032/how-to-get-two-align-point-with-split-equations/482036\",\"WARC-Payload-Digest\":\"sha1:NVJYRDNBOQZEZPWMX36KNRS7OOLBNE2X\",\"WARC-Block-Digest\":\"sha1:RZCBVGDKRB7D4BIU5T4NP24GVZYUMJL4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541157498.50_warc_CC-MAIN-20191214122253-20191214150253-00153.warc.gz\"}"}
https://giteleshetres.fr/Mar/19_32711.html
[ "## Get m20 concrete compressive strength Price\n\nYou can get the price of m20 concrete compressive strength and a A&C representative will contact you within one business day.\n\n•", null, "### What is the Characteristic Strength of Concrete? - [Civil\n\nThe compressive strength test will be determined by the strength of concrete at 28 days as per IS standard. Based on the cube test results, the below graph was plotted. The number of cube samples represents the vertical axis & the compressive strength of the concrete\n\n•", null, "### Concrete Cube Failure and Acceptance Criteria - Civil4M\n\n8/21/2019· As flexure is 0.7 of square root of compressive strength, it is important to get passed in compressive strength results. When we do cube testing, we write its results in cube test register with serial numbers, say 1 is xyz date concrete from abc location, 2 is def date concrete\n\n•", null, "### compressive strength of m30 concrete - bokesbar\n\nConcrete grades are denoted by M10, M20, M30 according to their compressive strength. The \"M\" denotes Mix design of concrete followed by the compressive strength number in N/mm 2 \"Mix\" is the respective ingredient proportions which are Cement: Sand: Aggregate Or Cement: Fine Aggregate: Coarse Aggregate.\n\n•", null, "### Calculate Cement Sand & Aggregate - M20, M15, M10, M5\n\nUnderstanding Concrete Grades . Based on strength, concrete is classified into different grades like M5, M7.5, M10, M15, M20 etc. In concrete grades, the letter \"M\" stands for \"Mix\" and the following number stands for characteristic compressive strength of concrete in 28 days in the Direct Compression test.\n\n•", null, "### Compressive Strength of Concrete | Cube Test, Procedure\n\n5/18/2018· Concrete being the major consumable material after water makes it quite inquisitive in its nature. The strength of concrete is majorly derived from aggregates, where-as cement and sand contribute binding and workability along with flowability to concrete.. This is an in-depth article on Compressive Strength of Concrete.\n\n•", null, "### Compressive Strength of Concrete | Cube Test, Procedure\n\n5/18/2018· Concrete being the major consumable material after water makes it quite inquisitive in its nature. The strength of concrete is majorly derived from aggregates, where-as cement and sand contribute binding and workability along with flowability to concrete.. This is an in-depth article on Compressive Strength of Concrete.\n\n•", null, "### Civiconcepts - Make Your House Perfect With us\n\nThe \"M\" refers Mix and Number after M (M10, M20) Indicates the compressive strength of concrete after 28 days of curing and testing.. M indicates the proportion of materials like Cement: Sand: Aggregate (1:2:4) or Cement: Fine Aggregate: Coarse Aggregate.. If we mention M10 concrete, it means that the concrete has 10 N/mm2 characteristic compressive strength at 28 days.\n\n•", null, "### Relation Between The Cubic And Cylindrical Strength Of\n\nConservative estimates put concrete cylinders at 80% of concrete cubes, for high-strength concrete some say the percentage is near . The ratio between the cube(150mm) and cylindrical sample(150×300 mm). Generally Strength of Cylinder sample= 0.8 x Strength of Cube. Example:- M20 is equivalent to C25. 
C25 is 1:1:3\n\n•", null, "### The procedure for Compressive strength test of Concrete\n\n4/3/2014· Compressive strength of concrete: Out of the many tests applied to concrete, this is the most important, as it gives an idea about all the characteristics of concrete. By this single test one can judge whether the concreting has been done properly or not. For the cube test, two types of specimens are used: either cubes of 15 cm X 15 cm X 15 cm or 10 cm X 10 cm x 10 cm depending upon the size of\n\n•", null, "### COMPRESSIVE STRENGTH OF CONCRETE - HARDENED CONCRETE\n\nCompressive Strength of Concrete IS 456 Interpretation of Test Results of Sample specified Grade Mean of the Group of 4 Non-Overlapping Consecutive Test Results In N/mm2 Individual Test Results In N/mm2 (1) (2) (3) M 20 > fck + 0.825 X established SD > fck -3 N/mm2 or above\n\n•", null, "### compressive strength of m20 concrete\n\nWhat is the compressive strength of m20 concrete - Answers. compressive and split tensile strength of chopped basalt fiber reinforced concrete of M20 grade concrete. Coir, glass, steel, polypropylene, and polyester fibers are used in concrete to add strength to the concrete.\n\n•", null, "### Compressive Strength of Concrete - MidTech\n\nCompressive strength of concrete depends on many factors such as water-cement ratio, cement strength, quality of concrete material, quality control during production of concrete etc. The test for compressive strength is carried out either on a cube or cylinder. Various standard codes recommend a concrete cylinder or concrete cube as the standard\n\n•", null, "### Compressive Strength of Concrete & Concrete Cubes | What\n\n7/7/2016· The capacity of concrete is reported in psi – pounds per sq. inch in US units and in MPa – mega pascals in SI units. This is usually called the characteristic compressive strength of concrete fc/ fck. For normal field applications, the concrete strength can vary from 10 MPa to 60 MPa.\n\n•", null, "### What is meant by \"Characteristic strength\" (fck) of concrete??\n\n1/4/2013· The compressive strength of concrete is given in terms of the characteristic compressive strength of 150 mm size cubes tested at 28 days (fck) - as per Indian Standards (ACI standards use a cylinder of diameter 150 mm and height 300 mm). The characteristic strength is defined as the strength of the concrete below which not more than 5% of the test\n\n•", null, "### Compressive strength of M20 concrete -cube Test procedure\n\nThe grade of M20 concrete is denoted by the letter M or C (Europe), standing for mix, followed by a numerical figure, which is the compressive strength. Thus the compressive strength of M20 concrete is 20N/mm2 (20 MPa) or 2900 Psi. Compressive strength of M20 concrete at 7 days: Making of at least 3 concrete cubes of size 150mm×150mm×150mm in a mould by cement sand and aggregate ratio 1:1.5:3, use\n\n•", null, "### Compressive Strength of Concrete Cubes - Lab Test & Procedure\n\nTheir fck value or characteristic compressive strength is 20 N/mm2. M20 grade of concrete mix ratio. 
M20 grade of concrete ratio is 1 : 1.5 : 3, a mixture of cement, sand and aggregate in which one part is cement, 1.5 parts is sand and 3 parts is aggregate or stone.\n\n•", null, "### Different Grades of Concrete, Their Strength and Selection\n\nFor example, for a grade of concrete with 20 MPa strength, it will be denoted by M20, where M stands for Mix. These grades of concrete are converted into various mix proportions. For example, for M20 concrete, the mix proportion will be 1:1.5:3 for cement:sand:coarse aggregates.\n\n•", null, "### What is the compressive strength of grade 20 concrete at 7\n\n10/19/2017· This is 65% of the characteristic strength. In case of M20, the 7th day strength shall be: 20*0.65 = 13 N/mm2. Got it now? Thank you for the A2A Best wishes\n\n•", null, "### Concrete Mix Design: Illustrative Example M30 Grade (M20\n\nConcrete Mix Design Calculation : M20, M25, M30, M40 Grade Concrete. Concrete mix design is a procedure of selecting the suitable ingredients of concrete and their relative proportions with an objective to prepare concrete of certain minimum strength, desired workability and durability as economically (value engineered) as possible.\n\n•", null, "### Effect on Compressive Strength of Concrete by Addition of\n\nAbstract- The paper deals with the effects of addition of various proportions of polypropylene fiber on the properties of high strength concrete m20 mixes. An experimental program was carried out to explore its effects on compressive strength under different curing conditions. The main aim of the investigation program is to study the effect of\n\n•", null, "### Concrete mix ratio for various grades of concrete\n\nSome of them are: M10, M20, M30, M35, etc. So, what really does M10 or M20 mean or represent? \"M\" stands for \"mix\". Mix represents concrete with designated proportions of cement, sand and aggregate. And the number following \"M\" represents the compressive strength of that concrete\n\n•", null, "### Concrete Mix Design | Different Grades of Concrete\n\n5/17/2017· The grade of concrete is also denoted as C16/20, C20/25, C25/30, etc., which means Concrete Strength Class (C); the number behind C refers to the compressive strength of concrete in N/mm2 when tested with a cylinder / cube. Remember: 1 MPa = 1 N/mm2. Different grades of concrete :\n\n•", null, "### Compressive Strength Test Of Concrete Cubes - Engineering\n\nCompressive strength as a concrete property depends on several factors related to the quality of used materials, mix design and quality control during concrete production. Depending on the applied code, the test sample may be a cylinder [15 cm x 30 cm is common]\n\n•", null, "### What is the meaning of M15 M20 M10 concrete? - Quora\n\nM10 M15 M20 etc. 
are all grades of concrete. The 'M' denotes 'Mix', followed by a number representing the compressive strength of that mix in N/mm^2. A mix is a\n\n•", null, "### Grades of Concrete - M10 to M80 Explained in Detail\n\n~ Its compressive strength is 15 MPa. ~ Used as Plain cement concrete (PCC). ~ Used in the base of footing, construction of levelling works, road construction, etc. e. M20 Grade ~ Mixing Ratio is 1:1.5:3 (1 cement part : 1.5 sand parts : 3 aggregate parts). ~ Its compressive strength is 20 MPa. ~ Used as Reinforced Cement Concrete (RCC).\n\n•", null, "" ]
[ null, "https://giteleshetres.fr/randimg/101.jpg", null, "https://giteleshetres.fr/randimg/123.jpg", null, "https://giteleshetres.fr/randimg/273.jpg", null, "https://giteleshetres.fr/randimg/47.jpg", null, "https://giteleshetres.fr/randimg/241.jpg", null, "https://giteleshetres.fr/randimg/21.jpg", null, "https://giteleshetres.fr/randimg/282.jpg", null, "https://giteleshetres.fr/randimg/28.jpg", null, "https://giteleshetres.fr/randimg/225.jpg", null, "https://giteleshetres.fr/randimg/280.jpg", null, "https://giteleshetres.fr/randimg/11.jpg", null, "https://giteleshetres.fr/randimg/186.jpg", null, "https://giteleshetres.fr/randimg/241.jpg", null, "https://giteleshetres.fr/randimg/177.jpg", null, "https://giteleshetres.fr/randimg/202.jpg", null, "https://giteleshetres.fr/randimg/37.jpg", null, "https://giteleshetres.fr/randimg/1.jpg", null, "https://giteleshetres.fr/randimg/98.jpg", null, "https://giteleshetres.fr/randimg/172.jpg", null, "https://giteleshetres.fr/randimg/204.jpg", null, "https://giteleshetres.fr/randimg/10.jpg", null, "https://giteleshetres.fr/randimg/271.jpg", null, "https://giteleshetres.fr/randimg/30.jpg", null, "https://giteleshetres.fr/randimg/202.jpg", null, "https://giteleshetres.fr/randimg/299.jpg", null, "https://giteleshetres.fr/randimg/131.jpg", null, "https://giteleshetres.fr/randimg/43.jpg", null, "https://giteleshetres.fr/randimg/21.jpg", null, "https://giteleshetres.fr/randimg/104.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8825759,"math_prob":0.9362665,"size":10871,"snap":"2020-45-2020-50","text_gpt3_token_len":2534,"char_repetition_ratio":0.23713997,"word_repetition_ratio":0.3319908,"special_character_ratio":0.23364916,"punctuation_ratio":0.107035175,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9590895,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58],"im_url_duplicate_count":[null,3,null,1,null,3,null,3,null,2,null,2,null,1,null,3,null,4,null,5,null,2,null,4,null,2,null,3,null,3,null,2,null,2,null,3,null,6,null,1,null,2,null,3,null,1,null,3,null,1,null,5,null,6,null,2,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T11:34:04Z\",\"WARC-Record-ID\":\"<urn:uuid:5634bc50-fa6b-4dfa-9318-11ccedd08975>\",\"Content-Length\":\"23013\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe5c7133-0e1f-410e-b364-1eaff3889a9b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d71d1701-8c41-4e15-a2b3-76772a5b8dc0>\",\"WARC-IP-Address\":\"104.24.123.191\",\"WARC-Target-URI\":\"https://giteleshetres.fr/Mar/19_32711.html\",\"WARC-Payload-Digest\":\"sha1:QRA6546G6NDYC23Y2TLMO5FSQGORPMK7\",\"WARC-Block-Digest\":\"sha1:YCWKVCT4TVJBXWE3D4KZRUUUNSE4YSHY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141191692.20_warc_CC-MAIN-20201127103102-20201127133102-00402.warc.gz\"}"}
https://www.borealisai.com/en/publications/uniform-stability-and-high-order-approximation-sgld-non-convex-learning/
[ "We propose a novel approach to analyze the generalization error of the stochastic gradient Langevin dynamics (SGLD) algorithm, a popular alternative to stochastic gradient descent. Discrete-time algorithms such as SGLD typically do not admit an explicit formula for their (time-marginal) distributions, making theoretical analysis very difficult. Previous non-asymptotic generalization bounds for SGLD used the distribution associated to the continuous-time Langevin diffusion as an approximation. However, the approximation error is at best order one in step size, and these bounds either suffer from a slow convergence rate or implicit conditions on the step size. In this paper, we construct a high order approximation framework with time independent error using weak backward error analysis. We then provide a non-asymptotic generalization bound for SGLD, with explicit and less restrictive conditions on the step size.\n\nAuthors\n* Denotes equal\ncontribution\nBibTeX\n\n@inproceedings{LiGazeau19,\ntitle = {Uniform Stability and High Order Approximation of SGLD in Non-Convex Learning},\nauthor = {Mufan Li and Maxime Gazeau},\nbooktitle = {International Conference on Machine Learning (ICML workshop on Understanding and Improving Generalization in Deep Learning)},\nyear = 2019," ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8573191,"math_prob":0.8445745,"size":1232,"snap":"2019-51-2020-05","text_gpt3_token_len":252,"char_repetition_ratio":0.10749186,"word_repetition_ratio":0.01183432,"special_character_ratio":0.17857143,"punctuation_ratio":0.088541664,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9517848,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-29T07:46:29Z\",\"WARC-Record-ID\":\"<urn:uuid:bdcd4668-09ba-40a5-ac43-e7ec1ceed8a8>\",\"Content-Length\":\"30859\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f91ce261-690e-44e7-8c90-6af3319ba2bc>\",\"WARC-Concurrent-To\":\"<urn:uuid:04e75a5d-bbcc-4d83-9009-6da957ed23e3>\",\"WARC-IP-Address\":\"18.222.57.221\",\"WARC-Target-URI\":\"https://www.borealisai.com/en/publications/uniform-stability-and-high-order-approximation-sgld-non-convex-learning/\",\"WARC-Payload-Digest\":\"sha1:KTYID2FDCXPM4Q3J63UJVPDPKO42DEEH\",\"WARC-Block-Digest\":\"sha1:IZLZ4R7JSVSMAQPUJWZJDDTAQJJX6YP4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251789055.93_warc_CC-MAIN-20200129071944-20200129101944-00441.warc.gz\"}"}
https://www.nature.com/articles/s41598-022-11036-8?error=cookies_not_supported&code=8a6c754e-bf65-4dba-925a-03f0232e1def
[ "Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.\n\n# Low-dimensional physics of clay particle size distribution and layer ordering\n\n## Abstract\n\nClays are known for their small particle sizes and complex layer stacking. We show here that the limited dimension of clay particles arises from the lack of long-range order in low-dimensional systems. Because of its weak interlayer interaction, a clay mineral can be treated as two separate low-dimensional systems: a 2D system for individual phyllosilicate layers and a quasi-1D system for layer stacking. The layer stacking or ordering in an interstratified clay can be described by a 1D Ising model while the limited extension of individual phyllosilicate layers can be related to a 2D Berezinskii–Kosterlitz–Thouless transition. This treatment allows for a systematic prediction of clay particle size distributions and layer stacking as controlled by the physical and chemical conditions for mineral growth and transformation. Clay minerals provide a useful model system for studying a transition from a 1D to 3D system in crystal growth and for a nanoscale structural manipulation of a general type of layered materials.\n\n## Introduction\n\nClays are ubiquitous in the Earth system, especially in sedimentary and weathering systems. Clays are layers of aluminosilicates (Fig. 1), in which one aluminum oxide octahedral sheet joins with one or two silica tetrahedral sheets to form what is called 1:1 (e.g. kaolinite) and 2:1 (e.g. smectite and illite) phyllosilicate layers. The thickness of a 2:1 layer is about 0.65 nm1. The Si and Al centers in the layers can partially be substituted by lower-valent metals, resulting in negative charges in the layers, which are then balanced by interlayer cations2. Clay are known for their small particle sizes and high density of defects3. The a-b dimension of clay crystallites ranges from a few nanometers to micrometers4, while the dimension along the c-direction ranges from $$\\sim 1$$ to $$\\sim 100$$ nm3,5. The dimension disparity between the two directions can be up to 200 times1. Based on the Periodic Bond Chains (PBCs) theory, Meunie1 suggested that the size and shape of a single clay platelet might depend on the amount of crystal defects along the three axes of symmetry , $$[\\bar{1}10]$$, and $$[\\bar{1}\\bar{1}0]$$. Depending on cation ordering and occupancy in octahedral and tetrahedral sheets, crystal defects may tend to concentrate and thus poison crystal growth along one, two, or three PBCs, therefore limiting crystal dimensions in growth. The PBCs theory may provide a plausible explanation for the fibrous nature of some clay minerals such as sepiolite, but it fails to explain other key features of clay minerals such as the great dimensional disparity between illite and muscovite in spite of both minerals possessing a similar structure6.\n\nUnder certain conditions, clays tend to form mixed layers with complex layer stacking patterns (see7 and refs. therein). 
For example, in the transformation of smectite to illite, the percentage of illite layers increases with temperature, geological time, and water/rock ratio, and accordingly the layer stacking mode shifts from R0 (random) to R1 (alternating), and then to longer-range order (R3)4. Based on a 1D Ising model, Zen8 attempted to provide a thermodynamic explanation for the formation of different layer stacking modes. By assuming that the interaction energy between layers depends only on the nearest neighbors, he showed that if the excess interaction energy between two unlike layers was large and positive, segregation into discrete crystals would result, and if the energy was large and negative, unlike layers would tend to alternate, forming a regular 1:1 mixed-layer crystal for equal proportions of the two layers. Intermediate energy values would result in irregular mixed layers, and truly random layering would occur when the excess energy approaches zero. In contrast, Wang and Xu7 suggested that the layer stacking would be a kinetic process and the sequence of layer stacking could be described by a one-dimensional logistic map, such that non-periodic interstratification emerges when the contacted solution becomes slightly supersaturated with respect to both structural components. The transition from one interstratification pattern to another reflects a change in the chemical environment during mineral crystallization. In all these models, the underlying assumption is that any ordered structure would extend to an infinite physical domain as commonly assumed for a crystalline system. With this assumption, one would hope that a mixed-layer clay can be modeled with a fixed composition and well-defined structure. However, as we show below, this assumption may no longer be appropriate for a clay system, in which a limited dimension becomes an inherent attribute of the material and the size of particles and the range of ordering are intimately related. Furthermore, no existing model can explain the observed layer thickness distribution along the c-direction of clay crystallites, which usually deviates from a lognormal distribution and highly skews towards small sizes (Fig. 2)3,5.\n\nIn this paper, we show that a clay mineral can be treated as two separate low-dimensional systems: a 2D system for the individual layers and a 1D system for the layer stacking. By formulating an appropriate statistical mechanical model for each system, we show that the dimension of clay particles is inevitably limited by the lack of long-range order in low-dimensional systems. This treatment will provide a new perspective on mineral phase definition and thermodynamic modeling of clay materials as well as the transition from a 1D or 2D system to a 3D system. Since layered minerals are a large set of materials with a wide range of applications in advanced technologies9, the work presented below will also provide an insight into the structural manipulation and synthesis of these materials.\n\n## Results\n\n### Clay as a low-dimensional system\n\nThe interlayer interaction in a clay mineral involves the electrostatic (electric double layer like), van der Waals, and hydration forces10, which are much weaker than the intralayer ionic/covalent bonding, leading to a significant anisotropy in mineral mechanical properties11. In an expansive clay such as smectite, multiple layers of water can exist in an interlayer. 
As the clay expands due to hydration, the interlayer interaction can become further weakened12, and the mineral can easily be exfoliated13. All these suggest that the growth of individual phyllosilicate layers can approximately be treated as a 2D system. Also, since the interlayer interaction is relatively uniform within the a-b plane, the layer stacking along the c-direction can be treated as a quasi-1D system.\n\nIt is well-known that low-dimensional systems (typically 1D or 2D) with short-range interactions generally do not exhibit a long-range order or phase transition. This behavior is often attributed to the Hohenberg-Mermin-Wagner (HMW) theorem14,15 for systems with continuous symmetries such as the XY model, to works by Landau and Lifshitz16 and Peierls and Born17 for systems with discrete symmetries such as the Ising model, and to van Hove18 for low-dimensional fluid-like systems. The typical explanation is that in low-dimensional systems, thermal fluctuations or other excitations have a strong tendency to disrupt any long-range order19. This result is quite universal and can be applied to a wide range of systems such as magnets, solids, superfluids, and membranes20. We postulate that this result can also apply to a clay system, that is, the limited dimension of a clay mineral is due to the lack of long-range order within its 2D layers and its 1D stacking of those layers.\n\nWith respect to individual phyllosilicate layers, much can be learned from the studies of engineered nanolayers. Nanolayers are solid layers with large in-plane dimensions but with nanometer thicknesses. Hong et al.21 studied the stability of ultrathin membranes of SrTiO$$_3$$ in epitaxial growth. Atomically controlled membranes were released after synthesis by dissolving the underlying epitaxial layer. Although all unreleased films were initially single-crystalline, the SrTiO$$_3$$ membrane lattice collapsed below a critical thickness. The authors showed that this crossover from power law to exponential decay of the crystalline order is analogous to the 2D Berezinskii–Kosterlitz–Thouless (BKT) transition. The BKT transition is a phase transition where the order in a 2D system of rotors such as the XY model is disrupted by the formation of unbound vortex and anti-vortex pairs22. The physics behind this behavior is quite universal and in the context of clay layers or 2D crystals, the lack of long-range order is due to the disruption of orientational order in a crystalline lattice. In this theory, one can define the correlation length of the crystalline lattice of a thin membrane. It is interesting to note that, similar to the process of a membrane released from a substrate, the expansion of clay interlayers through hydration could lead to a systematic reduction in clay particle size23.\n\nIf we assume that clay growth proceeds layer by layer, the BKT transition may take place within an individual phyllosilicate layer. For a weak interlayer interaction, a growing phyllosilicate layer would be constantly subjected to environmental fluctuations and any long-range structural order in the layers would be destroyed. Note that the thickness of a 2:1 phyllosilicate layer is about 0.65 nm1, thinner than the critical thickness for the BKT transition in an SrTiO$$_3$$ membrane21. As noted by Hong et al.21, the thermal fluctuations alone may be orders of magnitude lower than the energy required to break chemical bonds in a layer. 
However, the environmental fluctuations such as those in chemical potential and impurity concentration may be high enough to disrupt the lattice structure of a layer, leading to its limited extension.\n\n### Layer stacking and the Ising model\n\nWe here develop a statistical mechanical model of an interstratified clay. Let us assume that an interstratified clay is formed by the stacking of two types of phyllosilicate layers, A and B. Note that in a more general context, one type of “layer” need not be a phyllosilicate layer; it can simply be a structural discontinuity or empty space. This can be useful if one wants to think of a system as a single type of clay that is fragmented. We further assume that the total energy of the system is determined by the interactions between nearest-neighbor layers. The Hamiltonian or energy H of this system can be expressed as\n\n\begin{aligned} H=\frac{\epsilon _{AA}+\epsilon _{BB}-2\epsilon _{AB}}{4}\sum _{i=1}^{N}\sigma _i\sigma _{i+1}+\frac{\epsilon _{AA}-\epsilon _{BB}}{4}\sum _{i=1}^{N}(\sigma _i+\sigma _{i+1})+\frac{\epsilon _{AA}+\epsilon _{BB}+2\epsilon _{AB}}{4}N, \end{aligned}\n(1)\n\nwhere $$\epsilon _{AA}$$, $$\epsilon _{BB}$$, and $$\epsilon _{AB}$$ are the energies for the stacking of AA, BB, and AB layers, respectively; $$\sigma _i$$ is the type of layer i with $$\sigma _i=1$$ representing an A layer and $$\sigma _i=-1$$, a B layer; and N is the total number of layers in the system. Suppose that the mineral is in equilibrium with an aqueous solution of fixed chemical potentials $$\mu _A$$ and $$\mu _B$$ for layers A and B respectively. The partition function of the system can be written as\n\n\begin{aligned} Z=\sum _{\varvec{\sigma }}e^{-\beta \left( H-\mu _AN_A-\mu _BN_B\right) }, \end{aligned}\n(2)\n\nwhere $$\beta =1/kT$$ is the inverse temperature, and $$N_A$$ and $$N_B$$ are the numbers of A and B layers respectively. The sum is over all combinations of layer types $$\varvec{\sigma }=\{\sigma _i\}$$. We can rewrite the numbers of each layer type as\n\n\begin{aligned} N_A=\sum _{i=1}^{N}\frac{1+\sigma _i}{2},N_B=\sum _{i=1}^{N}\frac{1-\sigma _i}{2}. \end{aligned}\n(3)\n\nNote that this automatically enforces $$N_A+N_B=N$$. The partition function Eq. (2) can then be recast as\n\n\begin{aligned} Z=\sum _{\varvec{\sigma }}e^{-\beta \left[ J_{\perp }\sum _{i}\sigma _i\sigma _{i+1}+\frac{K}{2}\sum _{i}(\sigma _i+\sigma _{i+1})+NH_0\right] }, \end{aligned}\n(4)\n\nwhere\n\n\begin{aligned} J_{\perp }&=\frac{\epsilon _{AA}+\epsilon _{BB}-2\epsilon _{AB}}{4}, \end{aligned}\n(5a)\n\begin{aligned} K&=\frac{\epsilon _{AA}-\epsilon _{BB}-\mu _A+\mu _B}{2}\end{aligned}\n(5b)\n\begin{aligned} H_0&=\frac{\epsilon _{AA}+\epsilon _{BB}+2\epsilon _{AB}-2\mu _A-2\mu _B}{4}. \end{aligned}\n(5c)\n\nEquation (4) resembles the standard 1D Ising model for material magnetization24 with an interaction energy $$J_{\perp }$$ and an external magnetic field K. Parameter $$J_{\perp }$$ controls whether like or unlike layers stack together, which depends on the interactions between two neighboring layers. K accounts for the difference between the two phyllosilicate components in the chemical affinity for clay layer precipitation from a contacted solution, which depends on solution chemistry. As an analogy, K represents the influence of an external chemical potential field. $$H_0$$ is simply a constant energy shift that will have no effect on the final results. 
We can evaluate the partition function using the transfer matrix method25. The major results are summarized as follows. The free energy of the interstratified clay is

\begin{aligned} F=-kT\ln Z=NH_0-kT\ln \left( \lambda _+^N+\lambda _-^N\right) , \end{aligned}
(6)

where $$\lambda _{\pm }$$ are the eigenvalues of the transfer matrix $$\left[ \begin{array}{ll} e^{-\beta (J_{\perp }+K)} &{} e^{\beta J_{\perp }}\\ e^{\beta J_{\perp }} &{} e^{-\beta (J_{\perp }-K)}\\ \end{array}\right]$$ given by

\begin{aligned} \lambda _{\pm }=e^{-\beta J_{\perp }}\cosh \beta K\pm \sqrt{e^{-2\beta J_{\perp }}\cosh ^2\beta K+2\sinh 2\beta J_{\perp }}. \end{aligned}
(7)

Note that $$\lambda _+>\lambda _-$$. In the thermodynamic limit ($$N\rightarrow \infty$$), we have $$\lambda _+^N\gg \lambda _-^N$$ and the free energy can be well-approximated by $$F\simeq NH_0-NkT\ln \lambda _+$$. The mean composition of layer i is

\begin{aligned} \langle \sigma _i\rangle =\frac{1}{N}\frac{\partial F}{\partial K}=-\frac{e^{-2\beta J_{\perp }}\sinh \beta K}{\sqrt{1+e^{-4\beta J_{\perp }}\sinh ^2\beta K}}\equiv \cos 2\phi , \end{aligned}
(8)

where $$\phi$$ satisfies $$\cot 2\phi =-e^{-2\beta J_{\perp }}\sinh \beta K$$. A mean composition of $$\langle \sigma _i\rangle =1$$ means that all the layers are of type A, while a mean composition of $$\langle \sigma _i\rangle =-1$$ means that all the layers are of type B. To quantify the structure or ordering of the layers, we compute the so-called two-point correlation function

\begin{aligned} \langle \sigma _i\sigma _j\rangle =\cos ^22\phi +\sin ^22\phi \left( \frac{\lambda _-}{\lambda _+}\right) ^{|j-i|}, \end{aligned}
(9)

and the correlation of fluctuations

\begin{aligned} \langle \delta \sigma _i\delta \sigma _j\rangle =\langle \sigma _i\sigma _j\rangle -\langle \sigma _i\rangle \langle \sigma _j\rangle =\sin ^22\phi \left( \frac{\lambda _-}{\lambda _+}\right) ^{|j-i|}, \end{aligned}
(10)

where $$\delta \sigma _i=\sigma _i-\langle \sigma _i\rangle$$ is a fluctuation of layer i from its mean composition. $$\langle \sigma _i\sigma _j\rangle$$ characterizes how correlated the type of layer i is with that of layer j, while $$\langle \delta \sigma _i\delta \sigma _j\rangle$$ characterizes how correlated fluctuations about the mean type of layer i are with those of layer j.

Since $$|\lambda _-/\lambda _+|\le 1$$, the correlation function given by Eq. (10) decays exponentially as the distance between two layers increases, which means that there is no long-range order in clay layer stacking. This suggests that we should abandon the existing attempt to model clay layer stacking as a long-range ordering process. The existing classification of R1 and R3 layer stacking modes should not be treated as long-range ordering patterns, but rather as a local ordering phenomenon.

It can be seen in Eq. (10) that the correlation of structural fluctuations is determined by the parameters $$J_{\perp }$$ and K. As shown in Fig. 3, $$J_{\perp }$$ controls the sign of $$\lambda _-/\lambda _+$$ and therefore the layer stacking mode. A positive $$J_{\perp }$$ results in a negative ratio, leading to short-range alternating layer stacking, while a negative $$J_{\perp }$$ leads to short-range stacking of like layers. At $$J_{\perp }=0$$, we have random layer stacking since the ratio is zero and there are no correlations; a numerical check of this sign behavior is sketched below.
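The following minimal sketch evaluates the eigenvalues of Eq. (7) for a few parameter values (β is set to 1 and K = 0.1 is arbitrary; neither value comes from the paper):

```
import numpy as np

def lambda_pm(J_perp, K, beta=1.0):
    """Transfer-matrix eigenvalues lambda_+ and lambda_- of Eq. (7)."""
    a = np.exp(-beta * J_perp) * np.cosh(beta * K)
    b = np.sqrt(np.exp(-2.0 * beta * J_perp) * np.cosh(beta * K) ** 2
                + 2.0 * np.sinh(2.0 * beta * J_perp))
    return a + b, a - b

for J_perp in (-0.5, 0.0, 0.5):
    lam_plus, lam_minus = lambda_pm(J_perp, K=0.1)
    print(f"J_perp = {J_perp:+.1f}: lambda_-/lambda_+ = {lam_minus / lam_plus:+.3f}")
# Expected: positive ratio for J_perp < 0 (like layers stack together),
# zero at J_perp = 0 (random stacking), negative for J_perp > 0 (alternation).
```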
A similar result was obtained by Zen8. $$J_{\perp }$$ also affects the magnitude of $$\lambda _-/\lambda _+$$ and therefore the correlation length in layer stacking; in particular, a larger $$|J_{\perp }|$$ generally enhances the length over which structural fluctuations are correlated.

As mentioned earlier, K represents the influence of the solution chemistry on clay layer stacking. Let us first consider the case when $$K=0$$, that is, when there is no influence from the external chemical potential. In this case, the layer stacking is controlled only by structural fluctuations, and consequently the structural coherence length is equivalent to the correlation length of the fluctuations. From Eq. (10), the probability of a given coherence length of layer stacking decreases exponentially as the length increases. If we consider this coherence length as the clay particle thickness, the clay particle size along the c-direction should follow an exponential distribution. Indeed, this exponential distribution of thickness or cluster sizes has been shown to be the case for Ising-like models with no external field26. For $$K\ne 0$$, increasing |K| causes one component to be enriched over the other in layer stacking, and as a result the particle size of the enriched component would increase. At the same time, as shown in Fig. 3, the ratio $$\lambda _-/\lambda _+$$ approaches zero and so does the fluctuation correlation length. This means that a fluctuation of the type of one layer is not correlated with the fluctuations of the types of nearby layers. In this case, clay layer stacking is equivalent to a uniform random fragmentation process, in which the depleted component randomly and uniformly inserts into a sequence of layers of the enriched component. A uniform random fragmentation in a 1D system generates an exponential-like particle size distribution27. In all these cases, the particle size distribution along the c-direction is thus predicted to follow an exponential or nearly exponential distribution. This is in qualitative agreement with actual measurements (Fig. 2)3,5, where it has been observed that the particle size distributions have exponential tails for large thicknesses. The exponential decay of correlation with length implies that the size of clay particles along the c-direction is finite. This is an inherent property of the one-dimensional nature of layer stacking, for which there is no long-range order.

For smaller thicknesses, as shown in Fig. 2, there is a considerable deviation from an exponential distribution (e.g. the peak in the distribution). However, it turns out that the distribution in this regime can still be described by a uniform random fragmentation process, but in a system with a dimension greater than one. In other words, while we may be able to treat the clay stacking as a one-dimensional process for large thicknesses along the c-direction, we may not be able to do so for smaller thicknesses. The distribution that arises from a uniform random fragmentation process in arbitrary dimensions is known as the Weibull distribution27. We will revisit this point in more detail in the “Discussion” section.

### Lateral dimension and the XY model

Now let us examine the stability of an individual phyllosilicate layer using an XY-like model. We define $$\psi (\varvec{r})$$ as the structural orientation field (i.e. the local orientational order of the crystalline lattice) in the tetrahedral and octahedral sheets.
We here reproduce some key parts of the calculation of the fluctuations in the orientational order of a 2D lattice20,22. If the 2D lattice is ordered, the order parameter will be constant over the entire lattice or $$\\psi (\\varvec{r})=\\psi _0$$. Due to environmental excitations, the lattice will deform and the orientational order will vary with position. Assuming that the gradients in $$\\psi (\\varvec{r})$$ are small, we can expand the Hamiltonian $$H[\\psi (\\varvec{r})]$$ to the second order in the gradient since $$\\varvec{\\nabla }\\psi (\\varvec{r})\\rightarrow -\\varvec{\\nabla }\\psi (\\varvec{r})$$ should leave the energy unchanged, which gives\n\n\\begin{aligned} H[\\psi (\\varvec{r})]=\\frac{J_{\\parallel }}{2}\\int d^2\\varvec{r}[\\varvec{\\nabla }\\psi (\\varvec{r})]^2, \\end{aligned}\n(11)\n\nwhere $$J_{\\parallel }$$ is the interaction coefficient within a phyllosilicate layer. To make progress in computing the thermodynamic properties of this model, it is useful to express $$\\psi (\\varvec{r})$$ in Fourier space as\n\n\\begin{aligned} \\psi (\\varvec{r})=\\int \\frac{d^2\\varvec{k}}{(2\\pi )^2}\\psi (\\varvec{k})e^{i\\varvec{k}\\cdot \\varvec{r}}, \\end{aligned}\n(12)\n\nwhich gives us for the energy\n\n\\begin{aligned} H[\\psi (\\varvec{k})]=\\frac{J_{\\parallel }}{2}\\int d^2\\varvec{k}|\\psi (\\varvec{k})|^2k^2. \\end{aligned}\n(13)\n\nThe partition function of the system can then be expressed as an integral over all realizations of field $$\\psi (\\varvec{k})$$ given by\n\n\\begin{aligned} Z[\\psi (\\varvec{k})]=\\int \\mathcal {D}[\\psi (\\varvec{k})]e^{-\\beta H[\\psi (\\varvec{k})]}=\\int \\mathcal {D}[\\psi (\\varvec{k})]e^{-\\frac{\\beta }{2}\\int d^2\\varvec{k}|\\psi (\\varvec{k})|^2\\epsilon (\\varvec{k})}, \\end{aligned}\n(14)\n\nwhere $$\\epsilon (\\varvec{k})=J_{\\parallel }k^2$$. The structural correlation function is defined as $$c(|\\varvec{r}-\\varvec{r}'|)=\\langle e^{i\\psi (\\varvec{r})}e^{-i\\psi (\\varvec{r}')}\\rangle =e^{-\\frac{1}{2}\\left\\langle \\left[ \\psi (\\varvec{r})-\\psi (\\varvec{r}')\\right] ^2\\right\\rangle }$$, where the last step can be obtained by evaluating the average $$\\langle \\cdot \\rangle$$ over realizations of field $$\\psi (\\varvec{r})$$ with the probability distribution $$P[\\psi (\\varvec{r})]=Z^{-1}e^{-\\beta H[\\psi (\\varvec{r})]}$$. 
The last average $$\left\langle \left[ \psi (\varvec{r})-\psi (\varvec{r}')\right] ^2\right\rangle$$ can be computed as follows

\begin{aligned} \left\langle \left[ \psi (\varvec{r})-\psi (\varvec{r}')\right] ^2\right\rangle &=\int \frac{d^2\varvec{k}d^2\varvec{k}'}{(2\pi )^4}\left( e^{i\varvec{k}\cdot \varvec{r}}-e^{i\varvec{k}\cdot \varvec{r}'}\right) \left( e^{i\varvec{k}'\cdot \varvec{r}}-e^{i\varvec{k}'\cdot \varvec{r}'}\right) \langle \psi (\varvec{k})\psi (\varvec{k}')\rangle \\& =\int \frac{d^2\varvec{k}d^2\varvec{k}'}{(2\pi )^2}\left( e^{i\varvec{k}\cdot \varvec{r}}-e^{i\varvec{k}\cdot \varvec{r}'}\right) \left( e^{i\varvec{k}'\cdot \varvec{r}}-e^{i\varvec{k}'\cdot \varvec{r}'}\right) \frac{\delta (\varvec{k}+\varvec{k}')}{\beta \epsilon (\varvec{k})}\\& =\int \frac{d^2\varvec{k}}{2\pi ^2}\frac{1-\cos \left[ \left( \varvec{r}-\varvec{r}'\right) \cdot \varvec{k}\right] }{\beta \epsilon (\varvec{k})}\\& \approx \frac{1}{\pi \beta J_{\parallel }}\int _{\frac{1}{|\varvec{r}-\varvec{r}'|}}^{\frac{1}{a}}\frac{dk}{k}=\frac{1}{\pi \beta J_{\parallel }}\ln \frac{|\varvec{r}-\varvec{r}'|}{a}, \end{aligned}
(15)

where a is the lattice constant. Therefore, the structural correlation goes as

\begin{aligned} c(\varvec{r}-\varvec{r}')\approx \left( \frac{a}{|\varvec{r}-\varvec{r}'|}\right) ^{\frac{1}{2\pi \beta J_{\parallel }}}. \end{aligned}
(16)

This power-law dependence of the structural correlation on the distance between two points $$\varvec{r}$$ and $$\varvec{r}'$$ indicates that there is no true long-range order. In other words, over a large enough distance, the orientational order of the 2D lattice will be broken. This means that the structural coherence of a phyllosilicate layer is limited, and so is the lateral dimension of the layer. For SrTiO$$_3$$ nanolayers (1.2–3.1 nm thick), after the layers were released from the growth substrate and freely suspended in water, the structural coherence length was estimated to be 4–40 nm21. Assuming that the bonding energy in a phyllosilicate layer is similar to that in an SrTiO$$_3$$ nanolayer, and considering that this energy can be further increased by the interactions between clay layers (see Section “Correlation between layer extension and clay composition”), we estimate that the structural coherence length of a clay layer could range from nanometers to micrometers, consistent with observations4.

The intralayer interaction can also be anisotropic depending on the lattice structure of a phyllosilicate layer. For example, in sepiolite, the chemical bonding in one direction is stronger than that in another. This would result in a longer structural coherence length along one direction, leading to the fibrous nature of the mineral. Indeed, we can note from Eq. (16) that the distance $$\delta r^*=|\varvec{r}-\varvec{r}'|^*$$ at which the structural correlation decays to a threshold value $$c^*$$ satisfies $$\ln \frac{\delta r^*}{a}\approx 2\pi \beta J_{\parallel }\ln \frac{1}{c^*}\sim \beta J_{\parallel }$$, which suggests that a small change in $$J_{\parallel }$$ can induce a large change in structural coherence and hence a strong structural anisotropy. In addition, Eq.
(16) suggests that the area of clay platelets should follow a power-law distribution, which has yet to be confirmed experimentally.

### Compositional variation of an interstratified clay

One challenge in modeling a mixed-layer clay is that such a mineral does not have a fixed stoichiometry in terms of chemical composition28; that is, the percentage makeup of the types of layers can vary from sample to sample. A common approach is to choose the appropriate layer types with fixed percentages and then use a solid solution model for layer mixing29,30,31,32,33, which is mostly empirical, with many model parameters to be constrained. The benefit of our model is that it does not require any additional assumptions about the mixing and contains fewer parameters, which can be directly related to the physics of the system. From Eq. (8), we can easily calculate the average molar fraction of component A, $$X_A$$, in an interstratified clay in equilibrium with a porewater as

\begin{aligned} X_A=\frac{1+\langle \sigma _i\rangle }{2}=\frac{1}{2}-\frac{e^{-2\beta J_{\perp }}\sinh \beta K}{2\sqrt{1+e^{-4\beta J_{\perp }}\sinh ^2\beta K}}. \end{aligned}
(17)

This is plotted in Fig. 4. The composition of a mixed-layer clay is thus determined by just two parameters: the interlayer interaction $$J_{\perp }$$ and the external chemical field K. Note that we can rewrite K as $$K=K_{\epsilon }+K_{\mu }$$, where $$K_{\epsilon }=(\epsilon _{AA}-\epsilon _{BB})/2$$ and $$K_{\mu }=(\mu _B-\mu _A)/2$$. By varying the solution chemistry (i.e. the chemical potentials $$\mu _A$$ and $$\mu _B$$) and measuring the layer composition [Eq. (17)] and correlations [Eqs. (9) and (10)], one can easily determine the parameters $$J_{\perp }$$ and $$K_{\epsilon }$$. Once these two parameters are determined, the composition of the mineral can then be predicted for a whole range of solution chemistries. In this approach, we do not assume any long-range ordering in the clay minerals; instead, we treat a local clay particle aggregate as a single thermodynamic ensemble with short-range interactions and ordering. Such an approach may significantly simplify the way mixed-layer clays are modeled in water-rock interactions and allow for an easy prediction of various thermodynamic properties such as composition, Gibbs free energy, and mineral structure.

### Dimension disparity

As mentioned earlier, the a-b dimension of clay crystallites ranges from a few nanometers to micrometers4, while the c dimension ranges from $$\sim 1$$ to $$\sim 100$$ nm3,5. The dimension disparity between the two directions can be up to 200 times1. This may be attributed to the way the structural correlation decays along the two directions. As indicated in Eqs. (10) and (16), the two-point structural correlation decays exponentially along the c direction, while it follows only a power law along an a-b direction. Since the former decays much faster than the latter, the dimension of clay crystallites along the a-b direction would be larger than that along the c direction.

### Correlation between layer extension and clay composition

Our model provides a reasonable explanation for the observed correlation between the lateral extension of clay platelets and the composition in mixed-layer samples4. As shown in Fig. 5, the area of illite layers in illite/smectite mixed layers strongly correlates with the percentage of illite in the samples from a hydrothermal/sandstone system.
However, there is no such correlation at all in the samples from bentonite. To explain this difference, let us choose illite as component A. We argue that there should be some coupling between the interlayer interaction $$J_{\perp }$$ and the intralayer interaction $$J_{\parallel }$$. For example, the interaction with neighboring layers would reduce the freedom for layer structural fluctuations, which effectively increases the interaction within the layers. Writing $$J_{\parallel }=f(J_{\perp })$$ and expanding about $$J_{\perp }=0$$, we have

\begin{aligned} J_{\parallel }=J_{\parallel ,0}+f'(0)J_{\perp }+\frac{1}{2}f''(0)J_{\perp }^2+\cdots , \end{aligned}
(18)

where $$J_{\parallel ,0}$$ is the interaction within a clay layer when the interlayer interaction is very weak. As discussed earlier, the sign of $$J_{\perp }$$ determines whether like or unlike layers stack together. The influence of a neighboring layer on the intralayer interaction $$J_{\parallel }$$ of a given layer is expected to be independent of the type of the neighboring layer as long as the strength of the interlayer interaction $$|J_{\perp }|$$ is the same. Therefore, we expect $$J_{\parallel }$$ to be an even function of $$J_{\perp }$$, and so $$J_{\parallel }\simeq J_{\parallel ,0}+\frac{1}{2}f''(0)J_{\perp }^2$$. As shown in Fig. 4, we can approximate $$X_{\text {illite}}$$ as a linear function of $$J_{\perp }$$ for a fixed K, or

\begin{aligned} X_{\text {illite}}\simeq \frac{3}{4}-\frac{3}{8}\beta \left( J_{\perp }-J_{\perp }^*\right) , \end{aligned}
(19)

where $$J_{\perp }^*=\frac{1}{2\beta }\ln \sinh \beta K$$. Furthermore, by choosing a threshold value $$c^*$$ for the structural correlation given by Eq. (16), we can define the characteristic correlation length of a clay platelet $$\delta r^*$$ and area $$A\sim {\delta r^*}^2$$. From Eq. (16), we have that $$\ln \delta r^*\sim \beta J_{\parallel }$$, and thus $$\ln A\sim \ln \delta r^*\sim J_{\parallel }$$. Using the quadratic relation between $$J_{\parallel }$$ and $$J_{\perp }$$, we find

\begin{aligned} \ln A=\gamma J_{\perp }^2+\ln A_0, \end{aligned}
(20)

where $$\gamma$$ and $$A_0$$ are constants. The key result is that the area of the clay platelets depends on the interlayer interaction $$J_{\perp }$$ and not on the external chemical field K. If K is held fixed, we can invert Eq. (19) to find an approximate linear relation between $$J_{\perp }$$ and $$X_{\text {illite}}$$, which gives

\begin{aligned} \left. \ln A\right| _{K\ \text {fixed}}=\gamma '\left( X_{\text {illite}}-X^*\right) ^2+\ln A_0. \end{aligned}
(21)

Therefore, if the interlayer interaction $$J_{\perp }$$ is varied with the external chemical field K held fixed, there should be a strong correlation between the area A of illite layers and the mineralogical composition $$X_{\text {illite}}$$ of the clay. On the other hand, if the external chemical field K is varied with the interlayer interaction $$J_{\perp }$$ held fixed, the area of illite layers should be independent of the mineralogical composition. This is exactly what is observed in Fig. 5.

Changes in the interlayer interaction $$J_{\perp }$$ are mainly driven by variations of temperature and pressure12. Increasing the temperature would reduce the number of water layers in the clay interlayer, the basal spacing of the clay, and therefore the layer interaction energy of the clay10,12,34.
Such environments are often observed in hydrothermal or sandstone systems and, as shown in Fig. 5, there is indeed a strong correlation between the layer area and the mineralogical composition. On the other hand, in low-temperature environments such as surface weathering systems, the formation of clay is mainly driven by the chemical affinity of a contacted solution, or changes in K, while $$J_{\perp }$$ remains unchanged35. Bentonite is such a system, which, as shown in Fig. 5, exhibits almost no correlation between the layer area and the mineralogical composition. Thus, through simple scaling and symmetry arguments, we obtain a reasonable explanation for the correlations between the layer area and the mineralogical composition observed in various clay systems. In the more general context of 2D crystalline systems, the observed correlations between the interlayer interaction and the area of the layers may provide useful insight into the formation of thin materials with large lateral extensions.

## Discussion

The effect of structural fluctuations in different dimensions has been illustrated in36. In a 1D lattice of particles with short-range interactions, the relative fluctuations between the ends of a chain of N particles grow as $$\sqrt{N}$$, since fluctuations add up independently. This means that there cannot be any periodic structure over large distances in 1D at finite temperatures. In a 2D lattice, the fluctuations grow logarithmically with distance [e.g. Eq. (15)], while in a 3D lattice they are finite over any distance. Therefore, for dimensions less than three, structural order cannot persist over large distances. This change in structural order as one transitions from a 2D to a 3D system should generally be observable in layered materials. At one end of the spectrum are clays, which have relatively weak interlayer interactions. As a result, each individual phyllosilicate layer can be treated as a 2D system, and the lateral extension of the layer is then limited by the lack of long-range order. However, as the interlayer interaction increases, such as in muscovite10,34, a layered mineral may approach a 3D crystal system with long-range order, resulting in the formation of large crystals. As formulated in Section “Correlation between layer extension and clay composition”, the intralayer interaction $$J_{\parallel }$$ should increase quadratically with the interlayer interaction $$J_{\perp }$$. Consequently, the correlation length should increase with the interlayer interaction. This is schematically illustrated in Fig. 6. This concept provides a plausible explanation for the observed great size disparity between illite and muscovite, both with a similar mineral structure, which cannot be explained based only on a mineral structure argument. Two major factors control the interlayer interaction of clay minerals: the layer charge and the temperature. The interlayer interaction is expected to increase with increasing layer charge. The layer charge per unit cell of O$$_{20}$$(OH)$$_4$$ increases from smectite (0.5–1.2) to illite (1.4–2.0) and ultimately to muscovite (2.0)37, and so does the interlayer interaction. Furthermore, muscovite tends to occur in high-temperature environments. An elevated temperature would reduce the basal spacing of a clay mineral (i.e. the number of water layers)38 and the interlayer hydration by reducing the water dielectric constant39. All these effects combined would result in a strong interlayer interaction for muscovite.
Given the strong (more than exponential!) dependence of the lateral extension of a clay layer on the interlayer interaction predicted by Equation (20), it is reasonable to expect that the lateral extension of muscovite would be much larger than that of illite. The trend illustrated in Fig. 6 is consistent with the observed transition from smectite to illite and ultimately to muscovite in the prograde transition of mudstone to slate3. It is interesting to note that relatively large smectite crystals have been synthesized at high pressure and temperature40.

Up to this point, we have treated clay layer stacking as a 1D process. We have shown that the correlation of layer fluctuations along the c-direction for an enriched mineral [Eq. (10)] implies a uniform random fragmentation process in 1D and that the resulting particle size distribution should be exponential. Indeed, the distributions have exponential tails for large particle sizes (Fig. 2, inset). As noted, however, there is a considerable deviation from an exponential distribution for smaller particle sizes. One possible explanation for this deviation is that for smaller particle sizes or stacks of layers, a clay system may no longer be treated as a one-dimensional stack of layers but rather as somewhere between one and two dimensions (a single layer is of course two-dimensional). Tenchov and Yanev27 generalized the 1D fragmentation result to higher-dimensional systems. They showed that the particle size distribution generated from a uniform random fragmentation process in arbitrary dimensions is given by the Weibull distribution

\begin{aligned} P(d)=\frac{\delta }{\eta }\left( \frac{d}{\eta }\right) ^{\delta -1}e^{-\left( \frac{d}{\eta }\right) ^{\delta }}, \end{aligned}
(22)

where $$\eta$$ is the characteristic particle size (or the thickness of a structurally coherent clay crystallite along the c-direction) and $$\delta$$ is a constant characterizing the dimensionality of the fragmentation process. For $$\delta =1$$, Eq. (22) reduces to an exponential distribution. For smaller particle sizes, we expect the fragmentation process to have an effective dimensionality $$1\le \delta \le 2$$. For $$\delta >1$$, the distribution becomes peaked towards smaller particle sizes, which is exactly what is observed in the measurements of clay thickness distributions shown in Fig. 2. Fitting the distributions gives dimensionalities $$\delta$$ ranging from 1.5 to 1.8 and characteristic thicknesses $$\eta$$ ranging from 4.5 to 27.7 nm.

Traditionally, the peak shift in the particle size distribution of minerals is attributed to Ostwald ripening. One problem with the existing theory is that a size distribution generated from Ostwald ripening should be highly skewed towards larger sizes (e.g.41), which apparently contradicts actual measurements (Fig. 2) showing that the peak is skewed towards smaller sizes. In addition, Ostwald ripening is an irreversible process in which larger crystals grow at the expense of smaller ones, ultimately leading to a sharply peaked distribution around a single particle size. To our knowledge, however, a sharply peaked distribution has never been observed for clay particles. On the contrary, data show that the size distribution broadens with increasing metamorphic grade of clay samples (Fig. 2). In addition, it is often assumed that the Ostwald ripening of clay could take place over a geological time of millions of years5.
It is difficult to imagine that a clay-water reaction would not reach equilibrium over such a long time scale, given the fact that clays have relatively fast reaction rates due to their high reactive surface areas and are usually modeled as secondary mineral phases in equilibrium with a contacted geofluid (e.g.39). All these arguments point to the possibility that Ostwald ripening may not be a relevant underlying mechanism for describing the particle size distribution of clays. Interestingly, our model provides a reasonably consistent explanation for all of the observed features. As shown in Fig. 2, the skew of the particle size distributions towards smaller sizes is a natural outcome of a random fragmentation process, which we inferred from our analysis of fluctuations in Section “Layer stacking and the Ising model”. In contrast with Ostwald ripening, our model implies that a clay aggregate with a broad particle size distribution can be a thermodynamically stable ensemble, which can be preserved over a geological time scale as long as the environment for mineral formation remains relatively unchanged. As prograde metamorphism progresses, we expect a progressive peak shift and peak broadening of the clay particle size distribution (Fig. 2), because elevated temperature and pressure should strengthen clay interlayer interactions (Section “Correlation between layer extension and clay composition”).

In summary, the commonly observed small size of clay particles can be related to the lack of long-range order in low-dimensional systems. Because of its weak interlayer interaction, a clay mineral can be treated as two separate low-dimensional systems: a 2D system for the individual layers and a quasi-1D system for the layer stacking. The layer stacking in a mixed-layer clay can be described by a 1D Ising model, while the limited 2D extension of an individual phyllosilicate layer can be described by an XY-like model. This simple yet powerful treatment allows for a systematic prediction and explanation of the limited dimension of clay particles, the origin of the particle size distribution, the compositional variation of an interstratified clay, and the transition from small illite crystallites to large muscovite crystals. Clay minerals thus provide a useful model system for studying transitions between 1D, 2D, and 3D systems in crystal growth.

## References

1. Meunier, A. Why are clay minerals small? Clay Miner. 41, 551 (2006).

2. Murray, H. H. Applied Clay Mineralogy: Occurrences, Processing and Applications of Kaolins, Bentonites, Palygorskite, Sepiolite, and Common Clays Vol. 188 (Elsevier, New York, 2006).

3. Warr, L. N. & Nieto, F. Crystallite thickness and defect density of phyllosilicates in low temperature metamorphic pelites: A TEM and XRD study of clay mineral crystallinity-index standards. Can. Mineral. 36, 1453 (1998).

4. Altaner, S. P. & Ylagan, R. E. Comparison of structural models of mixed-layer illite/smectite and reaction mechanisms of smectite illitization. Clays Clay Miner. 45, 517 (1997).

5. Eberl, D. D., Srodon, J., Kralik, M., Taylor, B. E. & Peterman, Z. E. Ostwald ripening of clays and metamorphic minerals. Science 248, 474 (1990).

6. Drever, J. I. The Geochemistry of Natural Waters 388 (Prentice-Hall, Hoboken, 1982).

7. Wang, Y. & Xu, H. Geochemical chaos: Periodic and nonperiodic growth of mixed-layer phyllosilicates. Geochim. Cosmochim. Acta 70, 1995 (2006).

8. Zen, E. Mixed-layer minerals as one-dimensional crystals. Am. Mineral.
52, 635 (1967).

9. Brigatti, M. F. & Mottana, A. Layered Mineral Structures and their Application in Advanced Technologies Vol. 375 (European Mineralogical Union, London, 2011).

10. Giese, R. F. The electrostatic interlayer forces of layer structure materials. Clays Clay Miner. 26, 51 (1978).

11. Honorio, T., Brochard, L., Vandamme, M. & Lebée, A. Flexibility of nanolayers and stacks: Implications in the nanostructuration of clays. Soft Matter 14, 7354 (2018).

12. Pradhan, S. M., Katti, K. S. & Katti, D. R. Evolution of molecular interactions in the interlayer of Na-montmorillonite swelling clay with increasing hydration. Int. J. Geomech. 15, 04014073 (2014).

13. Zhu, T. T. et al. Exfoliation of montmorillonite and related properties of clay/polymer nanocomposites. Appl. Clay Sci. 169, 48 (2019).

14. Hohenberg, P. C. Existence of long-range order in one and two dimensions. Phys. Rev. 158, 383 (1967).

15. Mermin, N. D. & Wagner, H. Absence of ferromagnetism or antiferromagnetism in one- or two-dimensional isotropic Heisenberg models. Phys. Rev. Lett. 17, 1133 (1966).

16. Landau, L. D. & Lifshitz, E. M. Statistical Physics Part I (Elsevier Butterworth-Heinemann, Oxford, 1980).

17. Peierls, R. & Born, M. On Ising’s model of ferromagnetism. Math. Proc. Camb. Philos. Soc. 32, 477 (1936).

18. van Hove, L. Sur l’intégrale de configuration pour les systèmes de particules à une dimension. Physica 16, 137 (1950).

19. Chaikin, P. M. & Lubensky, T. C. Principles of Condensed Matter Physics Vol. 293 (Cambridge University Press, Cambridge, 2013).

20. Halperin, B. I. On the Hohenberg–Mermin–Wagner theorem and its limitations. J. Stat. Phys. 175, 521 (2019).

21. Hong, S. S. et al. Two-dimensional limit of crystalline order in perovskite membrane films. Sci. Adv. 3, eaao5173 (2017).

22. Kosterlitz, J. M. & Thouless, D. J. Ordering, metastability and phase transitions in two-dimensional systems. J. Phys. C Solid State Phys. 6, 1181 (1973).

23. Katti, D. R., Matar, M. I., Katti, K. S. & Amarasinghe, P. M. Multiscale modeling of swelling clays: A computational and experimental approach. KSCE J. Civ. Eng. 13, 243 (2009).

24. Ising, E. Beitrag zur Theorie des Ferromagnetismus. Z. Physik 31, 253 (1925).

25. Baxter, R. J. Exactly Solved Models in Statistical Mechanics Vol. 486 (Academic Press, New York, 1982).

26. Yilmaz, M. B. & Zimmermann, F. M. Exact cluster size distribution in the one-dimensional Ising model. Phys. Rev. E 71, 026127 (2005).

27. Tenchov, B. & Yanev, T. Weibull distribution of particle sizes obtained by uniform random fragmentation. J. Colloid Interface Sci. 111, 1 (1986).

28. Aja, S. U. & Rosenberg, P. E. The thermodynamic status of compositionally-variable clay minerals: A discussion. Clays Clay Miner. 40, 292 (1992).

29. Aagaard, P. & Helgeson, H. C. Activity/composition relations among silicates and aqueous solutions: II. Chemical and thermodynamic consequences of ideal mixing of atoms on homological sites in montmorillonites, illites, and mixed-layer clays. Clays Clay Miner. 31, 207 (1983).

30. Blanc, P., Bieber, A., Fritz, B. & Duplay, J. A short range interaction model applied to illite/smectite mixed-layer minerals. Phys. Chem. Miner. 24, 574 (1997).

31. Blanc, P. et al. A generalized model for predicting the thermodynamic properties of clay minerals. Am. J. Sci. 315, 734 (2015).

32. Gailhanou, H. et al.
Thermodynamic properties of mixed-layer illite-smectite by calorimetric methods: Acquisition of the enthalpies of mixing of illite and smectite layers. J. Chem. Thermodyn. 138, 78 (2019).

33. Lippmann, F. The solubility products of complex minerals, mixed crystals, and three-layer clay minerals. N. Jb. Miner. Abh. 130, 243 (1977).

34. Sakuma, H. & Suehara, S. Interlayer bonding energy of layered minerals: Implication for the relationship with friction coefficient. J. Geophys. Res. Solid Earth 120, 2212 (2015).

35. Christidis, G. E. & Huff, W. D. Geological aspects and genesis of bentonites. Elements 5, 93 (2009).

36. Illing, B. et al. Mermin–Wagner fluctuations in 2D amorphous solids. PNAS 114, 1856 (2017).

37. Sposito, G. et al. Surface geochemistry of the clay minerals. PNAS 96, 3358 (1999).

38. Vidal, O. & Dubacq, B. Thermodynamic modelling of clay dehydration, stability and compositional evolution with temperature, pressure and H2O activity. Geochim. Cosmochim. Acta 73, 6544 (2009).

39. Helgeson, H. C., Garrels, R. M. & Mackenzie, F. T. Evaluation of irreversible reactions in geochemical processes involving minerals and aqueous solutions-II. Applications. Geochim. Cosmochim. Acta 33, 455–481 (1969).

40. Nakazawa, H., Yamada, H. & Fujita, T. Crystal synthesis of smectite applying very high pressure and temperature. Appl. Clay Sci. 6, 395 (1992).

41. Vengrenovitch, R. D. On the Ostwald ripening theory. Acta Metall. 30, 1079 (1982).

## Acknowledgements

Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525. This research was supported by the U.S. Department of Energy Spent Fuel Waste Science & Technology Program and Fossil Energy Fundamental Shale Research Program.

## Author information

### Contributions

Authors contributed equally.

### Corresponding authors

Correspondence to Yifeng Wang or Michael Wang.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
[ null, "https://www.nature.com/static/images/logos/nature-briefing-logo-n150-white.svg", null, "https://www.nature.com/platform/track/article/s41598-022-11036-8", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87375677,"math_prob":0.9888701,"size":45036,"snap":"2022-27-2022-33","text_gpt3_token_len":10483,"char_repetition_ratio":0.15544501,"word_repetition_ratio":0.046068076,"special_character_ratio":0.22939426,"punctuation_ratio":0.12562507,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99768466,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T01:59:35Z\",\"WARC-Record-ID\":\"<urn:uuid:47997070-e713-464b-99a9-0737c55b95db>\",\"Content-Length\":\"325625\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bb07ed47-305e-4ea9-a67f-f8bb7e48ef51>\",\"WARC-Concurrent-To\":\"<urn:uuid:a656868f-3e1b-44e7-9641-14683ef7f7d6>\",\"WARC-IP-Address\":\"146.75.32.95\",\"WARC-Target-URI\":\"https://www.nature.com/articles/s41598-022-11036-8?error=cookies_not_supported&code=8a6c754e-bf65-4dba-925a-03f0232e1def\",\"WARC-Payload-Digest\":\"sha1:MZRZSRNZJCD5UPR4J453NAMP5C5DPSKA\",\"WARC-Block-Digest\":\"sha1:STLLQDMEODPI4SXNPUXXR7H6YDJ72LHF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103619185.32_warc_CC-MAIN-20220628233925-20220629023925-00037.warc.gz\"}"}
https://letsfindcourse.com/data-science/python-scipy-mcq-questions
[ "• Data Science MCQ Topics\n\n• Data Science Reference\n\n• Other Reference\n\n# Python SciPy MCQ Questions And Answers\n\nThis section focuses on \"Python SciPy\" for Data Science. These Python SciPy Multiple Choice Questions (MCQ) should be practiced to improve the Data Science skills required for various interviews (campus interview, walk-in interview, company interview), placements, entrance exams and other competitive examinations.\n\n1. SciPy stands for?\n\nA. science library\nB. source library\nC. significant library\nD. scientific library\n\n2. SciPy Original author is?\n\nA. Guido van Rossum\nB. Travis Oliphant\nC. Wes McKinney\nD. Jim Hugunin\n\n3. Which of the following is not correct sub-packages of SciPy?\n\nA. scipy.cluster\nB. scipy.source\nC. scipy.interpolate\nD. scipy.signal\n\n4. The number of axes is called as _____.\n\nA. object\nB. Vectors\nC. rank\nD. matrices\n\n5. Which of the following is true?\n\nA. By default, all the NumPy functions have been available through the SciPy namespace\nB. There is no need to import the NumPy functions explicitly, when SciPy is imported.\nC. SciPy is built on top of NumPy arrays\nD. All of the above\n\n6. What will be output for the following code?\n\n```import numpy as np\n\nprint np.arange(7)```\n\nA. array([0, 1, 2, 3, 4, 5, 6])\nB. array(0, 1, 2, 3, 4, 5, 6)\nC. [0, 1, 2, 3, 4, 5, 6]\nD. [[0, 1, 2, 3, 4, 5, 6]]\n\n7. What will be output for the following code?\n\n```import numpy as np\n\nprint np.linspace(1., 4., 6)```\n\nA. array([ 1. , 2.2, 2.8, 3.4, 4. ])\nB. array([ 1. , 1.6, 2.8, 3.4, 4. ])\nC. array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ])\nD. array([ 1. , 1.6, 2.2, 2.8, 4. ])\n\n8. Which of the following code is used to whiten the data?\n\nA. data = numpy.whiten(data)\nB. data = whiten(data)\nC. data =SciPy.whiten(data)\nD. data = data.whiten()\n\n9. How to import Constants Package in SciPy?\n\nA. import scipy.constants\nB. from scipy.constants\nC. import scipy.constants.package\nD. from scipy.constants.package\n\n10. What is \"h\" stand for Constant?\n\nA. Newton's gravitational constant\nB. Elementary charge\nC. Planck constant\nD. Molar gas constant\n\n11. what is constant defined for Boltzmann constant in SciPy?\n\nA. G\nB. e\nC. R\nD. k\n\n12. What is the value of unit milli in SciPy?\n\nA. 0.01\nB. 0.1\nC. 0.0001\nD. 0.001\n\n13. What will be output for the following code?\n\n```from scipy import linalg\n\nimport numpy as np\n\na = np.array([[3, 2, 0], [1, -1, 0], [0, 5, 1]])\n\nb = np.array([2, 4, -1])\n\nx = linalg.solve(a, b)\n\nprint x```\n\nA. array([ 2., -2., 9., 6.])\nB. array([ 2., -2., 9.])\nC. array([ 2., -2.])\nD. array([ 2., -2., 9., -9.])\n\n14. What will be output for the following code?\n\n```from scipy import linalg\n\nimport numpy as np\n\nA = np.array([[1,2],[3,4]])\n\nx = linalg.det(A)\n\nprint x```\n\nA. 2\nB. 1\nC. -2\nD. -1\n\n15. In SciPy, determinant is computed using?\n\nA. determinant()\nB. SciPy.determinant()\nC. det()\nD. SciPy.det()\n\n16. scipy.linalg always compiled with?\n\nA. BLAS/LAPACK support\nB. BLAS/Linalg support\nC. Linalg/LAPACK support\nD. None of the above\n\n17. Which of the following is false?\n\nA. scipy.linalg also has some other advanced functions that are not in numpy.linalg\nB. SciPy version might be faster depending on how NumPy was installed.\nC. Both A and B\nD. None of the above\n\n18. The scipy.linalg.solve feature solves the _______.\n\nA. integration problem\nB. differentiation problem\nC. linear equation\nD. All of the above\n\n19. 
What relation holds between an eigenvalue (lambda), a square matrix (A), and an eigenvector (v)?

A. Av = lambda*v
B. Av = Constant * lambda*v
C. Av = 10 * lambda*v
D. Av != lambda*v

20. What will be the output of the following code?

```
from scipy.special import logsumexp

import numpy as np

a = np.arange(10)

res = logsumexp(a)

print(res)
```

A. 10
B. 9.45862974443
C. 9
D. 9.46
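For self-study, the code-based questions above can be checked directly. The snippet below (Python 3, assuming NumPy and SciPy are installed) reproduces the expected outputs for questions 6, 7, 13, 14, and 20:

```
import numpy as np
from scipy import linalg
from scipy.special import logsumexp

print(np.arange(7))                # Q6:  [0 1 2 3 4 5 6]
print(np.linspace(1., 4., 6))      # Q7:  [1.  1.6 2.2 2.8 3.4 4. ]

a = np.array([[3, 2, 0], [1, -1, 0], [0, 5, 1]])
b = np.array([2, 4, -1])
print(linalg.solve(a, b))          # Q13: [ 2. -2.  9.]

A = np.array([[1, 2], [3, 4]])
print(linalg.det(A))               # Q14: -2.0

print(logsumexp(np.arange(10)))    # Q20: 9.45862974443
```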
https://encyclopedia2.thefreedictionary.com/cosmological+term
[ "# cosmological term\n\n## cosmological term\n\n[¦käz·mə¦läj·ə·kəl ′tərm]\n(relativity)\nA term proportional to the metric tensor in Einstein's field equations for special relativity.\nMcGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.\nMentioned in ?\nReferences in periodicals archive ?\nEquation (35) describes the inflaton density with w = [rho]/p = 1 and a cosmological term varying with the speed [a.sup.-3].\nConsider now the FLRW metric with a positive cosmological term and homogeneous density - that is to say the [LAMBDA]CDM model.\nThe cosmological term then becomes a ratio of photons to photon holes.\nFor a homogeneous, isotropic, and flat universe (k = 0) there are two independent Friedmann equations with the cosmological term [LAMBDA]:\nThis lends support to the fact that only a 4th rank tensor theory can strictly describe a metric with a variable cosmological term. Therefore, after interchanging [alpha] with [beta], we find:\nRecently, embedding general relativity with varying cosmological term in five-dimensional BDT of gravity in vacuum has been discussed by Reyes & Aguilar .\nIn addition, it implies a non zero cosmological term and a constant scalar curvature, therefore it doesnot admit a Hubble expansion in the whole, which tends to contradict all current observations.\n* We then include a variable term that supersedes the socalled cosmological term Kgah in the field equations, still complying with the conservation property of the Einstein tensor density in GR;\nSome Exact Solutions of the Friedmann Equations with the Cosmological Term. Soviet Astronomy, 1976, v.\nNext making use of a max quantum cosmic cosmological term (lambda) we obtained the mass of the universe, which now appears quantized in the units of cosmic [[??].sub.g].\nIn the first one we used an exponential cosmological term, for the second one we considered vanishing cosmological constant.\nIts great brightness and protracted visibility is largely a result of its relative proximity in cosmological terms.\n\nSite: Follow: Share:\nOpen / Close" ]
http://www.us-mortgage-calculator.com/javascript-mortgage-calculator-formula-and-code-example-2/
[ "# Javascript mortgage calculator formula and code example (2)\n\n//Javascript mortgage calculator formula and code example (2)\n\n## Simple javascript mortgage calculator Example:\n\nThe javascript mortgage calculator will need the following input values:\n\nPR (present value of the mortgage loan or principal amount)\nIN (annual interest rate of the mortgage loan)\nPE (number of periods of the mortgage loan in years, or loan term in years)\n\nJust copy and paste the following JavaScript code (in blue) in a html page, after the <body> tag, but before the ending </body> tag.\n\n<script language=\"JavaScript\" type=\"text/javascript\">\n<!– hide the mortgage calculator formula from non JavaScript browsers\nfunction find_payment(PR, IN, PE) {\nvar PAY = (PR * IN) / (1 – Math.pow(1 + IN, -PE));\nreturn PAY\n}\nvar principal = 200000\nvar interest = 0.09\nvar term = 30\nvar monthly_payment = find_payment(principal, interest / 12, term * 12)\nalert(\"Amount of the loan:\\t\\$\" + principal + \"\\n\" +\n\"Annual interest rate:\\t\" + interest * 100 + \"%\\n\" +\n\"Term of the mortgage loan:\\t\" + term + \" years\\n\\n\" +\n\"Monthly payment:\\t\\$\" + monthly_payment)\n//–> End of hidden JavaScript for Browsers not supporting it\n</script>\n\nThe Javascript function find_payment() does the heavy javascript mortgage calculator calculations here. It takes as arguments the three terms required by the expression: PR, IN, and PE (mortgage principal amount, mortgage interest rate and mortgage periods in years).\n\nThe result of the find_payment function code is stored in the Javascript PAY variable, which is then transferred to the monthly_payment variable.\n\nFinally, the result is shown in an alert box.\n\nThis Javascript mortgage calculator script example uses fixed values: principal, interest and term (hand coded in the above example). Those fixed values are passed to the find_payment function.\n\nBefore the javascript mortgage calculator find_payment function does it’s calculations, the interest rate is divided by 12 and the term is multiplied by 12. This is necessary because those values were annual values, not monthly.\n\nTo change the results, simply replace one of the hard coded variables: principal, interest or the term.\n\nThis is part 2 of the post, read part 1 here.\n\nBy | 2017-01-24T12:19:29+00:00 April 18th|Categories: Mortgage formula|Comments Off on Javascript mortgage calculator formula and code example (2)" ]
https://isiarticles.com/article/14865
[ "", null, "دانلود مقاله ISI انگلیسی شماره 14865\nترجمه فارسی عنوان مقاله\n\n# نوسانات روزانه و سنجش فرکانس بالای بازارهای ارز\n\nعنوان انگلیسی\nIntraday volatility and scaling in high frequency foreign exchange markets\nکد مقاله سال انتشار تعداد صفحات مقاله انگلیسی ترجمه فارسی\n14865 2011 6 صفحه PDF سفارش دهید\nدانلود فوری مقاله + سفارش ترجمه\n\nنسخه انگلیسی مقاله همین الان قابل دانلود است.\n\nهزینه ترجمه مقاله بر اساس تعداد کلمات مقاله انگلیسی محاسبه می شود.\n\nاین مقاله تقریباً شامل 4586 کلمه می باشد.\n\nهزینه ترجمه مقاله توسط مترجمان با تجربه، طبق جدول زیر محاسبه می شود:\n\nشرح تعرفه ترجمه زمان تحویل جمع هزینه\nترجمه تخصصی - سرعت عادی هر کلمه 55 تومان 8 روز بعد از پرداخت 252,230 تومان\nترجمه تخصصی - سرعت فوری هر کلمه 110 تومان 4 روز بعد از پرداخت 504,460 تومان\nپس از پرداخت، فوراً می توانید مقاله را دانلود فرمایید.\nتولید محتوا برای سایت شما\nپایگاه ISIArticles آمادگی دارد با همکاری مجموعه «شهر محتوا» با بهره گیری از منابع معتبر علمی، برای کتاب، سایت، وبلاگ، نشریه و سایر رسانه های شما، به زبان فارسی «تولید محتوا» نماید.\n• تولید محتوا با مقالات ISI برای سایت یا وبلاگ شما\n• تولید محتوا با مقالات ISI برای کتاب شما\n• تولید محتوا با مقالات ISI برای نشریه یا رسانه شما\n• و...\n\nپیشنهاد می کنیم کیفیت محتوای سایت خود را با استفاده از منابع علمی، افزایش دهید.\n\nکد تخفیف 10 درصدی: isiArticles\nمنبع", null, "Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)\n\nJournal : International Review of Financial Analysis, Volume 20, Issue 3, June 2011, Pages 121–126\n\nترجمه کلمات کلیدی\nمقیاس گذاری - نوسانات روزانه - افزایش عدم ثبات - تفاوت بی ثباتی -\nکلمات کلیدی انگلیسی\nScaling, Intraday volatility, Nonstationary increments, Nonstationary differences,\n\n#### چکیده انگلیسی\n\nRecent reports suggest that the stochastic process underlying financial time series is nonstationary with nonstationary increments. Therefore, time averaging techniques through sliding intervals are inappropriate and ensemble methods have been proposed. Using daily ensemble averages we analyze two different measures of intraday volatility, trading frequency and the mean square fluctuation of increments for the three most active FX markets; we find that both measures indicate that the underlying stochastic dynamics exhibits nonstationary increments. We show that the two volatility measures are equivalent. In each market we find three time intervals during the day where the mean square fluctuation of increments can be fit by power law scaling in time. The scaling indices in the intervals are different, but independent of the FX market under study. We also find that the fluctuations in return in these intervals lie on exponential distributions.\n\n#### مقدمه انگلیسی\n\nAnalysis of financial time series has provided new insights about the underlying stochastic processes (Bouchaud and Potters, 2000, Dacorogna et al., 2001, Mantegna and Stanley, 2007 and McCauley, 2009). Techniques from statistical physics have been adapted to analyze and model financial time series, to access risk, and price options. Early work by Osborne, 1959, Osborne, 1977 and Samuelson, 1965 laid the foundation for the Black and Scholes option pricing model (Black and Scholes, 1973 and Merton, 1973), which assumed that the stochastic dynamics of the underlying asset was a geometric Brownian motion. However, the hypothesis of Gaussian fluctuations disagrees with fluctuations seen in commodity markets as reported by Mandelbrot, 1963 and Mandelbrot, 1966. 
Empirical studies conducted over the last two decades found that distributions of intraday fluctuations are non-Gaussian and contain fat tails (Cont, 2001, Dacorogna et al., 1993, Gopikrishnan et al., 1999, Müller et al., 1990, Olsen et al., 1997, Schmitt et al., 1999 and Xu and Gencay, 2003). For example, these distributions were found to follow a power law outside the Lévy stable domain (Gopikrishnan et al., 2000, Gopikrishnan et al., 1999 and Plerou and Stanley, 2007). Furthermore, empirical analysis suggests that the distributions scale with the length of the time interval analyzed (Galluccio et al., 1997, Gopikrishnan et al., 2000 and Vandewalle and Ausloos, 1998). Many of these analyses employed sliding interval methods, which implicitly assume that the underlying stochastic process $X_t$ has stationary increments, i.e., the increments $\Delta X_\tau = \Delta X_{t,\tau} = X_{t+\tau} - X_t$ are independent of time t and are functions of τ only. However, other reports have suggested that the increments are nonstationary, i.e., the increment $\Delta X_{t,\tau}$ is an explicit function of time. First, it was shown that the trading frequency is not uniform within a day. In fact, it was shown that the frequency varies by a factor of ~20 (Dacorogna et al., 2001, Müller et al., 1990 and Zhou, 1996). Many authors proposed that financial market fluctuations are best analyzed in transaction time (Ane and Geman, 2000, Baviera et al., 2001, Clark, 1973, Griffin and Oomen, 2008, Mandelbrot and Taylor, 1967, Oomen, 2006 and Silva and Yakovenko, 2007). A second approach inferred that the volatility of the Euro–Dollar exchange rate (in real time) was not uniform and varied by a factor of around 3 within a day (Bassler et al., 2007, Dacorogna et al., 2001, Müller et al., 1990 and Zhou, 1996). Both approaches suggest that intraday increments are generally time dependent, and one conclusion of the present work is that they are equivalent. Bassler et al. (2007) demonstrated that there were several time intervals during which the Euro–Dollar exchange rate can be fit by power laws in time. Moreover, the scaling indices within these intervals were different. The second result of our work is that the scaling intervals and scaling indices are common to the three major currency exchange rates, EUR/USD, USD/JPY, and GBP/JPY. In fact, the volatility in these markets exhibits similar characteristics even outside the scaling intervals. We also ask whether price variations outside of the scaling intervals lie on exponential distributions, as reported in Silva et al., 2004 and Bassler et al., 2007. We address this issue using low-order absolute moments of the distributions. Our studies are conducted on FX rates, which have the most active and liquid markets. The daily turnover in traditional FX market transactions in 2009 was approximately 3 trillion Dollars (BIS, 2007 and IFSL, 2009). The market is open 24 hours on weekdays, i.e., from Sunday 20:15 Greenwich Mean Time (GMT) until Friday 22:00 GMT. The global turnover can be attributed to three main geographical regions: Asia, Europe and North America (BIS, 2007 and Galati and Heath, 2007). The UK accounts for 35.8% of exchange trading, while the US and Japan account for 13.9% and 6.7% respectively (IFSL, 2009). The three FX rates considered here were the most traded between 2001 and 2009 (BIS, 2007 and IFSL, 2009).
We restrict our analyses to trading days on which each recorded trade is reported with the bid and ask quote and approximate the spot price p as the average of the bid and ask price: p = (p_bid + p_ask)/2. Following Osborne (1959), we analyze market fluctuations using the return x(t;τ) = log[p(t+τ)/p(t)], where p(t) represents the price of the commodity at time t. If the increments were stationary, the distribution of x(t;τ) would be independent of the starting time t, and would only depend on the time-lag τ. As we already mentioned, intra-day variations in trading frequency (Ane and Geman, 2000, Clark, 1973 and Mandelbrot and Taylor, 1967) and volatility (Müller et al., 1990, Dacorogna et al., 2001, Zhou, 1996 and Bassler et al., 2007) were used to argue that increments in FX rates were nonstationary. We define volatility of returns as the root mean square fluctuation, see Eq. (2). If successive transactions are uncorrelated and the returns for each transaction are from the same unknown underlying distribution with finite variance σ_0² constant over time, then the standard deviation after M transactions is proportional to √M σ_0. Assuming that M transactions have been reported in a (short) time interval [t, t + τ], the standard deviation can be expressed as

σ(t;τ) ∝ √(τ ν_τ(t)) σ_0, (1)

where ν_τ = M/τ is the trading frequency. Thus, we suspect the volatility at a time t to be a function of the trading frequency. Here we define trading frequency as the number of recorded trades within a fixed time interval. Alternatively, we can define trading frequency by only considering trades within the time interval that change the price (tick time sampling) (Griffin & Oomen, 2008). We find, however, that the choice of transaction time is the most appropriate for our analysis. Fig. 1 illustrates the daily behavior of tick frequency and volatility according to Eq. (2). Both measures vary over the course of a day and exhibit similar complicated daily behavior. This means that the underlying stochastic process is not independent of the time of day. Fig. 1. A. The average number of ticks ν_τ of the EUR/USD exchange rate is plotted against time of day, with time lag τ = 10 min. B. Volatility σ(t;τ) of the EUR/USD exchange rate is plotted against time of day, also with time lag τ = 10 min to ensure that autocorrelations have decayed. The plots indicate that the underlying stochastic process is nonstationary and exhibits nonstationary increments, depending on starting time t. If the increments were stationary, σ would be flat. Times of high volatility (and high trading frequency) coincide with opening times of banks and financial markets in major financial centers. The peaks in plot B can be assigned to characteristic times during the trading day. Both measures exhibit similar daily behavior, raising the question of whether they are related. Bassler et al. (2007) demonstrated that the intra-day volatility for the EUR/USD exchange rate contained several intervals during which the fluctuations exhibited scaling; the scaling indices in these intervals were different. Here we wish to determine if the volatility in other FX markets is similarly time dependent, if there are scaling regions, and how the scaling intervals and indices of different markets are related.
These studies are conducted using the mean-square-fluctuation of increments during the time interval [t, t + τ] over different trading days. Specifically,

σ²(t;τ) := ⟨x²(t;τ)⟩ = (1/N) Σ_{k=1}^{N} x_k²(t;τ), (2)

where N is the number of trading days and τ is chosen to be 10 min to eliminate correlations (Bassler et al., 2007). x_k(t;τ) represents the return in the interval [t, t + τ] on the kth trading day. Note that applying an ensemble average is necessary because of the nonstationarity of the stochastic process. Methods based on sliding time averages of increments are not appropriate because the underlying dynamics exhibits nonstationary increments (McCauley, 2008). On the other hand, the use of ensemble averages is justified due to the approximate daily repetition of σ(t) (Bassler et al., 2007). Next, consider the distribution W(x, τ; 0) of fluctuations over a time lag τ starting from t = 0. In the scaling region, the scaling ansatz given by Bassler et al. (2007) asserts that

W(x, τ; 0) = τ^(−H) F(u), (3)

where F is the scaling function of the scaling variable u = x/τ^H with the scaling index H. Note that t is set to zero at the beginning of each scaling interval. It was shown that the scaling function F of the EUR/USD rate within the scaling region between 9:00 AM and 12:00 noon New York time was close to bi-exponential. Here we compute the scaling functions for other scaling intervals and other FX markets. Also note that we only have ~2000 ensemble members in our study. This is insufficient to obtain accurate distribution functions. The method outlined in Bassler et al. (2007) first determines the scaling index H and subsequently uses the scaling ansatz, Eq. (3), and increments from multiple time intervals to compute F. The first step was to note that within the scaling interval, the moments of x(0;τ) satisfy

⟨|x(0;τ)|^β⟩^(1/β) ∝ τ^(H−1/2). (4)

Computations for several moments β can be used to estimate H. Next, we use Eq. (3) at a range of intervals τ, in order to compute F. It is found that F is a bi-exponential distribution (Bassler et al., 2007). The next step is to determine if the fluctuations in return outside the scaling regions lie on the same distribution. Without scaling, we do not have sufficient data to compute the distributions of x(t;τ). Instead, we look at non-dimensional low-order moments

m_β(t;τ) = [∫ dx |x(t;τ)|^β W(x,τ;t)]^(1/β) / ∫ dx |x(t;τ)| W(x,τ;t). (5)

The equality of two distributions will imply that the corresponding moments are identical. Thus, we compare the moments for the EUR/USD within a single scaling interval with those in a second interval that does not lie within a single scaling interval. The moments m_β(t;τ) are calculated over the ensemble of daily returns x(t;τ), whereas the returns are calculated over the whole time interval, i.e., τ equals the interval length. We compare the non-dimensional moments for an interval within a scaling region and for one not contained in a scaling region.

#### Conclusions

FX markets can be regarded as large complex systems exhibiting stochastic behavior resulting from interactions between market participants at different levels. Properties of the resulting stochastic process can be inferred using statistical features.
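The ensemble statistics in Eqs. (2)-(4) are straightforward to reproduce numerically. The sketch below is our illustration, not the authors' code; it assumes the data have already been arranged as a days-by-times array of 10-minute returns:

```python
import numpy as np

def ensemble_volatility(returns):
    """Eq. (2): sigma(t; tau) from a (n_days, n_times) array of returns
    x_k(t; tau), averaging over days (the ensemble) rather than over time."""
    return np.sqrt(np.mean(returns ** 2, axis=0))

def moment_scaling_slope(x_by_tau, taus, beta=1.0):
    """Slope of log <|x(0; tau)|^beta>^(1/beta) versus log tau inside a
    scaling interval; under Eq. (4) this slope estimates H - 1/2."""
    m = [np.mean(np.abs(x) ** beta) ** (1.0 / beta) for x in x_by_tau]
    slope, _ = np.polyfit(np.log(taus), np.log(m), 1)
    return slope
```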
We analyzed two different measures of intraday volatility, trading frequency and the mean square fluctuation of increments, for the three most active FX markets and found that both measures indicate that the underlying stochastic dynamics is nonstationary. Consequently, the use of sliding interval techniques will not give an accurate characterization of the underlying stochastic process. The primary peak in Fig. 1B at 8:30 AM EST coincides with the time of announcement of important economic indicators, e.g., jobless claims, international trade. We also note that opening hours of financial institutions in the three main financial centers Asia, Europe, and North America are associated with peaks in volatility. Typically, the volatility decreases systematically following the opening. Recent studies proposed two approaches to demonstrate that the stochastic process underlying FX markets is non-stationary with nonstationary increments. Each approach assumes that the process is repeated every trading day (Dacorogna et al., 2001, Dacorogna et al., 1993, Galluccio et al., 1997, Müller et al., 1990 and Zhou, 1996). The assumption has been justified using the behavior of markets during a week (Bassler et al., 2007). The first technique relies on the variation in trading frequency during the day, and suggests the use of tick or transaction time to analyze the stochastic process. The second approach analyzes the time dependence of volatility (Dacorogna et al., 2001 and Dacorogna et al., 1993). We showed that the two approaches are equivalent during times at which financial institutions in at least one of the major trading centers (Japan, Britain, and USA) are open. Although previous studies reported the proportionality of the variance and the frequency of trades (Bouchaud et al., 2008, Plerou and Stanley, 2007 and Silva and Yakovenko, 2007), they used sliding interval techniques which do not apply for nonstationary processes. All three FX markets exhibit time intervals in the course of the day during which the volatility decays from a peak. We analyzed each scaling region for the different markets and found that the dynamical scaling indices differ from the often reported value of 0.5. The scaling indices differ between scaling intervals, but are consistent between all three markets. Earlier work by Bassler et al., 2007 and Bassler et al., 2008 found that the scaling index, or Hurst exponent, of H ≈ 0.5 can arise artificially from the use of sliding interval techniques. An additional misconception regarding H lies in the conclusion of long time correlations for H > 1/2 like fractional Brownian motion (fBm). It has been shown that H > 1/2 does not necessarily imply long term correlations (Bassler et al., 2006 and Preis et al., 2007). The empirical scaling functions for each FX market and scaling interval are identical, suggesting that the dynamics during the scaling intervals in all three FX markets are governed by the same underlying process. Our calculations were based on an underlying variable diffusion process, which was shown to exhibit volatility clustering (Gunaratne, Nicol, Seemann, & Torok, 2009), i.e., a slow decay of the autocorrelation function of the absolute or squared values of the time series, which is another characteristic feature of financial markets (Heyde and Leonenko, 2005 and Heyde and Yang, 1997).
The scaling functions do not exhibit fat tails and are exponential, in agreement with previous work (Bassler et al., 2007, Bassler et al., 2008 and Silva et al., 2004). We further supported the finding of the bi-exponential behavior using low order moments. Our result suggests that reported fat tails might be caused by inappropriate use of sliding interval techniques since moving average procedures can not only give rise to artificial Hurst exponents but also to artificial fat tails (Bassler et al., 2007)." ]
[ null, "https://certify.alexametrics.com/atrk.gif", null, "https://isiarticles.com/bundles/Article/front/images/Elsevier-Logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86509734,"math_prob":0.924877,"size":16805,"snap":"2021-04-2021-17","text_gpt3_token_len":4217,"char_repetition_ratio":0.15201476,"word_repetition_ratio":0.044172235,"special_character_ratio":0.23135972,"punctuation_ratio":0.1338558,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96485275,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-22T03:02:15Z\",\"WARC-Record-ID\":\"<urn:uuid:da07cb17-ec22-4a1f-af26-46f2a13901ff>\",\"Content-Length\":\"57039\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a0186b19-0685-4785-96bf-efaaa3d9d18a>\",\"WARC-Concurrent-To\":\"<urn:uuid:ed52a42c-9690-4c61-a043-484a4357ff73>\",\"WARC-IP-Address\":\"185.236.37.180\",\"WARC-Target-URI\":\"https://isiarticles.com/article/14865\",\"WARC-Payload-Digest\":\"sha1:BWAEFYHPVWDJGBCB5SZNNCOTZXTGQDHC\",\"WARC-Block-Digest\":\"sha1:PHG4JFFSORSHKI5AHVENCZP3VABWOKLI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039560245.87_warc_CC-MAIN-20210422013104-20210422043104-00011.warc.gz\"}"}
https://www.vistrails.org/index.php?title=User:Tohline/Math/EQ_Toroidal04&diff=15921&oldid=15656
[ "", null, "# User:Tohline/Math/EQ Toroidal04\n\n(Difference between revisions)", null, "$~(\\nu - \\mu + 1)P^\\mu_{\\nu + 1} (z)$", null, "$~=$", null, "$~ (2\\nu + 1)z P_\\nu^\\mu(z) - (\\nu + \\mu)P^\\mu_{\\nu-1}(z)$ Abramowitz & Stegun (1995), p. 334, eq. (8.5.3)\n NOTE:", null, "$~Q_\\nu^\\mu$, as well as", null, "$~P_\\nu^\\mu$, satisfies this same recurrence relation." ]
[ null, "https://www.vistrails.org/skins/vistrails/header.png", null, "https://www.vistrails.org/images/math/a/a/8/aa878501c24ff4d4508b51b8706593b3.png ", null, "https://www.vistrails.org/images/math/8/f/2/8f21fc53d02cb7b08d9f1e08301bf5d0.png ", null, "https://www.vistrails.org/images/math/9/9/c/99c420e81f4dd972c6bc2153a64ba2e4.png ", null, "https://www.vistrails.org/images/math/9/c/c/9cc17508f338782d32933eccda989114.png ", null, "https://www.vistrails.org/images/math/a/c/6/ac612dd531dc87b4de2a0fb94f285f4e.png ", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6714089,"math_prob":0.9999738,"size":654,"snap":"2021-04-2021-17","text_gpt3_token_len":214,"char_repetition_ratio":0.12923077,"word_repetition_ratio":0.0,"special_character_ratio":0.3088685,"punctuation_ratio":0.25,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997278,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,2,null,null,null,2,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-19T12:46:07Z\",\"WARC-Record-ID\":\"<urn:uuid:c451a8ec-d31d-473b-9888-c2f55241d267>\",\"Content-Length\":\"16865\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe4b4ba8-7eb3-47ec-b061-bbcd60163064>\",\"WARC-Concurrent-To\":\"<urn:uuid:71b4480f-2bf9-42bc-a0cb-bf4f1dab3aad>\",\"WARC-IP-Address\":\"128.238.182.100\",\"WARC-Target-URI\":\"https://www.vistrails.org/index.php?title=User:Tohline/Math/EQ_Toroidal04&diff=15921&oldid=15656\",\"WARC-Payload-Digest\":\"sha1:F43WTP4UU4PHC22GGIWZETHYOOZS3PT5\",\"WARC-Block-Digest\":\"sha1:RWY5FN35USCLKNOSQFWEXUHR4MOKYVZN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703518240.40_warc_CC-MAIN-20210119103923-20210119133923-00228.warc.gz\"}"}
https://stuffsure.com/what-is-20-out-of-24-as-a-percentage/
[ "# What is 20 Out of 24 as a Percentage?\n\nIf you’re wondering what 20 out of 24 as a percentage is, you’re in luck. We’ve got the answer right here.\n\nCheckout this video:\n\n## 20 Out of 24 as a Percentage\n\nTo calculate 20 out of 24 as a percentage , you need to first understand what a percentage is. A percentage is a way of expressing a number as a fraction of 100. In other words, a percentage is a way of expressing a number as a fraction of 100. So, 20 out of 24 as a percentage would be calculated as follows: 20/24 = 0.83 = 83%.\n\n### Convert 20 out of 24 to a decimal\n\nTo convert 20 out of 24 to a decimal, divide 20 by 24.\n\n20 ÷ 24 = 0.8333333333\n\nThis can be rounded to 8.33% or 8 1/3%.\n\n### Convert the decimal to a percent\n\nTo covert a decimal to a percent, multiply the decimal by 100 and add the percentage sign. In this case, we would multiply 0.83 by 100 to get 83%.\n\n## 20 Out of 24 as a Fraction\n\nIn mathematics, a fraction is a number that represents a part of a whole. It is written with a numerator (20 in this case) and a denominator (24 in this case). The numerator represents the number of parts, and the denominator represents the total number of parts in the whole. When you divide the numerator by the denominator, you get the fraction’s decimal equivalent. In this case, 20÷24=0.833.\n\n### Convert 20 out of 24 to a fraction\n\nTo converts fraction such as 20 out of 24 to a percentage, we need to follow these steps.\n\n-Step 1) We convert both top and bottom into numbers that we can multiply by each other. This is because when we multiply fractions, we multiply the top numbers together and multiply the bottom numbers together.\n-Step 2) We find a number that we can multiply by the bottom number in our fraction so that it will equal the number in Step 1. In this example, we need to find a number that when multiplied by 24 will give us 100:\n-24 × 4 = 96 (This isn’t quite 100 yet, but it’s close)\n24 × 5 = 120 (This is too high)\n24 × 3= 72 (This is too low)\n24 × 4½= 108 (This is our answer: 108 is as close to 100 as 96 was, and since 96 was too low, 108 must be too high; so, 4½ must be the number we are looking for.)\n-Step 3) We take the number from Step 2 and multiply it by the top number of our fraction:\n4½ × 20 = 90 (Now our fraction looks like this: 90/100.)\n-Step 4) Finally, we convert this to percentage by moving the decimal point two digits to the right and adding a “%” sign. In this example:\n90%\n\n### Convert the fraction to a percent\n\nTo convert a fraction to a percent, multiply the numerator (top number) by 100 and divide by the denominator (bottom number). In this case, multiply 20 by 100 and divide by 24 to get 83 1/3%." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89792436,"math_prob":0.9958715,"size":2592,"snap":"2023-40-2023-50","text_gpt3_token_len":711,"char_repetition_ratio":0.16499227,"word_repetition_ratio":0.09126214,"special_character_ratio":0.3148148,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99989057,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T19:12:23Z\",\"WARC-Record-ID\":\"<urn:uuid:8da6eae9-c721-45c0-bb22-3bdf893559b5>\",\"Content-Length\":\"58112\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:81c396c6-391b-473c-9fc5-8ae8e51eac9c>\",\"WARC-Concurrent-To\":\"<urn:uuid:42431b48-a441-428e-9930-d77338018d11>\",\"WARC-IP-Address\":\"50.16.223.119\",\"WARC-Target-URI\":\"https://stuffsure.com/what-is-20-out-of-24-as-a-percentage/\",\"WARC-Payload-Digest\":\"sha1:HDGHMYMIHSWBGBZWRDE4TLHZBQESIYNO\",\"WARC-Block-Digest\":\"sha1:R6OJPT7MTUFXQZZ2FCFCIVVBQYCODSJF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100769.54_warc_CC-MAIN-20231208180539-20231208210539-00044.warc.gz\"}"}
https://www.europeanmedical.info/motor-proteins/dynamic-force-spectroscopy-1.html
[ "# Dynamic Force Spectroscopy\n\nE. Evans1 and P. Williams2\n\nPart 1: E. Evans and P. Williams 1 Dynamic force spectroscopy. I. Single bonds\n\n### 1.1 Introduction\n\nWeak-noncovalent interactions govern structural cohesion and mediate most of life's functions from the outer membrane surface to the interior nucleus of a cell. On laboratory time scales, the energy landscape of a weak bond is fully explored by Brownian-thermal excitations, and energy barriers along its dissociation pathway(s) become encoded in a rate of unbonding that can range from to 1/year. When pulled apart with a ramps of force, the dissociation kinetics become transformed into a dynamic spectrum of unbonding force as a function of the steepness of the force ramps (loading rates). Expressed on a logarithmic scale in loading rate, the spectrum of breakage forces begins first with a crossover from near equilibrium to far from equilibrium unbonding and then rises through ascending regimes of strength. These regimes expose the prominent energy barriers traversed along the dissociation pathway. Labelled as dynamic force spectroscopy [7,10], this approach is being used to probe the inner world of biomolecular interactions [7, 8,13,14, 23, 24, 26, 30] and reveals energy barriers that are difficult or impossible to access by solution assays of near-equilibrium kinetics. These hidden barriers are crucial for specialized dynamic functions of molecules.\n\nIn this first chapter of our tutorial, we begin with an outline of the physics needed to understand the impact of force on lifetime of a single bond. Then deriving prescriptions for rate of transition under force, we analyze the stochastic process of unbonding in a probe experiment and\n\n1 Physics and Pathology, University of British Columbia, Vancouver, Canada V6T 2A6; Biomedical Engineering, Boston University, Boston, MA 02215, USA.\n\n2 Pharmaceutical Sciences, University of Nottingham, Nottingham, UK.\n\ndemonstrate the kinetic origin of the force distribution, the peak of which defines bond strength. Finally, we show how these developments come together to establish the method of dynamic force spectroscopy and give examples of single molecule experiments. In the second chapter to follow, we describe how a nanoscale attachment made up of a few bonds fails under force and develop limiting models for use in analysis of probe tests that involve multiply-bonded contacts.\n\n1.1.1 Intrinsic dependence of bond strength on time frame for breakage\n\nUnlike interatomic linkages within nucleic acid - protein - lipid - carbohydrate structures, weak noncovalent bonds between these biomolecules have limited lifetimes and will dissociate under almost any level of force if pulled on for modest periods of time. When close to equilibrium in solution, large numbers of molecules continuously bond and dissociate under zero force; thus, application of a field (e.g. electrical force or osmotic stress) to the reacting molecules simply alters the ratio of bound-to-free constituents. But at infinite dilution, an isolated molecular complex (\"bond\") exists far from equilibrium and has no strength on time scales longer than the time toff = 1/KoS needed for spontaneous dissociation. If pulled apart faster than toff, a solitary bond will resist detachment. The unbonding force can range up to - and even exceed - the adiabatic limit fx ~ \\dE/dx\\max defined by the steepest gradient in the intermolecular potential E(x) that binds the complex. 
In other words, if the bond is broken in less time than required for diffusive relaxation (~10^-9 s), the force must exceed the \"brittle\" fracture strength of the bond. However, between the extremes in time scale (from a nanosecond to the time for spontaneous dissociation), the force needed to disrupt a weak bond is reduced significantly by thermal activation. Albeit very rarely, Brownian excitations in the liquid environment occasionally contribute large transient impulses of force which, added to a modest external force, exceed the steepest gradient in the intermolecular potential. This enables passage of the confining energy barrier. The physics that governs activated processes in liquids is a century old, beginning with Einstein's theory of Brownian motion and culminating in Kramers theory for escape from a bound state in liquids [16,18]. We will use this physics to establish the crucial connection between force - lifetime - and chemistry for a single molecular bond.\n\n### 1.1.2 Biomolecular complexity and role for dynamic force spectroscopy\n\nWhat's subtle and daunting about biomolecular bonds is that the interactions are usually made up of many atomic scale bonds distributed over diverse regions of large molecules - i.e. not localized to a single amino acid", null, "Fig. 1.1. Determined by X-ray diffraction, a vertex (stick) representation shows the important 19 amino acid tip (SGP) of the mucin P-selectin glycoprotein ligand-1 (PSGL-1) in its bound state conformation superposed on the van der Waals surface of the outer lectin domain of the cell membrane receptor P-selectin (taken from Somers et al.).\n\nor other small molecular residue. As an illustration, Figure 1.1 shows the structural complex obtained recently for the reactive tip of a glycosylated protein ligand bound to the outer protein domain of its cell surface receptor called P (for platelet)-selectin. Essential in immune function, this interaction enables white blood cells to transiently stick and carry out a rolling patrol of the vascular wall under high shear stress in blood vessels. Referred to as a carbohydrate-protein bond, this ligand-receptor interaction is comprised of several sugar-peptide and sulfopeptide-peptide hydrogen bonds plus a metal-ion coordination bond spread over many residues of both the receptor lectin domain and the tip of the large glycoprotein ligand. Even so, association and dissociation of this ligand-receptor complex in solution seems to exhibit first order kinetics as expected for an ideal \"bond\", which is modeled as a bound state confined by a single energy barrier. Furthermore, from force probe tests, we will also see that a single barrier dominates the kinetics of dissociation over many orders of magnitude in time scale for this complex interaction. Yet, probe tests of other species of the same class of bonds reveal that a sequence of barriers impedes dissociation in different ranges of force. Thus, the landscape of energy barriers in a complex interaction can produce highly specialized dynamic responses in molecular reactions and linkages under stress, which is a principal design requirement for chemistry in living systems.
An important step towards understanding these designs lies in probing the relation between force - time - chemistry at the level of single molecules.\n\nIn this tutorial, our aim is to show that measuring forces to pull apart single biomolecular complexes over an enormous span of time scales provides a spectroscopic method to explore the energy landscape of barriers which govern dissociation kinetics. By landscape, we mean the free energy profile along a preferential pathway (or pathways) followed most often through configuration space during dissociation; other pathways involve significantly greater energy, which makes their traverse extremely rare. Thus, an energy landscape is viewed to start from a minimum representing the bound state and rise over one or more peaks with intervening valleys to reach the dissociated state as illustrated schematically in Figure 1.2. The peaks are local saddle points in the energy surface and define barriers to kinetics. Because of the thermal (Boltzmann) weighting of the energy barriers, the most prominent barrier is the dominant impedance to kinetics with little retardation to dissociation from passage of lower barriers. When a bond or molecular complex is pulled apart under a ramp of force in a probe test, the barriers diminish in time and thus unbonding force depends on rate of loading (= force/time). As a consequence of diminishing barrier heights, the most frequent forces for unbonding plotted on a scale of log(loading rate) yield a dynamic spectrum that images the hierarchy of energy barriers traversed along the force-driven pathway [7,10]. Thus, the method of dynamic force spectroscopy (DFS) probes the inner world of molecular interactions.\n\n### 1.1.3 Biochemical and mechanical perspectives of bond strength\n\nGiven the conceptual energy landscape shown in Figure 1.2, it is useful to compare traditional ways of characterizing the strength of chemical bonds. Starting with biochemistry, the scale for bond strength is usually taken as the free energy difference E_0 between bound and free states - or \"binding\" energy. In an ideal-dilute solution, the binding energy sets the equilibrium partition of \"bound-to-free\" constituents, i.e. the mole fraction of bound complexes v_w[AB] divided by the product of free reactants v_w[A] v_w[B]: K_eq = v_w[AB]/(v_w[A] v_w[B]) = [AB]/(v_w[A][B]),", null, "Fig. 1.2. Conceptual energy profile along a scalar reaction coordinate. In this hypothetical example, the energy landscape exhibits a primary bound state and a secondary metastable state punctuated by two energy barriers. From the perspective of biochemistry, only the outer barrier E_b and binding energy E_0 = E_b − ΔE_b are important in bond formation and dissociation.\n\nwhere concentrations [number/volume] are converted to a scale of mole fraction by the partial molar volume of water v_w (e.g. ~ one liter per 55 Moles). At equilibrium, the ratio K_eq = [AB]/{v_w[A][B]} expresses the thermodynamic balance, k_B T log(K_eq) = E_0, between reduction in (mixing) entropy and gain in free energy from binding.
The important dynamical corollary to thermodynamic equilibrium is \"detailed balance\" where the number of complexes that form per unit time K_on [A][B] must exactly equal the number that dissociate per unit time K_off [AB]. The \"on\" rate K_on (M^-1 time^-1) and \"off\" rate K_off (time^-1) are empirically-defined parameters.\n\nAs such, the connection between equilibrium thermodynamics and phenomenological kinetics is through what's called the dissociation constant K_D = K_off/K_on, which has units of concentration and is inversely related to the equilibrium constant, i.e. 1/K_eq = v_w K_D.\n\nConsistent with the label, lowering the concentration of reactants below K_D leads to complex dissociation and increasing concentration promotes complex formation. Introduced by Van't Hoff and Arrhenius in the late 19th century, the long-held phenomenological view is that kinetic rates start with primitive-attempt rates driven by molecular excitations but then are discounted dramatically by an inverse exponential (Arrhenius) dependence on a dominant energy barrier. For instance, once the reacting molecules come together rapidly by diffusion, the entrance barrier ΔE_b shown in Figure 1.2 would retard association on approach to the bound state, i.e. K_on ~ exp(−ΔE_b/k_B T). Likewise, given that inner barriers are more than k_B T lower, the height E_b of the paramount-outer barrier relative to the primary minimum would govern the rate of dissociation, i.e. K_off ~ exp(−E_b/k_B T). Since the difference E_b − ΔE_b in energy barriers equals the binding energy E_0, the ratio of kinetic rates is consistent with \"detailed balance\" at equilibrium. However, what's clearly missing in this biochemical perspective of bond strength is force!", null, "Fig. 1.3. Schematic of the force (solid curve) required to displace molecular components of a bond given the conceptual energy landscape in Figure 1.2. From the viewpoint of mechanics, the complex should become unstable and dissociate from the location of the steepest energy gradient just beyond the primary minimum. The remainder of the energy landscape (dotted curve) would appear not to affect bond strength.\n\nIn contrast to the biochemical perspective, classical mechanics is precise in its prescription of bond force. Specifically, the force required to separate interacting molecules is the gradient in energy along the landscape (or interaction potential). The subtlety is that not all positions along the energy profile are accessible as we apply increasing force to pull molecules apart. As sketched in Figure 1.3, only regions of the energy contour with monotonically-increasing gradients would be stable under rising force, which could leave major portions of the landscape as \"virtual\" or unmapped by a pulling force. Once force exceeds the steepest gradient in energy (just beyond the primary minimum in Fig. 1.2), the molecules would jump apart. Hence, from a mechanical perspective, bond strength is independent of barrier heights or features of the landscape other than the maximum gradient.
Here, what's missing is time and temperature!\n\n### 1.1.4 Relevant scales for length, force, energy, and time\n\nAt the beginning, it is important to introduce the relevant length, force, energy, and time scales appropriate to measurements of single bond properties. The increment of length is obviously the size of a small molecule, which is taken as a nanometer (about three water molecules end-to-end). One nanometer is comparable to the mean spacing between molecules at a concentration of one mole/liter and five hundred-fold smaller than the wavelength of green light. Next, the characteristic scale of force needed to speed up dissociation and quickly break weak-noncovalent bonds is a piconewton (pN). One piconewton is about one ten-billionth of a gram weight (10^-10 gm) or ten thousand-fold smaller than can be measured with an analytical microbalance. The product of length and force scales reveals the appropriate scale for energy - thermal energy k_B T - which is ~4 pN·nm at biological temperatures (~300 K), better known as ~0.6 kcal/mole for Avogadro's number (~6 × 10^23) of molecules.\n\nTime scale is a much more complicated issue. At the atomic level, the time scale for excitations is typically 10^-15 s, comparable to the period of the light photons emitted or absorbed in atomic transitions. Much longer, however, kinetics in vacuum or gas phase reactions are theorized to start at an attempt frequency defined by thermal energy over Planck's constant, k_B T/h ~ 10^13/s. This frequency characterizes thermally-driven transitions in a quantum oscillator model of chemical dissociation as developed by Eyring. However, in condensed liquids or inside compact structures like proteins, kinetics are slowed significantly by dissipative collisions between and within the molecular components as well as with the myriad of other molecules in the surrounding solvent. For example, an instantaneous impulse of momentum from a thermal collision in water will die out on a time scale of ~10^-12 s or less, as set by the ratio of damping to molecular inertia. Because of damping, many impulses are needed to separate molecular components over a distance comparable to the scale of bond length even in the absence of a bonding interaction.\n\nHence, as will be shown next, kinetics in the overdamped world of biomolecular interactions begin on a time scale set by a diffusive relaxation time for the bond. This relaxation time is on the order of 10^-9 s, which means that attempt frequencies for unbonding in liquids are four orders of magnitude slower than predicted by vacuum theory. The corresponding attempt frequency of ~10^9/s is then diminished many orders of magnitude by bond chemistry to reach laboratory kinetic rates as slow as 1/month or slower for dissociation of weak biomolecular interactions - i.e. a range of more than sixteen orders of magnitude in time scale! Most important, this enormous span in time scale corresponds to breakage forces that range from the maximum gradient in the molecular interaction potential (~nN) under nanosecond detachment down to zero force under detachment slower than the spontaneous dissociation rate 1/t_off.\n\n### 1.2 Brownian kinetics in condensed liquids: Old-time physics\n\nTo make the connection between force - time - and chemistry, we need to review the physics that underlies kinetics in a liquid environment.
Motivated by Einstein's theory of Brownian motion, these well-known developments take advantage of the huge gap in time scale that separates rapid thermal impulses in liquids (<10^-12 s) from slow processes in laboratory measurements. Three equivalent formulations describe molecular kinetics in an overdamped environment (see, for example, N.G. van Kampen: Stochastic Processes in Physics and Chemistry). The first is a nanoscopic description where molecules behave as particles with instantaneous positions or states x(t) governed by an overdamped Langevin equation of motion,\n\ndx/dt = [f + δf]/ζ. (1.1)\n\nChanges in state are driven by instantaneous force scaled by the mobility of states or inverse of the damping coefficient ζ. The deterministic part of the force (f = −∇E + f_ext) includes the local gradient in molecular interaction potential E(x) plus the applied external force f_ext. To this is added a random-uncorrelated force δf that embodies the many body collisions associated with the thermal environment. These random impulses are governed by the fluctuation-dissipation theorem, ⟨δf²⟩Δt ~ k_B T ζ. (Einstein's great insight was to recognize that the average mechanical energy imparted by thermal impulses had to equal thermal energy, i.e. the ensemble-average integral of mechanical power ⟨∫_Δt δf · δv dt′⟩ = k_B T. The assumption of overdamped motions δv = δf/ζ then yields the autocorrelation relation that governs force fluctuations.) The nanoscopic view can also be described by a stochastic process, which has become the foundation for an important computational technique - Brownian dynamics or dissipative Monte-Carlo simulations (referred to by its creators as \"smart Monte-Carlo\"). In this description, the likelihood P(x + Δx, t + Δt | x, t) that a state x(t) will evolve to a new state x + Δx over a time increment Δt is the product of the equilibrium (long-time) Boltzmann weight for the step and a diffusive-Gaussian weight for dynamics,\n\nP(x + Δx, t + Δt | x, t) ∝ exp[−ΔE/2k_B T] exp[−(Δx)²/4DΔt], (1.2)\n\nwhere ΔE is the change in total energy E(x) − f_ext·x over the step, and the diffusivity of states D is taken here to be a constant given by the Einstein-Stokes relation, D = k_B T/ζ. Finally, on time scales that include many thermal impulses (~10^-9 s and longer), the overdamped dynamics can be cast in a continuum theory where the density of states ρ(x, t) at location x and time t evolves according to the Smoluchowski transport equation,\n\n∂ρ/∂t = −∇ · J (1.3)\n\nwith the flux of states J = (f_ext − ∇E)ρ/ζ − D∇ρ defined by force-driven convection plus the diffusive gradient. Although each description illuminates different features of kinetics in a dissipative environment, Kramers [16,18] demonstrated that Smoluchowski transport can be used to predict the rate for thermally-activated escape from a deeply bound state.\n\n### 1.2.1 Two-state transitions in a liquid\n\nTo illustrate the important features of chemical kinetics in liquids and the utility of Kramers approach, we begin by examining two-state transitions with an energy landscape modelled by two deep energy minima separated by an intervening barrier (Fig. 1.4). In this 1-D abstraction, a scalar coordinate \"x\" is assumed to map the transition pathway over a barrier at energy E_ts relative to the deepest minimum.
Following Kramers, rates of transition between these two states are approximated by stationary fluxes (constant #/time) between the states under appropriate boundary conditions,\n\nJ = −D exp[−E(x)/k_B T] (d/dx){ρ exp[E(x)/k_B T]}. (1.4)\n\nAgain treating the diffusivity or mobility of states as locally constant, integration of the stationary flux relates flux to the end-state densities (ρ_1, ρ_2),\n\nJ = D{ρ_1 exp(E_1/k_B T) − ρ_2 exp(E_2/k_B T)} / ∫ dx exp[E(x)/k_B T]. (1.5)\n\nDirectional rates of transition ν_{1→2} and ν_{2→1} between the two states are then found by starting with all states essentially at \"1\" or \"2\" and an absorbing boundary at the final state \"2\" or \"1\" (i.e. ρ_2 = 0 or ρ_1 = 0). This leads to the expressions for forward and reverse rates of transition,\n\nν_{1→2} ≈ (D/L_ts) exp[−(E_ts − E_1)/k_B T] and ν_{2→1} ≈ (D/L_ts) exp[−(E_ts − E_2)/k_B T]. (1.6)\n\nEnergy-weighted, the major contribution to the pathway integral in equation (1.5) arises local to the transition state and defines a length scale", null, "Fig. 1.4. Conceptual energy landscape for a two-state transition.\n\nfor barrier width, L_ts = ∫ dx exp[(E(x) − E_ts)/k_B T]. As expected, \"detailed balance\" (ν_{1→2} ρ_1 = ν_{2→1} ρ_2) yields the ratio (ρ_1/ρ_2)_∞ = exp[(E_2 − E_1)/k_B T] required for densities of states at long times.\n\n### 1.2.2 Kinetics of first-order reactions in solution\n\nAnother revealing application of Kramers theory is to bimolecular reactions (A + B ⇌ AB) in solution. Here, we imagine that a 1-D density ρ_a (#/length) of reactant (e.g. A) exchanges with the bound state (reactant B) at one end of a scalar coordinate x as sketched in Figure 1.5. As such, the reverse rate in equation (1.6) predicts the rate of capture by the attractive potential, which may involve passage of an entrance barrier ΔE_b like that sketched in Figure 1.5. To connect with the solution in 3-D beyond the reaction pathway, the 1-D density ρ_a is modelled as the product of solution concentration c_a (number/volume) and the effective cross section of the reactive site, i.e. ρ_a ≈ 4π x_b² c_a. In this way, the bimolecular \"on\" rate in solution is found from the rate of capture per concentration of reactant A, i.e. K_on = ν_cap/c_a. Since barrier width L_ts will be comparable to barrier location x_b, the \"on\" rate in solution can be approximated by ν_cap ≈ D ρ_a exp[−ΔE_b/k_B T]/L_ts,", null, "Fig. 1.5. Conceptual energy landscape for capture and release of components in solution.", null, "" ]
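Equation (1.1) can be simulated directly. The sketch below is our illustration (not from the tutorial): it integrates the overdamped Langevin equation with an Euler-Maruyama step in a hypothetical double-well landscape, drawing the random force with variance 2 k_B T ζ/Δt per step so that the displacement statistics reproduce the diffusivity D = k_B T/ζ:

```python
import numpy as np

def langevin_trajectory(grad_E, f_ext=0.0, zeta=1.0, kBT=4.1,
                        dt=1e-4, steps=100_000, x0=0.0, seed=0):
    """Euler-Maruyama integration of eq. (1.1):
    dx/dt = (f_ext - dE/dx + delta_f)/zeta, with a Gaussian random force
    delta_f whose per-step variance gives <dx_random^2> = 2 (kBT/zeta) dt."""
    rng = np.random.default_rng(seed)
    sigma_f = np.sqrt(2.0 * kBT * zeta / dt)  # std of the random force
    x = np.empty(steps)
    x[0] = x0
    for i in range(1, steps):
        f = f_ext - grad_E(x[i - 1]) + sigma_f * rng.standard_normal()
        x[i] = x[i - 1] + f * dt / zeta
    return x

# Hypothetical bistable potential E(x) = (x^2 - 1)^2, so dE/dx = 4x(x^2 - 1):
traj = langevin_trajectory(lambda x: 4.0 * x * (x * x - 1.0))
```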
[ null, "https://www.europeanmedical.info/motor-proteins/images/8984_52_92.jpg", null, "https://www.europeanmedical.info/motor-proteins/images/8984_52_93.jpg", null, "https://www.europeanmedical.info/motor-proteins/images/8984_52_94.jpg", null, "https://www.europeanmedical.info/motor-proteins/images/8984_52_95.jpg", null, "https://www.europeanmedical.info/motor-proteins/images/8984_52_97.jpg", null, "https://www.europeanmedical.info/images/downloads/eJw9ykEKgCAQAMDfeFQJ1Aqkp4S5Sy6lK2X4_ejSZU6TWquzUp0O6rhp7WRmZhk5K-BeTg6w5lDCjpe6sYCsqS4hNuLiK8X2XCj-SOCNdXayEYyxOIrkB6NF_3wB8IUkQg.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89632845,"math_prob":0.9576176,"size":22588,"snap":"2020-24-2020-29","text_gpt3_token_len":4832,"char_repetition_ratio":0.13372299,"word_repetition_ratio":0.09772728,"special_character_ratio":0.20205419,"punctuation_ratio":0.092334494,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9582966,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-26T16:29:38Z\",\"WARC-Record-ID\":\"<urn:uuid:e4de39e7-a415-484b-90c7-7a823a672cad>\",\"Content-Length\":\"44375\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ece8f705-5617-40cc-9f2e-2460d0dca5b2>\",\"WARC-Concurrent-To\":\"<urn:uuid:4029fbf3-948f-4c89-ac1c-d39e5c2938c5>\",\"WARC-IP-Address\":\"104.28.19.15\",\"WARC-Target-URI\":\"https://www.europeanmedical.info/motor-proteins/dynamic-force-spectroscopy-1.html\",\"WARC-Payload-Digest\":\"sha1:LZ7SYJHR3UV6WNWBHPLZKD3E44RP3CL3\",\"WARC-Block-Digest\":\"sha1:IRXRTGRRN35SRSGGDOPCZPBIPZWJ746G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347391277.13_warc_CC-MAIN-20200526160400-20200526190400-00514.warc.gz\"}"}
https://math.stackexchange.com/questions/3033437/how-do-generators-of-a-group-work
[ "# How do generators of a group work?\n\n$$G$$ is a group, $$H$$ is a subgroup of $$G$$, and $$[G:H]$$ stands for the index of $$H$$ in $$G$$ in the following example:\n\nLet $$G=S_3$$, $$H=\\left<(1,2)\\right>$$. Then $$[G:H]=3$$.\n\nI know the definition of group generators: A set of generators $$(g_1,...,g_n)$$ is a set of group elements such that possibly repeated application of the generators on themselves and each other is capable of producing all the elements in the group.\n\nWhat does the individual elements of $$H=\\left<(1,2)\\right>$$ look like? Any help would be greatly aprciated.\n\nP.S. I know how to find the index when the groups don’t involve a group generator, the thing I need help with is understanding the group generator.\n\n• What's the actual question here? – Lord Shark the Unknown Dec 10 '18 at 4:18\n• My question is more or less this: What does the individual elements of $H$ look like? – AMN52 Dec 10 '18 at 4:21\n• $H$ must have the identity element. It must also have the element $(1\\ 2)$. – Lord Shark the Unknown Dec 10 '18 at 4:22\n• $(1,2)$ has order 2 so it's the only non identity element in $H$. – Justin Stevenson Dec 10 '18 at 4:22\n• Why does $H$ only contain $(1,2)$ and the identity element? – AMN52 Dec 10 '18 at 4:25\n\n$$(12)$$ is a transposition. $$(12)^{-1}=(12)$$, that is, it's its own inverse. All that can be gotten by taking powers of $$(12)$$ is $$(12)$$ and $$e$$, the identity. (Note: In general, $$\\langle a\\rangle =\\{a^n:n\\in\\mathbb Z\\}$$). Thus $$\\langle (12)\\rangle =\\{(12),e\\}$$, a two element group.\nSince $$\\mid S_3\\mid=6$$, we get $$[S_3:\\langle (12)\\rangle] =3$$.\nBecause $$H$$ is the set of all elements of the form $$(1,2)^n$$ and $$(1,2)^2=e$$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87191916,"math_prob":0.9999758,"size":671,"snap":"2019-13-2019-22","text_gpt3_token_len":180,"char_repetition_ratio":0.14842579,"word_repetition_ratio":0.0,"special_character_ratio":0.28166914,"punctuation_ratio":0.15333334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99995863,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-24T07:40:46Z\",\"WARC-Record-ID\":\"<urn:uuid:43ec6dbc-65c4-4b4e-8c4d-4fad6282b7c6>\",\"Content-Length\":\"141262\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3ebbd0db-0643-4c14-94bf-b469453c0170>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b9e37fd-4845-4058-aa1a-5ed799185e3f>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3033437/how-do-generators-of-a-group-work\",\"WARC-Payload-Digest\":\"sha1:ABP3ZGDYYIWMDZ2CLPZZEJECNJ5IVLGH\",\"WARC-Block-Digest\":\"sha1:PIGGRQZ4QCBQ752SH5GODCFZJFOM3STY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912203378.92_warc_CC-MAIN-20190324063449-20190324085449-00524.warc.gz\"}"}
https://simplywall.st/stocks/us/food-beverage-tobacco/nyse-mkc/mccormick/news/calculating-the-fair-value-of-mccormick-company-incorporated
[ "# Calculating The Fair Value Of McCormick & Company, Incorporated (NYSE:MKC)\n\nBy\nSimply Wall St\nPublished\nNovember 25, 2021\n\nToday we'll do a simple run through of a valuation method used to estimate the attractiveness of McCormick & Company, Incorporated (NYSE:MKC) as an investment opportunity by taking the forecast future cash flows of the company and discounting them back to today's value. One way to achieve this is by employing the Discounted Cash Flow (DCF) model. Models like these may appear beyond the comprehension of a lay person, but they're fairly easy to follow.\n\nCompanies can be valued in a lot of ways, so we would point out that a DCF is not perfect for every situation. For those who are keen learners of equity analysis, the Simply Wall St analysis model here may be something of interest to you.\n\nSee our latest analysis for McCormick\n\n### What's the estimated valuation?\n\nWe're using the 2-stage growth model, which simply means we take in account two stages of company's growth. In the initial period the company may have a higher growth rate and the second stage is usually assumed to have a stable growth rate. To start off with, we need to estimate the next ten years of cash flows. Where possible we use analyst estimates, but when these aren't available we extrapolate the previous free cash flow (FCF) from the last estimate or reported value. We assume companies with shrinking free cash flow will slow their rate of shrinkage, and that companies with growing free cash flow will see their growth rate slow, over this period. We do this to reflect that growth tends to slow more in the early years than it does in later years.\n\nGenerally we assume that a dollar today is more valuable than a dollar in the future, and so the sum of these future cash flows is then discounted to today's value:\n\n#### 10-year free cash flow (FCF) estimate\n\n 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 Levered FCF (\\$, Millions) US\\$829.5m US\\$925.0m US\\$852.0m US\\$899.0m US\\$914.0m US\\$930.1m US\\$947.0m US\\$964.6m US\\$982.8m US\\$1.00b Growth Rate Estimate Source Analyst x6 Analyst x5 Analyst x1 Analyst x1 Est @ 1.67% Est @ 1.76% Est @ 1.82% Est @ 1.86% Est @ 1.89% Est @ 1.91% Present Value (\\$, Millions) Discounted @ 5.5% US\\$787 US\\$832 US\\$726 US\\$727 US\\$701 US\\$676 US\\$653 US\\$630 US\\$609 US\\$588\n\n(\"Est\" = FCF growth rate estimated by Simply Wall St)\nPresent Value of 10-year Cash Flow (PVCF) = US\\$6.9b\n\nAfter calculating the present value of future cash flows in the initial 10-year period, we need to calculate the Terminal Value, which accounts for all future cash flows beyond the first stage. The Gordon Growth formula is used to calculate Terminal Value at a future annual growth rate equal to the 5-year average of the 10-year government bond yield of 2.0%. We discount the terminal cash flows to today's value at a cost of equity of 5.5%.\n\nTerminal Value (TV)= FCF2031 × (1 + g) ÷ (r – g) = US\\$1.0b× (1 + 2.0%) ÷ (5.5%– 2.0%) = US\\$29b\n\nPresent Value of Terminal Value (PVTV)= TV / (1 + r)10= US\\$29b÷ ( 1 + 5.5%)10= US\\$17b\n\nThe total value, or equity value, is then the sum of the present value of the future cash flows, which in this case is US\\$24b. The last step is to then divide the equity value by the number of shares outstanding. Relative to the current share price of US\\$85.5, the company appears about fair value at a 5.0% discount to where the stock price trades currently. 
The assumptions in any calculation have a big impact on the valuation, so it is better to view this as a rough estimate, not precise down to the last cent.\n\n### The assumptions\n\nNow the most important inputs to a discounted cash flow are the discount rate and, of course, the actual cash flows. Part of investing is coming up with your own evaluation of a company's future performance, so try the calculation yourself and check your own assumptions. The DCF also does not consider the possible cyclicality of an industry, or a company's future capital requirements, so it does not give a full picture of a company's potential performance. Given that we are looking at McCormick as potential shareholders, the cost of equity is used as the discount rate, rather than the cost of capital (or weighted average cost of capital, WACC) which accounts for debt. In this calculation we've used 5.5%, which is based on a levered beta of 0.800. Beta is a measure of a stock's volatility, compared to the market as a whole. We get our beta from the industry average beta of globally comparable companies, with an imposed limit between 0.8 and 2.0, which is a reasonable range for a stable business.\n\nValuation is only one side of the coin in terms of building your investment thesis, and it is only one of many factors that you need to assess for a company. It's not possible to obtain a foolproof valuation with a DCF model. Instead the best use for a DCF model is to test certain assumptions and theories to see if they would lead to the company being undervalued or overvalued. For example, changes in the company's cost of equity or the risk free rate can significantly impact the valuation. For McCormick, we've put together three relevant factors you should further research:\n\n1. Risks: Consider for instance, the ever-present spectre of investment risk. We've identified 1 warning sign with McCormick, and understanding this should be part of your investment process.\n2. Future Earnings: How does MKC's growth rate compare to its peers and the wider market? Dig deeper into the analyst consensus number for the upcoming years by interacting with our free analyst growth expectation chart.\n3. Other Solid Businesses: Low debt, high returns on equity and good past performance are fundamental to a strong business. Why not explore our interactive list of stocks with solid business fundamentals to see if there are other companies you may not have considered!" ]
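The two-stage arithmetic above is easy to check in code. The following short Python sketch (our illustration using the article's published figures; it is not Simply Wall St's model code) reproduces the stage-1 present value, the Gordon Growth terminal value, and the total:

```python
# Two-stage DCF using the article's inputs (values in US$ millions).
fcf = [829.5, 925.0, 852.0, 899.0, 914.0, 930.1, 947.0, 964.6, 982.8, 1000.0]
r = 0.055   # cost of equity (discount rate)
g = 0.020   # long-run growth rate (10-year government bond proxy)

pv_stage1 = sum(c / (1 + r) ** t for t, c in enumerate(fcf, start=1))
tv = fcf[-1] * (1 + g) / (r - g)   # Gordon Growth terminal value
pv_tv = tv / (1 + r) ** 10         # discount terminal value back 10 years

print(round(pv_stage1), round(tv), round(pv_tv), round(pv_stage1 + pv_tv))
# ~6929, ~29143, ~17061, ~23990 -> roughly US$6.9b + US$17b = US$24b
```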
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9167534,"math_prob":0.9416445,"size":6812,"snap":"2022-05-2022-21","text_gpt3_token_len":1669,"char_repetition_ratio":0.11119272,"word_repetition_ratio":0.0017226529,"special_character_ratio":0.2482384,"punctuation_ratio":0.09940209,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95703083,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-18T00:48:05Z\",\"WARC-Record-ID\":\"<urn:uuid:064abf11-4c0e-4eea-a27b-726848999c19>\",\"Content-Length\":\"94031\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a1d6be14-5aca-42e9-af69-4886b32c6f23>\",\"WARC-Concurrent-To\":\"<urn:uuid:9dce083e-8efe-4092-a731-efa53aad86e2>\",\"WARC-IP-Address\":\"172.66.41.27\",\"WARC-Target-URI\":\"https://simplywall.st/stocks/us/food-beverage-tobacco/nyse-mkc/mccormick/news/calculating-the-fair-value-of-mccormick-company-incorporated\",\"WARC-Payload-Digest\":\"sha1:PTUECW6HX4YMJFT5HSDPUZQ6OOPBWNPR\",\"WARC-Block-Digest\":\"sha1:5PHKVLJ77FPNVFV7UDSTIWLWUYTYDENX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662520936.24_warc_CC-MAIN-20220517225809-20220518015809-00131.warc.gz\"}"}
https://oeis.org/A065902/internal
[ "The OEIS Foundation is supported by donations from users of the OEIS and by a grant from the Simons Foundation.", null, "Hints (Greetings from The On-Line Encyclopedia of Integer Sequences!)\n A065902 Smallest prime p such that n is a solution mod p of x^4 = 2, or 0 if no such prime exists. 5\n\n%I\n\n%S 7,79,127,7,647,2399,23,937,4999,14639,1481,28559,19207,23,31,47,73,\n\n%T 18617,79999,194479,117127,5711,165887,73,4663,113,233,707279,47,\n\n%U 40153,524287,191,167,257,439,267737,45329,2313439,182857,2825759,1555847\n\n%N Smallest prime p such that n is a solution mod p of x^4 = 2, or 0 if no such prime exists.\n\n%C Solutions mod p are represented by integers from 0 to p-1. The following equivalences holds for n > 1: There is a prime p such that n is a solution mod p of x^4 = 2 iff n^4 - 2 has a prime factor > n; n is a solution mod p of x^4 = 2 iff p is a prime factor of n^ 4 - 2 and p > n. n^4 - 2 has at most three prime factors > n, so these factors are the only primes p such that n is a solution mod p of x^4 = 2. The first zero is at n = 1689 (cf. A065903 ). For n such that n^4 - 2 has one resp. two resp. three prime factors > n; cf. A065904 resp. A065905 resp. A065906.\n\n%F If n^4 - 2 has prime factors > n, then a(n) = smallest of these prime factors, else a(n) = 0.\n\n%e a(16) = 31, since 16 is a solution mod 31 of x^4 = 2 and 16 is not a solution mod p of x^4 = 2 for primes p < 31. Although 16^4 = 2 (mod 7), prime 7 is excluded because 7 < 16 and 16 = 2 (mod 7).\n\n%o (PARI): a065902(m) = local(n,f,a,j); for(n = 2,m,f = factor(n^4-2); a = matsize(f); j = 1; while(f[j,1]< = n&&j<a,j++); print1(if(f[j,1]>n,f[j,1],0),\",\")) a065902(45)\n\n%Y Cf. A040098, A065903, A065904, A065905, A065906.\n\n%K nonn\n\n%O 2,1\n\n%A _Klaus Brockhaus_, Nov 28 2001\n\nLookup | Welcome | Wiki | Register | Music | Plot 2 | Demos | Index | Browse | More | WebCam\nContribute new seq. or comment | Format | Style Sheet | Transforms | Superseeker | Recent\nThe OEIS Community | Maintained by The OEIS Foundation Inc.\n\nLast modified June 15 15:16 EDT 2021. Contains 345049 sequences. (Running on oeis4.)" ]
[ null, "https://oeis.org/banner2021.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84314215,"math_prob":0.9964039,"size":1455,"snap":"2021-21-2021-25","text_gpt3_token_len":590,"char_repetition_ratio":0.15988973,"word_repetition_ratio":0.14130434,"special_character_ratio":0.5402062,"punctuation_ratio":0.21615201,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99838555,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-15T20:12:27Z\",\"WARC-Record-ID\":\"<urn:uuid:55e71a55-b990-48b0-ab7a-b5a2645c1a27>\",\"Content-Length\":\"8708\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:69e902d9-bf3b-43c3-8738-43f8461104c5>\",\"WARC-Concurrent-To\":\"<urn:uuid:0038e204-d55d-44d4-ab8e-006b23be8253>\",\"WARC-IP-Address\":\"104.239.138.29\",\"WARC-Target-URI\":\"https://oeis.org/A065902/internal\",\"WARC-Payload-Digest\":\"sha1:2CG4Y7JPN2GDHAVJYYRT6ACASRPTQI6Z\",\"WARC-Block-Digest\":\"sha1:D64HHNQQGYTR5WOHF7LA3I3GM2PAVDP5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487621519.32_warc_CC-MAIN-20210615180356-20210615210356-00520.warc.gz\"}"}
https://codegolf.stackexchange.com/questions/200122/impress-donald-knuth-by-counting-polyominoes-on-the-hyperbolic-plane
[ "# Impress Donald Knuth by counting polyominoes on the hyperbolic plane\n\nThis challenge is inspired by a talk about Schläfli symbols, etc that I gave in a Geometry seminar. While I was putting together this challenge, I saw that Donald Knuth himself was interested in (some subset of) this problem. In October 2016, he commented on a related OEIS sequence:\n\nIf [the OEIS author] is wrong about the hyperbolic {4,5} pentominoes, the next number is probably mistaken too. I don't have [time] right now to investigate further.\n\nSuccessful completion of this challenge will have you investigating something that Donald Knuth might have investigated if only he had more time, and will result in new additions (and perhaps a rare correction) to the On-Line Encyclopedia of Integer Sequences.\n\n# Challenge\n\nThis challenge will have you create a function that counts \"free polyforms\" on the hyperbolic plane. In particular, you will write a function that takes three positive integer parameters p, q, and n and counts the number of $$\\n\\$$-cell \"free polyforms\" on the tiling of the hyperbolic plane given by the Schläfli symbol $$\\\\{p,q\\}\\$$.\n\nShortest code wins.\n\n# Definitions\n\nThe Schläfli symbol $$\\\\{p,q\\}\\$$ describes a tiling of the hyperbolic plane by $$\\p\\$$-gons, where each vertex touches exactly $$\\q\\$$ of the polygons. As an example, see the Wikipedia page for the $$\\\\{4,5\\}\\$$ tiling that Donald references above.\n\nA free polyform is a collection of regular polygons that meet at their edges, counted up to rotation and reflection.\n\n# Input\n\nYou can assume that the values of p and q which define the tiling indeed describe an actual tiling of the hyperbolic plane. This means that $$\\p \\geq 3\\$$, and\n\n• when $$\\p = 3\\$$, $$\\q \\geq 7\\$$,\n• when $$\\p = 4\\$$, $$\\q \\geq 5\\$$,\n• when $$\\p = 5\\$$, $$\\q \\geq 4\\$$,\n• when $$\\p = 6\\$$, $$\\q \\geq 4\\$$, and\n• when $$\\p \\geq 7\\$$, $$\\q \\geq 3\\$$.\n\n# Data\n\nOEIS sequence A119611 claims that f(4,5,n) = A119611(n), but Donald Knuth disputes the reasoning for the value of $$\\A119611(5)\\$$. (When I counted by hand, I got Knuth's answer, and I've included it in the table below.)\n\n| p | q | n | f(p,q,n)\n+---+---+---+---------\n| 3 | 7 | 1 | 1\n| 3 | 7 | 2 | 1\n| 3 | 7 | 3 | 1\n| 3 | 7 | 4 | 3\n| 3 | 7 | 5 | 4\n| 3 | 7 | 6 | 12\n| 3 | 7 | 7 | 27\n| 3 | 9 | 8 | 82\n| 4 | 5 | 3 | 2\n| 4 | 5 | 4 | 5\n| 4 | 5 | 5 | 16\n| 6 | 4 | 3 | 3\n| 7 | 3 | 1 | 1\n| 7 | 3 | 2 | 1\n| 7 | 3 | 3 | 3\n| 8 | 3 | 3 | 4\n| 9 | 3 | 3 | 4\n\n\nNote: these values are computed by hand, so let me know if you suspect any mistakes.\n\n# Final notes\n\nThe output of this program will result in quite a lot of new, interesting sequences for the OEIS. You are of course free to author any such sequences—but if you're not interested, I'll add the values you compute to the Encylopedia with a link to your answer.\n\n• Is this code-golf or code-challenge? – qwr Feb 26 at 4:20\n• It's a code-golf challenge: the shortest code wins. But I expect that the shortest solution might be quite long, as in this answer to a previous code-golf challenge. – Peter Kagey Feb 26 at 5:06\n• Can you provide a reference implementation or some pseudocode? – S.S. Anne Feb 27 at 21:44\n• I don't know this stuff so I probably won't be able to do it. Can you maybe tell how you did it by hand? – S.S. Anne Feb 27 at 22:06\n• Monomino. Do do do-do-do. 
♫♪ – mbomb007 Mar 3 at 22:09\n\n# GAP and its kbmag package, 711 682 658 bytes\n\nNote that the kbmag package consists not only of GAP code; it also contains C programs that have to be compiled (see the package's README file).\n\nLoadPackage(\"kbmag\");I:=function(p,q,n)local F,H,R,r,s,x,c;F:=FreeGroup(2);s:=F.1;r:=F.2;R:=KBMAGRewritingSystem(F/[s^2,r^p,(s*r)^q]);AutomaticStructure(R);H:=SubgroupOfKBMAGRewritingSystem(R,[r]);AutomaticStructureOnCosets(R,H);x:=w->ReducedCosetRepresentative(R,H,w);c:=function(n,U,S,P)local N,Q,Z;if n=0 then Z:=Set(U,t->Set(U,p->(p/t)));return 1/Size(SetX(Union(Z,Set(Z,Q->Set(Q,q->(MappedWord(q,[s,r],[s,r^-1]))))),[1..p],{Q,i}->Set(Q,q->x(q*r^i))));fi;if P=[]then return 0;fi;N:=P[1];Q:=P{[2..Size(P)]};Z:=Filtered(Set([1..p],i->x(s*r^i*N)),w->not w in S);return c(n,U,S,Q)+c(n-1,Union(U,[N]),Union(S,Z),Union(Q,Z));end;return c(n,[],[r/r],[r/r]);end;\n\nThis is the result of removing indentation and newlines from this version, and some inlining:\n\nLoadPackage(\"kbmag\");\nI:=function(p,q,n)\nlocal F,G,H,R,r,s,x,c;\nF:=FreeGroup(2);\ns:=F.1;r:=F.2;\nG:=F/[s^2,r^p,(s*r)^q];\nR:=KBMAGRewritingSystem(G);\nAutomaticStructure(R);\nH:=SubgroupOfKBMAGRewritingSystem(R,[r]);\nAutomaticStructureOnCosets(R,H);\nx:=w->ReducedCosetRepresentative(R,H,w);\nc:=function(n,U,S,P)\nlocal N,Q,Z;\nif n=0 then\nZ:=Set(U,t->Set(U,p->(p/t)));\nZ:=Union(Z,Set(Z,Q->Set(Q,q->(MappedWord(q,[s,r],[s,r^-1])))));\nZ:=SetX(Z,[1..p],{Q,i}->Set(Q,q->x(q*r^i)));\nreturn 1/Size(Z);\nfi;\nif P=[]then return 0;fi;\nN:=P[1];Q:=P{[2..Size(P)]};\nZ:=Filtered(Set([1..p],i->x(s*r^i*N)),w->not w in S);\nreturn c(n,U,S,Q)+c(n-1,Union(U,[N]),Union(S,Z),Union(Q,Z));\nend;\nreturn c(n,[],[r/r],[r/r]);\nend;\n\nIf the line containing {Q,i}-> doesn't work, your GAP is too old. You can then replace that line with:\n\nZ:=SetX(Z,[1..p],function(Q,i)return Set(Q,q->x(q*r^i));end);\n\nSeveral of the Set operations could be slightly faster List operations (the improved version at least uses that it is a set for even more golfing and a little speed compensation), but that would cost one byte each time.\n\nAnd yes, Knuth's and your result is confirmed:\n\ngap> Read(\"i.gap\");\n─────────────────────────────────────────────────────────────────────────────\nby Derek Holt (https://homepages.warwick.ac.uk/staff/D.F.Holt/).\nHomepage: https://gap-packages.github.io/kbmag\n─────────────────────────────────────────────────────────────────────────────\ngap> I(4,5,5);\n16\ngap> I(4,5,6);\n55\ngap> I(4,5,7);\n224\ngap> I(4,5,8);\n978\ngap> I(4,5,9);\n4507\ngap> I(4,5,10);\n21430\n\nThe n = 7 computation already takes several minutes. My computations also agree with the other results in the table.\n\n• Pretty awesome! – user9207 Feb 28 at 18:43\n• I'm blown away by this. Even if someone writes something faster for the current bounty, I'll make another one to give to you. – Peter Kagey Feb 28 at 20:42\n• This appears to work for polyhedra and tilings of the plane too? – Peter Kagey Feb 28 at 20:53\n• @PeterKagey Yes, since all the groups are automatic. – Christian Sievers Feb 28 at 21:34" ]
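For intuition about what the GAP program is doing — growing polyforms cell by cell and canonicalizing each one up to the symmetries of the tiling, with cells represented as cosets of ⟨r⟩ in the group ⟨s, r | s², r^p, (sr)^q⟩ — here is the same counting idea in Python for the familiar Euclidean {4,4} case (ordinary free polyominoes). This is only an analogy sketch, not a port of the answer: it does not handle hyperbolic tilings, where the answer's coset-rewriting machinery replaces the integer-coordinate arithmetic used here.\n\ndef canon(cells):\n    # canonical form: origin-normalized, sorted cell tuple, minimized over the\n    # 8 symmetries of the square grid (4 rotations x reflection)\n    forms = []\n    for sx, sy, swap in ((1,1,0),(1,-1,0),(-1,1,0),(-1,-1,0),(1,1,1),(1,-1,1),(-1,1,1),(-1,-1,1)):\n        pts = [(sx*x, sy*y) for x, y in cells]\n        if swap:\n            pts = [(y, x) for x, y in pts]\n        mx = min(x for x, y in pts)\n        my = min(y for x, y in pts)\n        forms.append(tuple(sorted((x - mx, y - my) for x, y in pts)))\n    return min(forms)\n\ndef free_polyominoes(n):\n    shapes = {canon([(0, 0)])}\n    for _ in range(n - 1):\n        grown = set()\n        for s in shapes:\n            for x, y in s:\n                for dx, dy in ((1,0),(-1,0),(0,1),(0,-1)):\n                    c = (x + dx, y + dy)\n                    if c not in s:\n                        grown.add(canon(list(s) + [c]))\n        shapes = grown\n    return len(shapes)\n\nprint([free_polyominoes(n) for n in range(1, 6)])  # [1, 1, 2, 5, 12]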
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86924326,"math_prob":0.977425,"size":2689,"snap":"2020-45-2020-50","text_gpt3_token_len":874,"char_repetition_ratio":0.1113594,"word_repetition_ratio":0.112522684,"special_character_ratio":0.35329118,"punctuation_ratio":0.10166359,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99641466,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T23:55:01Z\",\"WARC-Record-ID\":\"<urn:uuid:0f1adbc5-defb-4b79-9ab1-a097ef633281>\",\"Content-Length\":\"172524\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c1a96993-010b-4017-b6ff-b3ff0d1dc9f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:b6c8c13a-b70c-4aae-9d94-d93b18fba09b>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://codegolf.stackexchange.com/questions/200122/impress-donald-knuth-by-counting-polyominoes-on-the-hyperbolic-plane\",\"WARC-Payload-Digest\":\"sha1:NIFRH2MG52ZCK4J34SIVBGR6ETIDD55V\",\"WARC-Block-Digest\":\"sha1:MIRNEYMREOYSYAQ7YNSRXGJKGTU67674\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141194634.29_warc_CC-MAIN-20201127221446-20201128011446-00641.warc.gz\"}"}
https://www.arxiv-vanity.com/papers/1903.08192/
[ "# [\n\n###### Abstract\n\nWe study the problem of robust linear regression with response variable corruptions. We consider the oblivious adversary model, where the adversary corrupts a fraction of the responses in complete ignorance of the data. We provide a nearly linear time estimator which consistently estimates the true regression vector, even with fraction of corruptions. Existing results in this setting either don’t guarantee consistent estimates or can only handle a small fraction of corruptions. We also extend our estimator to robust sparse linear regression and show that similar guarantees hold in this setting. Finally, we apply our estimator to the problem of linear regression with heavy-tailed noise and show that our estimator consistently estimates the regression vector even when the noise has unbounded variance (e.g., Cauchy distribution), for which most existing results don’t even apply. Our estimator is based on a novel variant of outlier removal via hard thresholding in which the threshold is chosen adaptively and crucially relies on randomness to escape bad fixed points of the non-convex hard thresholding operation.\n\nR\n\nNear-optimal Consistent Robust Regression]Adaptive Hard Thresholding for\nNear-optimal Consistent Robust Regression \\coltauthor\\NameArun Sai Suggalathanks: Part of the work done while interning at Microsoft Research, India. \\Email\n\\addrCarnegie Mellon University \\AND\\NameKush Bhatia \\Email\n\\addrCarnegie Mellon University \\AND\\NamePrateek Jain \\Email\n\\addrMicrosoft Research, India obust regression, heavy tails, hard thresholding, outlier removal.\n\n## 1 Introduction\n\nWe study robust least squares regression, where the goal is to robustly estimate a linear predictor from data which is potentially corrupted by an adversary. We focus on the setting where response variables are corrupted via an oblivious adversary. Such a setting has numerous applications such as click-fraud in a typical ads system, ratings-fraud in recommendation systems, as well as the less obvious application of regression with heavy tailed noise.\n\nFor the problem of oblivious adversarial corruptions, our goal is to design an estimator that satisfies three key criteria: (a) (statistical efficiency) estimates the optimal solution consistently with nearly optimal statistical rates, (b) (robustness efficiency) allows a high amount of corruption, i.e., fraction of corruptions is , (c) (computational efficiency) has the same or nearly the same computational complexity as the standard ordinary least squares (OLS) estimator. Most existing techniques do not even provide consistent estimates in this adversary model (Bhatia et al.(2015)Bhatia, Jain, and Kar; Nasrabadi et al.(2011)Nasrabadi, Tran, and Nguyen; Nguyen and Tran(2013); Prasad et al.(2018)Prasad, Suggala, Balakrishnan, and Ravikumar; Diakonikolas et al.(2018)Diakonikolas, Kamath, Kane, Li, Steinhardt, and Stewart; Wright and Ma(2010)). Bhatia et al.(2017)Bhatia, Jain, Kamalaruban, and Kar provides statistically consistent and computationally efficient estimator, but requires the fraction of corruptions to be less than a small constant (). Tsakonas et al.(2014)Tsakonas, Jaldén, Sidiropoulos, and Ottersten study Huber-loss based regression to provide nearly optimal statistical rate with nearly optimal fraction of corruptions. 
But their sample complexity is sub-optimal, and more critically, the algorithm has super-linear computational complexity (in terms of number of points) and is significantly slower than the standard least squares estimator.\n\nSo the following is still an open question: “Can we design a linear time consistent estimator for robust regression that allows almost all responses to be corrupted by an oblivious adversary?”\n\nWe answer this question in the affirmative, i.e., we design a novel outlier removal technique that can ensure consistent estimation at nearly optimal statistical rates, assuming Gaussian data and sub-Gaussian noise. Our results hold as long as the number of points is larger than the input dimensionality by logarithmic factors, and they allow a 1 − o(1) fraction of the responses to be corrupted; the fraction of corrupted responses can be pushed even higher at the cost of a slightly worse generalization error rate.\n\nFigure 1: The first two plots show the parameter error (y-axis) of various estimators as we vary the fraction of corruptions α (x-axis) in the robust regression setting; the noise variance is 0 for the first plot and 1 for the second. The plots indicate that AdaCRR is able to tolerate a significantly higher fraction of outliers than most existing methods. The last plot shows parameter error over the number of iterations for robust regression, indicating AdaCRR can be up to 100x faster as compared to Huber regression.\n\nOur algorithm, which we refer to as AdaCRR (to be more precise, AdaCRR is a framework; we study two algorithms instantiated from this framework, namely AdaCRR-FC and AdaCRR-GD, which differ in how they update the iterate), uses a similar technique as Bhatia et al. (2015, 2017), where we threshold out points that we estimate as outliers in each iteration. However, we show that fixed thresholding operators as in Bhatia et al. (2015, 2017) can get stuck at poor fixed-points in the presence of a large number of outliers (see Section 4). Instead, we rely on an adaptive thresholding operator that uses noise in each iteration to avoid such sub-optimal fixed-points. Similar to Bhatia et al. (2015, 2017), AdaCRR-FC solves a standard OLS problem in each iteration, so the overall complexity is O(T · T_OLS), where T is the number of iterations and T_OLS is the time-complexity of an OLS solver. We show that O(log(1/ϵ)) iterations are enough to obtain an ϵ-optimal solution, i.e., the algorithm is almost as efficient as the standard OLS solvers. Our simulations also demonstrate our claim, i.e., we observe that AdaCRR-FC is significantly more efficient than Huber-loss based approaches (Tsakonas et al., 2014) while still ensuring consistency in the presence of a large number of corruptions, unlike existing thresholding techniques (Bhatia et al., 2015, 2017) (see Figure 1).\n\nThe above result requires the number of samples to exceed the full dimension p (up to logarithmic factors), which is prohibitively large for high-dimensional problems. Instead, we study the problem with a sparsity structure on the regression vector (Wainwright, 2009). That is, we study the problem of sparse linear regression with oblivious response corruptions. We provide the first (to the best of our knowledge) consistent estimator for the problem under standard RSC assumptions. 
Similar to the low-d case, we allow a 1 − o(1) fraction of points to be corrupted, while the sample complexity requirement scales only with the sparsity k up to logarithmic factors, where k is the number of non-zero entries in the optimal sparse regression vector. Existing Huber-loss based estimators (Tsakonas et al., 2014) would be difficult to extend to this setting due to the additional non-smooth regularization of the regression vector. Existing hard-thresholding based consistent estimators (Bhatia et al., 2017) marginalize out the regression vector, which is possible only in low-d due to the closed form representation of the least squares solution, and hence do not trivially extend to sparse regression.\n\nFinally, we enhance and apply our technique to the problem of regression with heavy-tailed noise. By treating the tail as oblivious adversarial corruptions, we obtain consistent estimators for a large class of heavy-tailed noise distributions that might not even have well-defined first or second moments. Despite being a well-studied problem, to the best of our knowledge, this is the first such result in this domain of learning with heavy tailed noise. For example, our results provide consistent estimators with Cauchy noise, for which even the mean is not well defined, with rates which are very similar to those of standard sub-Gaussian distributions. In contrast, most existing results (Sun et al., 2018; Hsu and Sabato, 2016) do not even hold for Cauchy noise as they require the variance of the noise to be bounded. Furthermore, existing results mostly rely on the median of means technique (Hsu and Sabato, 2016; Lecué and Lerasle, 2017; Prasad et al., 2018), while we present a novel but natural viewpoint of modeling the tail of the noise as adversarial but oblivious corruptions.\n\n##### Paper Organization.\n\nThe next section presents the problem setup and our main results. Section 3 discusses some of the related works. Section 4 presents our algorithm and discusses why adaptive thresholding is necessary. Our extension to sparse linear regression is presented in Section 6. Section 7 presents our results for the regression with heavy tailed noise problem. We conclude with Section 8. Due to the lack of space, most proofs and experiments are presented in the appendix.\n\n## 2 Problem Setup and Main Results\n\nWe are given n independent data points x_1, …, x_n sampled from a Gaussian distribution N(0, Σ) and their corrupted responses y_1, …, y_n, where\n\n y_i = x_i^T w^* + ϵ_i + b_i^*,    (1)\n\nw^* is the true regression vector, ϵ_i - the white noise - is independent of x_i and is sampled from a sub-Gaussian distribution with parameter σ, and b_i^* is the corruption in the response of x_i. The corruptions are supported on a sparse set S^* = {i : b_i^* ≠ 0}, i.e., |S^*| = αn with α < 1. Also, b^* is selected independently of X and ϵ. Apart from this independence, we do not impose any restrictions on the values of the corruptions added by the adversary. Our goal is to robustly estimate w^* from the corrupted data (X, y). 
In particular, the following are the key criteria in evaluating an estimator's performance:\n\n• Breakdown point: the maximum fraction of corruption α above which the estimator is not guaranteed to recover w^* with small error, even as n → ∞ (Hampel, 1971).\n\n• Statistical rates and sample complexity: We are interested in the generalization error of the estimator and its scaling with problem dependent quantities like n, p, the noise variance, as well as the fraction of corruption α.\n\n• Computational complexity: The number of computational steps taken to compute the estimator. The goal is to obtain nearly linear time estimators similar to the standard OLS solvers.\n\nAs discussed later in the section, our AdaCRR estimator is near optimal with respect to all three criteria above.\n\n##### Heavy-tailed Regression.\n\nWe also study the heavy-tailed regression problem, where the responses are uncorrupted (b_i^* = 0 for all i) but the noise ϵ_i is drawn from a heavy-tailed distribution, such as the Cauchy distribution, which does not even have a bounded first moment. The goal is to design an efficient estimator that provides nearly optimal statistical rates.\n\n##### Notation.\n\nLet X be the matrix whose i-th row is equal to x_i, and let y, ϵ and b^* denote the response, noise and corruption vectors. For any matrix A and subset S of rows, we use A_S to denote the submatrix of A obtained by selecting the rows corresponding to S. Throughout the paper, we denote vectors by bold-faced letters and matrices by capital letters. ∥v∥_A denotes the norm induced by a positive definite matrix A. ∥v∥₀ denotes the L0 norm of v, i.e., the number of non-zero elements in v. a ≲ b means a ≤ Cb for a large enough constant C independent of the problem parameters. We use sG(σ) to denote the set of random variables whose Moment Generating Function (MGF) is bounded by the MGF of N(0, σ²).\n\n### 2.1 Main Results\n\nRobust Regression: For robust regression with oblivious response variable corruptions, we propose the first efficient consistent estimator with a breakdown point approaching 1. That is, {theorem}[Robust Regression] Let (x_i, y_i), i = 1, …, n be observations generated from the oblivious adversary model (1), i.e., y_i = x_i^T w^* + ϵ_i + b_i^*, where x_i ∼ N(0, Σ), ϵ_i is sub-Gaussian with parameter σ, and the corruption vector b^* is selected independently of X and ϵ. Suppose AdaCRR-FC is run for T iterations with an appropriate choice of hyperparameters. Then with high probability, the T-th iterate produced by the AdaCRR-FC algorithm satisfies:\n\n ∥w_T − w^*∥_Σ ≤ Õ( (σ/(1−α)) · √( (p log² n + (log n)³) / n ) ),\n\nfor T at most polylogarithmic in the problem parameters.\n\nRegression with Heavy-tailed Noise: We present our result for regression with heavy-tailed noise. {theorem}[Heavy-tailed Regression] Let (x_i, y_i), i = 1, …, n be observations generated from the linear model y_i = x_i^T w^* + ϵ_i, where x_i ∼ N(0, Σ), and the ϵ_i's are sampled i.i.d. from a distribution with a bounded quantile constant C_{1/δ} (the precise tail condition is stated in Section 7) and are independent of X. Then, for an appropriate choice of hyperparameters, with high probability,
Our empirical results also agree with the theoretical claims, i.e., they show small generalization error for AdaCRR-FC while almost trivial error for several heavy-tailed regression algorithms (see Figure 5).\nc) Similar to robust regression, the estimator is nearly linear in , . Moreover, we can extend our analysis to sparse linear regression with heavy-tailed response noise.\n\n## 3 Related Work\n\nThe problems of robust regression and heavy tailed regression have been extensively studied in the fields of robust statistics and statistical learning theory. We now review some of the relevant works in the literature and discuss their applicability to our setup.\n\n##### Robust Regression.\n\nThe problem of response corrupted robust regression can be written as the following equivalent optimization problems:\n\n (2)\n\nThe problem is NP-hard in general due to it’s combinatorial nature \\citepstuder2012recovery. \\citetrousseeuw1984least introduced the Least Trimmed Squares (LTS) estimator which computes OLS estimator over all subsets of points and selects the best estimator. Naturally, the estimator’s computational complexity is exponential in and is not practical. There are some practical variants like RANSAC \\citepransac but they are mostly heuristics and do not come with strong guarantees.\n\nA number of approaches have been proposed which relax (2) with loss \\citepwright2010dense or Huber loss \\citephuber1973robust. \\citettsakonas2014convergence analyze Huber regression estimator under the oblivious adversary model and show that it tolerates any constant fraction of corruptions, while being consistent. However, their analysis requires samples. \\citetwright2010dense, nasrabadi2011robust also study convex relaxations of (2), albeit in the sparse regression setting. While their estimators tolerate any constant fraction of corruptions, they do not guarantee consistency in presence of white noise. Statistical properties aside, a major drawback of Huber’s M-estimator and other convex relaxation based approaches is that they are computationally expensive due to sublinear convergence rates to the global optimum. Another class of approaches use greedy or local search heuristics to approximately solve the constrained objectives. For example, the estimator of  \\citetbhatia2017consistent uses alternating minimization to optimize objective (2). While this estimator is consistent and converges linearly to the optimal solution, it only tolerates a small fraction of corruptions and breaks down when is greater than a small constant.\n\nAnother active line of research on robust regression has focused on handling more challenging adversary models. One such popular model is the malicious adversary model, where the adversary looks at the data before adding corruptions. Recently there has been a flurry of research on designing robust estimators that are both computationally and statistically efficient in this setting \\citepbhatia2015robust, prasad2018robust, diakonikolas2018sever, klivans2018efficient. While the approach by [Bhatia et al.(2015)Bhatia, Jain, and Kar] is based on an alternating minimization procedure, [Prasad et al.(2018)Prasad, Suggala, Balakrishnan, and Ravikumar] and [Diakonikolas et al.(2018)Diakonikolas, Kamath, Kane, Li, Steinhardt, and Stewart] derive robust regression estimators based on robust mean estimation \\citeplai2016agnostic,diakonikolas2016robust. However, for such an adaptive adversary, we cannot expect to achieve consistent estimator. 
In fact, it is easy to show that in this setting we cannot expect to obtain generalization error better than the order of ασ, where α is the fraction of corruptions and σ² is the noise variance. Furthermore, as we show in our experiments, these techniques fail to recover the parameter vector in the oblivious adversary model when the fraction of corruption is large.\n\n##### Heavy-tailed Regression.\n\nRobustness to heavy-tailed noise distribution is another regression setting that is actively studied in the statistics community. The objective here is to construct estimators which work without the sub-Gaussian distributional assumptions that are typically imposed on the data distribution, and allow it to be a heavy tailed distribution. For the setting where the noise is heavy-tailed with bounded variance, Huber's estimator is known to achieve sub-Gaussian style rates (Fan et al., 2017; Sun et al., 2018). Several other non-convex losses such as Tukey's biweight and Cauchy loss have also been proposed for this setting (see Loh (2017) and references therein). For the case where both the covariates and noise are heavy-tailed, several recent works have proposed computationally efficient estimators that achieve sub-Gaussian style rates (Hsu and Sabato, 2016; Lecué and Lerasle, 2017; Prasad et al., 2018). As noted earlier, all of these results require bounded variance. Moreover, many of the Huber-loss style estimators typically do not have linear time computational complexity. In contrast, our result holds even if only the δ-th moment of the noise is bounded, where δ is an arbitrarily small constant. Furthermore, the estimation algorithm is nearly linear in the number of data points as well as the data dimensionality.\n\n## 4 The AdaCRR Algorithm\n\nIn this section we describe our algorithm AdaCRR (see Algorithm 1) for estimating the regression vector in the oblivious adversary model. At a high level, AdaCRR uses alternating minimization to optimize objective (2). That is, AdaCRR maintains an estimate of the coefficient vector and of the set of corrupted responses, and alternately updates them at every iteration.\n\n##### Updating wt.\n\nGiven any subset of points, w_t is updated using the points in that subset. We study two variants of AdaCRR which differ in how we update w_t. In AdaCRR-FC (Algorithm 2) we perform a fully corrective linear regression step on the selected points. In AdaCRR-GD (Algorithm 3) we take a gradient descent step to update w_t. While these two variants have similar statistical properties, the GD variant is computationally more efficient, especially for large n and p.\n\n##### Updating St.\n\nFor any given w_t, AdaCRR updates the selected set using a novel hard thresholding procedure, which adds all the points whose absolute residual is larger than an adaptively chosen threshold to the set. Hard thresholding based algorithms for robust regression have been explored in the literature (Bhatia et al., 2017, 2015), but they use thresholding with a fixed threshold or at a fixed level and are unable to guarantee a large break-down point. In fact, as we show in Proposition 4.2, such fixed hard thresholding operators cannot in general tolerate such a large fraction of corruption.\n\nIn contrast, our hard thresholding routine (detailed in Section 4.1) selects the threshold adaptively and adds randomness to escape bad fixed points. 
While randomness has proven to be useful in escaping first and second order saddle points in unconstrained optimization (Ge et al., 2015; Jin et al., 2017), to the best of our knowledge, our result is the first such result for a constrained optimization problem with randomness in the projection step.\n\nBefore we proceed, note that Algorithm 1 relies on a new set of samples for each iteration. This ensures independence of the current iterate from the samples and is done mainly for theoretical convenience. We believe this can be eliminated using more complex arguments in the analysis.\n\n### 4.1 The AdaHT Operator\n\nIn this section we describe our hard thresholding operator AdaHT. There are two key steps involved in AdaHT, which we describe below. Consider the call to AdaHT in iteration t of Algorithm 1.\n\n##### Interval Selection.\n\nIn the first step we find an interval on the positive real line which acts as a \"crude\" threshold for our hard thresholding operator. We partition the positive real line into intervals of width I_t. We then place points in these intervals based on the magnitude of their residuals. Finally, we pick the smallest j such that the j-th interval has fewer than a prescribed number of elements in it. Let j_t be the chosen interval. This interval acts as a crude threshold. All the points to the left of the interval are considered as un-corrupted points and added to the output set (lines 7-9, Algorithm 4); all the points to the right of the interval are considered as corrupted points. The goal of such interval selection is to ensure: a) all the true un-corrupted points lie to the left of the interval and are included in the output set, and b) not many points fall in the interval, so that a large fraction of the points in the output set remain independent of each other. This independence allows us to exploit sub-Gaussian concentration results rather than employing worst-case bounds, and helps achieve optimal consistent rates.\n\nThe interval length I_t is selected as:\n\n I_t = 18 √( (2σ̂² + 2β^{2(t−1)} d̂₀²) log ñ ),    (3)\n\nwhere β < 1 is a contraction factor, and σ̂ and d̂₀ are approximate upper bounds of σ and ∥Δw₀∥₂:\n\n σ ≤ σ̂ ≤ μσ  and  ∥Δw₀∥₂ ≤ d̂₀ ≤ ν∥Δw₀∥₂,    (4)\n\nfor constants μ, ν. In Section 5 we show that for an appropriate choice of I_t, all the true un-corrupted points lie to the left of the interval. In Appendix H we present techniques to estimate d̂₀ within a constant factor. Estimating the noise variance (and hence σ̂) is significantly more tricky, and it is not clear if it is even possible a priori. So, in practice one can either use prior knowledge or treat σ̂ as a hyper-parameter that is selected using cross-validation.\n\n##### Points in Selected Interval.\n\nThis step decides the inclusion of points which fall in the selected interval j_t. Let τ_t be the mid-point of this interval. For each point i in this interval we sample η_{i,t} uniformly at random from an interval around zero of constant width. If the magnitude of the point's residual is smaller than τ_t + η_{i,t} I_t, we consider it as un-corrupted and add it to the output set (see lines 10-15, Algorithm 4). As we show in the proof of Theorem 2.1, this additional randomness is critical in avoiding poor fixed points and in obtaining the desired statistical rates for the problem.\n\n### 4.2 Fixed Hard Thresholding doesn't work\n\nIn this section we show that the algorithms of Bhatia et al. (2015, 2017), which rely on fixed hard thresholding operators pruning out a fixed number of elements, need not recover the true parameter when the fraction of corruptions is large. 
We prove this for Torrent (Bhatia et al., 2015); the proof for CRR (Bhatia et al., 2017) can be similarly worked out.\n\nTorrent is based on a similar alternating minimization procedure as AdaCRR, but differs from it in the subset selection routine: instead of adaptive hard thresholding, Torrent always chooses the (1−α)n elements with smallest magnitude from the residual vector. The following proposition provides an example where Torrent fails to recover the underlying estimate.\n\n{proposition}\n\n[Lower Bound for Torrent] Let y_i = x_i w^* + b_i^*, i = 1, …, n, where x_i ∼ N(0,1) and w^* = 0. Let b_i^* = 1 for corrupted points and b_i^* = 0 otherwise. Consider the limit as n → ∞ and suppose the corruption fraction α is a sufficiently large constant. Then there exists a w which is far from w^* (i.e., bounded away from 0) such that if Torrent is initialized at w, it remains at w even after infinitely many iterations.\n\nSee Appendix A for a detailed proof of the proposition. The figure on the right shows the performance of Torrent on the 1-d regression problem described in Proposition 4.2. The x-axis denotes the initial point while the y-axis denotes the point of convergence of Torrent. Clearly TORRENT fails with several initializations despite a large number of samples.\n\n## 5 Analysis\n\nIn this section we provide an outline of the proof of our main result stated in Theorem 2.1. We prove a more general result in Theorem 5 from which Theorem 2.1 follows readily. {theorem}[AdaCRR-FC for Robust Regression] Consider the setting of Theorem 2.1, and suppose the hyperparameters of AdaCRR-FC (the contraction factor β, the parameter γ, and estimates σ̂, d̂₀ satisfying (4) with constants μ, ν) are set appropriately. Then the iterates of AdaCRR-FC (Algorithm 2) executed with the above given hyperparameters satisfy the following with high probability:\n\n ∥w_t − w^*∥_Σ ≤ β^t ∥w₀ − w^*∥_Σ + O( (μσ ñ^{1/γ} / ((1−β)(1−α))) · √( (p log ñ + log² ñ) / ñ ) )    (5)\n\nwhere β < 1, α denotes the fraction of corruptions (the break-down point), and\n\n Q₁ = {i : |b^*_{t+1}(i)| ≥ τ_{t+1} + (5/18) I_{t+1}},\n Q₂ = {i : |b^*_{t+1}(i)| < τ_{t+1} − (5/18) I_{t+1}},\n Q₃ = {i : |b^*_{t+1}(i) − τ_{t+1}| ≤ (5/18) I_{t+1}, and |y_{t+1}(i) − ⟨x_{t+1,i}, w_t⟩| ≥ τ_{t+1} + η_{i,t+1} I_{t+1}},\n Q₄ = {i : |b^*_{t+1}(i) − τ_{t+1}| ≤ (5/18) I_{t+1}, and |y_{t+1}(i) − ⟨x_{t+1,i}, w_t⟩| < τ_{t+1} + η_{i,t+1} I_{t+1}},    (6)\n\nwhere τ_{t+1} is as defined in Line 6, Algorithm 4. Note that Q₁ contains the egregious outliers and Q₂ contains all the \"true\" uncorrupted points. Our proof first shows that the output S_{t+1} of AdaHT satisfies the properties described in Section 4.1. Specifically: a) Q₁ ∩ S_{t+1} = ∅, b) Q₂ ⊆ S_{t+1}, and c) S_{t+1} = Q₂ ∪ Q₄. Next, we show that w_{t+1} − w^* can be written in terms of Q₂ and Q₄:\n\n w_{t+1} − w^* = −(X_{t+1,S_{t+1}}^T X_{t+1,S_{t+1}})^{−1} ( Σ_{i ∈ Q₂∪Q₄} (b^*_{t+1}(i) + ϵ_{t+1}(i)) x_{t+1,i} ).\n\nThe rest of the proof focuses on bounding the two terms in the RHS of the above equation. To bound the term involving ϵ_{t+1} we use the observation that the noise over Q₂ remains independent of the selection and rely on concentration properties of sub-Gaussian random variables. To bound the other term involving b^*_{t+1}, we rely on a crucial property of our algorithm, which guarantees that the surviving corruptions are bounded in magnitude, and perform a worst case analysis. See Appendix I for a detailed proof. Discussion: Theorem 5 characterizes both the computational as well as statistical guarantees of AdaCRR-FC. More specifically, consider setting γ on the order of log ñ, so that ñ^{1/γ} = O(1); then\n\n ∥w_T − w^*∥_Σ = O( μσ √( (p log² ñ + log³ ñ) / ñ ) ),\n\nfor a suitable number of iterations T. This shows that constant-factor estimates of σ and ∥Δw₀∥₂ suffice to achieve information theoretically optimal rates, up to logarithmic factors, even with a constant fraction of corruptions. In fact, AdaCRR-FC can tolerate a 1 − o(1) fraction of corruptions by a suitable setting of the hyperparameters, although with a slightly worse parameter estimation error.\n\n## 6 Consistent Robust Sparse Regression\n\nIn this section, we extend our algorithm to the problem of sparse regression with oblivious response variable corruptions. In this setting the dimension p of the data is allowed to exceed the sample size n. When p > n, the linear regression model is unidentifiable. 
Consequently, to make the model identifiable, certain structural assumptions need to be imposed on the parameter vector w^*. Following Wainwright (2009), this work assumes that w^* is k-sparse, i.e., it has at most k non-zero entries. Our objective now is to recover a sparse w with small generalization error. In this setting, we modify the update step of w_t in Algorithm 1 as follows:\n\n w_t ← argmin_{w : ∥w∥₀ ≤ k} ∥y_{t,S_t} − X_{t,S_t} w∥₂²,    (7)\n\nand start the algorithm at w₀ = 0. We refer to this modified algorithm as AdaCRR-HD. A huge number of techniques have been proposed to solve the above optimization problem efficiently. Under certain properties of the design matrix (the Restricted Eigenvalue property), these techniques estimate w^* up to statistical precision using a number of samples that is only logarithmic in p. In this work we use the Iterative Hard Thresholding (IHT) technique of Jain et al. (2014) to solve the above problem. More details about the IHT algorithm can be found in Appendix C.\n\n{theorem}\n\n[AdaCRR-HD for Sparse Robust Regression] Consider the setting of Theorem 2.1, and in addition assume w^* is k-sparse. Use IHT (Algorithm 6) to solve (7) in each iteration of AdaCRR-HD with an appropriate sparsity parameter. Then, with an appropriate setting of the remaining hyperparameters,\n\n ∥w_T − w^*∥_Σ = O( (μσ / (1 − α − 2c(log log ñ)^{−1})) · √( k log ñ log² p / ñ ) ),\n\nfor appropriate constants. We would like to highlight the nearly linear sample complexity in k for well-conditioned covariates. Furthermore, the total time complexity of the algorithm is still nearly linear in n and p. Finally, to the best of our knowledge, this is the first result for the sparse regression setting with oblivious response corruptions and a break-down point approaching 1.\n\n## 7 Regression with Heavy-tailed Noise\n\nIn this section we consider the problem of linear regression with heavy-tailed noise. We consider the heavy-tailed model from Section 2, where we observe n i.i.d. samples from the linear model y_i = x_i^T w^* + ϵ_i, where ϵ_i is sampled from a heavy-tailed distribution. We now show that our estimator from Section 4 can be adapted to this setting to estimate w^* with sub-Gaussian style error rates, even when the noise lacks a first moment.\n\nIn this setting, although there is no adversary corrupting the data, we consider any point with noise greater than a threshold as a \"corrupted\" point, and try not to use these points to estimate w^*. That is, we decompose the noise as ϵ_i = ϵ̄_i + b_i^*, where b_i^* is the part of ϵ_i beyond the threshold. Note that this introduces dependence between b^* and ϵ̄, but as we show later in Appendix D, our proof still goes through with minor modifications, and in fact provides similar rates as the case where the noise is sampled from a Gaussian distribution. Below, we provide a more general result than Theorem 1, from which Theorem 1 follows by an appropriate choice of the threshold. Define α_ρ, the tail probability of the noise at threshold ρ, as α_ρ = P(|ϵ| > ρ). {theorem}[AdaCRR-FC for Heavy-tailed Noise] Consider the setting of Theorem 1. Let ρ be any threshold with tail probability α_ρ. Then, with an appropriate setting of the hyperparameters,\n\n ∥w_T − w^*∥_Σ = O( (ρ / (1 − α_ρ − 2c(log log ñ)^{−1})) · √( (p log ñ + log² ñ) / ñ ) ).\n\nNote that if the distribution of the noise is independent of n, we should always be able to find constants ρ and α_ρ < 1 to obtain nearly optimal rates. We instantiate this claim for the popular Cauchy noise, for which the existing results do not even apply due to unbounded variance. {corollary}[Cauchy noise] Consider the same setting as in Theorem 1. Suppose the noise follows a Cauchy distribution with location parameter x₀ and scale parameter σ. Then the T-th iterate of AdaCRR-FC, for a suitable T, satisfies the following with high probability
:\n\n ∥w_T − w^*∥_Σ = O(σ) · √( (p log ñ + log² ñ) / ñ ).\n\nWe would like to note that despite the sub-Gaussian style rates for Cauchy noise, the sample and time complexity of the algorithm is still nearly optimal.\nMean estimation: Although our result holds for regression, we can extend it to solve the mean estimation problem as well. That is, suppose y_i = μ + ϵ_i, where μ is the mean of a distribution and ϵ_i is a zero mean random variable which follows a heavy-tailed distribution. Then by using a simple symmetrization reduction, we can show that we can compute an estimate of μ with a sub-Gaussian style error rate.\n\nThis result seems to be counter-intuitive, as Devroye et al. (2016) derive lower bounds for heavy tailed mean estimation and show that over the set of all moment bounded distributions, no estimator can achieve faster rates, while we can obtain such rates. However, we additionally require the noise distribution to be symmetric, while the lower bound construction uses an asymmetric noise distribution. We further discuss this problem in Appendix G. Similarly, our result avoids the regression lower-bound of Sun et al. (2018), as we do not estimate the bias term in our regression model.\n\n## 8 Conclusion\n\nIn this paper, we studied the problem of response robust regression with an oblivious adversary. For this problem, we presented a simple outlier removal based algorithm that uses a novel randomized and adaptive thresholding procedure. We proved that our algorithm provides a consistent estimator with a break-down point (fraction of corruptions) approaching 1, while still ensuring a nearly linear-time computational complexity. Empirical results on synthetic data agree with our theory and show the computational advantage of our algorithm over Huber-loss based algorithms (Tsakonas et al., 2014) as well as a better break-down point than fixed thresholding techniques (Bhatia et al., 2015, 2017). We also provided an extension of our approach to the high-dimensional setting. Finally, our technique extends to the problem of linear regression with heavy-tailed noise, where we provide nearly optimal rates for a general class of noise distributions that need not have a well-defined first moment.\n\nThe finite sample break-down point of our method is 1 − o(1), which is still sub-optimal compared to the information theoretic limit. Obtaining efficient estimators for the nearly optimal break-down point is an interesting open question. Furthermore, our algorithm requires an approximate estimate of the noise variance, which can sometimes be difficult to select in practice. A completely parameter-free algorithm for robust regression (similar to OLS) is an interesting research direction that should have significant impact in practice as well.\n\n## Appendix A Proof of Proposition 4.2\n\nLet (x_i, y_i), i = 1, …, n be the points we observe, out of which at most αn points are corrupted. Note that the true linear model is 1-dimensional with w^* = 0, so that y_i = b_i^*. Based on this model, we have:\n\n y = b^*, where b^*(i) = 1 if i is corrupted, and 0 otherwise.\n\nLet's suppose we start the TORRENT algorithm at w. Given w, TORRENT computes its estimate of the un-corrupted points as:\n\n S = HT_{(1−α)n}(y − Xw) = HT_{(1−α)n}(b^* − Xw),    (8)\n\nwhere HT_{(1−α)n}(v) returns the (1−α)n points in v with smallest magnitude. Given S, TORRENT updates its estimate of the parameter vector by ordinary least squares on the points in S.\n\nNote that if this update maps w back to itself, then TORRENT will be stuck at w and will not make any progress. 
We now show that for large n there in fact exists such a fixed point w ≠ 0.\n\nLet τ_w be the threshold used in the hard thresholding operator to compute S in Equation (8); that is, τ_w is such that the magnitude of the residuals of all the points in S is less than τ_w and the magnitude of the residuals of all the points outside S is greater than τ_w. Note that a (1−α) fraction of points have residuals less than τ_w. Since we are working in the n → ∞ setting, this implies\n\n P_{x∼N(0,1), b^*}( |b^* − xw| < τ_w ) = 1 − α.\n\nRewriting the LHS of the above expression by conditioning on whether a point is corrupted, and combining the resulting expressions, we get\n\n (1−α)(Φ(τ_w/w) − Φ(−τ_w/w)) + α(Φ((1+τ_w)/w) − Φ((1−τ_w)/w)) = 1 − α.    (9)\n\nFor TORRENT to be stuck at w, we require that the least squares update on S returns w itself. As points are corrupted uniformly at random with probability α, this fixed-point condition reduces to:\n\n w = α E[x | |1−xw| < τ_w] / ( (1−α) E[x² | |xw| < τ_w] + α E[x² | |1−xw| < τ_w] ).    (10)\n\nThis shows that TORRENT will be stuck at w iff there exists a τ_w such that Equations (9), (10) hold. The two are essentially a system of equations in (w, τ_w), and it is easy to verify the feasibility of this system for various values of α, with approximate feasible points obtained numerically.\n\n## Appendix B Proof of Theorem 5\n\nBefore we present the proof of the Theorem, we introduce some notation and present useful intermediate results which we require in our proof. The proofs of all the Lemmas in this section can be found in Appendix I.\n\n##### Notation\n\nRecall that (X_t, y_t) are the new points obtained in iteration t of Algorithm 1. Let b_t^* be the corruption vector added to these points and ϵ_t be the noise vector. Let X̃_t be obtained from X_t by applying the whitening transformation:\n\n X̃_t := X_t Σ^{−1/2},  Δw_t := Σ^{1/2}(w^* − w_t).\n\nLet S_t^* be the set of un-corrupted points in (X_t, y_t). Let S_t be the output of AdaHT in the t-th iteration of AdaCRR-FC and j_t be the interval chosen. For any subset S, let X_{t,S} be the matrix with the corresponding rows of X_t. Finally, let ñ denote the number of samples used in each iteration.\n\n### b.1 Intermediate Results\n\n{lemma}\n\nThe input to AdaHT can be written in terms of Δw_{t−1} as\n\n r_t = b_t^* + X̃_t Δw_{t−1} + ϵ_t,\n\nwhere b_t^* is the corruption vector of the points (X_t, y_t). The following Lemma obtains a bound on j_t, the interval number chosen by Algorithm 4. {lemma}[Interval Number] Let j_t be the interval chosen by AdaHT in the t-th iteration of AdaCRR-FC. Then j_t can be bounded with high probability, as shown in Appendix I. The following Lemma presents a condition on the interval length which ensures that all the uncorrupted points fall to the left of the interval. {lemma}[Interval Length] Consider the t-th iteration of AdaCRR-FC. Suppose AdaHT is run with an interval length I_t that is sufficiently large relative to σ and ∥Δw_{t−1}∥₂, as ensured by the choice (3). Define sets Q₁, Q₂, which are subsets of the points (X_t, y_t), as follows:\n\n Q₁ = {i : |b_t^*(i)| > (j_t − 2/9) I_t}  and  Q₂ = {i : |b_t^*(i)| < (j_t − 7/9) I_t}.\n\nThen the following statements hold with high probability:\n\n Q₁ ∩ S_t = {},  S_t^* ⊆ Q₂ ⊆ S_t.\n\nMoreover, all the points in S_t \\ Q₂ fall in the j_t-th interval.\n\n### b.2 Main Argument\n\nWe first prove the following Lemma, which obtains a bound on the progress made by AdaCRR-FC in each iteration, assuming the conclusions of the preceding lemmas hold. In Section B.2.2 we use this Lemma to prove Theorem 5. {lemma} Consider the setting of Theorem 5. Then, with high probability:\n\n ∥Δw_t∥₂ = O( γ / ((1−α) log ñ) ) ∥Δw_{t−1}∥₂ + O( (ñ^{1/γ}/(1−α)) · √( (p + log ñ) / ñ ) ) I_t + O( (σ/(1−α)) · √( α p log ñ / ñ ) ),\n\nwhere" ]
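To make the structure of AdaCRR-FC easier to see, here is a compact Python sketch of the loop analyzed above: a fully-corrective least squares step on the currently selected points, followed by an adaptive hard threshold on the residuals that is randomized near its boundary. The threshold schedule mimics Eq. (3); the jitter width, the reuse of a single sample set (the paper's analysis draws fresh samples each iteration), and the fixed iteration count are simplifying assumptions for illustration, not the authors' exact algorithm.\n\nimport numpy as np\n\ndef adacrr_fc(X, y, sigma_hat, d0_hat, n_iter=25, beta=0.5, seed=0):\n    # Sketch of AdaCRR-FC: alternate OLS on estimated inliers with an\n    # adaptive, randomized hard-thresholding step on the residuals.\n    rng = np.random.default_rng(seed)\n    n, p = X.shape\n    w = np.zeros(p)\n    for t in range(1, n_iter + 1):\n        r = np.abs(y - X @ w)\n        # interval width I_t shrinks as the iterate improves, cf. Eq. (3)\n        I_t = 18 * np.sqrt((2 * sigma_hat**2 + 2 * beta**(2 * (t - 1)) * d0_hat**2) * np.log(n))\n        eta = rng.uniform(-0.5, 0.5, size=n)  # per-point randomness near the boundary, cf. eta_{i,t}\n        keep = r < I_t * (1 + 0.1 * eta)      # randomized adaptive threshold (illustrative constants)\n        w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)\n    return w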
[ null, "https://media.arxiv-vanity.com/render-output/4391279/x1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8639864,"math_prob":0.95704335,"size":45343,"snap":"2021-21-2021-25","text_gpt3_token_len":10455,"char_repetition_ratio":0.13846798,"word_repetition_ratio":0.046174143,"special_character_ratio":0.21875483,"punctuation_ratio":0.14350158,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99411595,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-06T19:31:16Z\",\"WARC-Record-ID\":\"<urn:uuid:a4e69a3b-cb0f-494a-b807-d35cd66fe330>\",\"Content-Length\":\"1049652\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2288e434-93ae-4507-b79d-38ce5911f8a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:5222417e-a0e0-470e-8f46-bb5d27502c55>\",\"WARC-IP-Address\":\"172.67.158.169\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/1903.08192/\",\"WARC-Payload-Digest\":\"sha1:7PBQTRXX7XUQ6I7VDLOWJ7YKAS4PJEJZ\",\"WARC-Block-Digest\":\"sha1:FAZNEWKUWE44FU7PD2DQBCTUEILIKDIK\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988759.29_warc_CC-MAIN-20210506175146-20210506205146-00377.warc.gz\"}"}
https://metanumbers.com/51535
[ "## 51535\n\n51,535 (fifty-one thousand five hundred thirty-five) is an odd five-digits composite number following 51534 and preceding 51536. In scientific notation, it is written as 5.1535 × 104. The sum of its digits is 19. It has a total of 3 prime factors and 8 positive divisors. There are 37,440 positive integers (up to 51535) that are relatively prime to 51535.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 5\n• Sum of Digits 19\n• Digital Root 1\n\n## Name\n\nShort name 51 thousand 535 fifty-one thousand five hundred thirty-five\n\n## Notation\n\nScientific notation 5.1535 × 104 51.535 × 103\n\n## Prime Factorization of 51535\n\nPrime Factorization 5 × 11 × 937\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 3 Total number of distinct prime factors Ω(n) 3 Total number of prime factors rad(n) 51535 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) -1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 51,535 is 5 × 11 × 937. Since it has a total of 3 prime factors, 51,535 is a composite number.\n\n## Divisors of 51535\n\n1, 5, 11, 55, 937, 4685, 10307, 51535\n\n8 divisors\n\n Even divisors 0 8 4 4\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 8 Total number of the positive divisors of n σ(n) 67536 Sum of all the positive divisors of n s(n) 16001 Sum of the proper positive divisors of n A(n) 8442 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 227.013 Returns the nth root of the product of n divisors H(n) 6.1046 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 51,535 can be divided by 8 positive divisors (out of which 0 are even, and 8 are odd). The sum of these divisors (counting 51,535) is 67,536, the average is 8,442.\n\n## Other Arithmetic Functions (n = 51535)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 37440 Total number of positive integers not greater than n that are coprime to n λ(n) 4680 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 5271 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 37,440 positive integers (less than 51,535) that are coprime with 51,535. 
And there are approximately 5,271 prime numbers less than or equal to 51,535.\n\n## Divisibility of 51535\n\n m 2 3 4 5 6 7 8 9\n n mod m 1 1 3 0 1 1 7 1\n\nThe number 51,535 is divisible by 5.\n\n## Classification of 51535\n\n• Arithmetic\n• Deficient\n\n• Polite\n\n• Square Free\n\n### Other numbers\n\n• Lucas-Carmichael\n• Sphenic\n\n## Base conversion (51535)\n\nBase System Value\n2 Binary 1100100101001111\n3 Ternary 2121200201\n4 Quaternary 30211033\n5 Quinary 3122120\n6 Senary 1034331\n8 Octal 144517\n10 Decimal 51535\n12 Duodecimal 259a7\n20 Vigesimal 68gf\n36 Base36 13rj\n\n## Basic calculations (n = 51535)\n\n### Multiplication\n\nn×i\n n×2 103070\n n×3 154605\n n×4 206140\n n×5 257675\n\n### Division\n\nn⁄i\n n⁄2 25767.5\n n⁄3 17178.3\n n⁄4 12883.8\n n⁄5 10307\n\n### Exponentiation\n\nn^i\n n^2 2655856225\n n^3 136869550555375\n n^4 7053572287871250625\n n^5 363505847855444900959375\n\n### Nth Root\n\ni√n\n 2√n 227.013\n 3√n 37.2135\n 4√n 15.067\n 5√n 8.75831\n\n## 51535 as geometric shapes\n\n### Circle\n\n Diameter 103070 Circumference 323804 Area 8.34362e+09\n\n### Sphere\n\n Volume 5.73318e+14 Surface area 3.33745e+10 Circumference 323804\n\n### Square\n\nLength = n\n Perimeter 206140 Area 2.65586e+09 Diagonal 72881.5\n\n### Cube\n\nLength = n\n Surface area 1.59351e+10 Volume 1.3687e+14 Space diagonal 89261.2\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 154605 Area 1.15002e+09 Height 44630.6\n\n### Triangular Pyramid\n\nLength = n\n Surface area 4.60008e+09 Volume 1.61302e+13 Height 42078.2" ]
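The divisor and totient figures above follow mechanically from the factorization 5 × 11 × 937; a small Python check using sympy (an implementation choice, not something the page itself uses) reproduces them:\n\nfrom sympy import divisors, totient, reduced_totient, factorint\n\nn = 51535\nd = divisors(n)                        # [1, 5, 11, 55, 937, 4685, 10307, 51535]\nprint(len(d), sum(d), sum(d) - n)      # tau = 8, sigma = 67536, aliquot sum s = 16001\nprint(totient(n), reduced_totient(n))  # phi = 37440, Carmichael lambda = 4680\nprint(factorint(n))                    # {5: 1, 11: 1, 937: 1}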
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6178713,"math_prob":0.9932152,"size":4544,"snap":"2021-21-2021-25","text_gpt3_token_len":1606,"char_repetition_ratio":0.11872247,"word_repetition_ratio":0.028106509,"special_character_ratio":0.45268485,"punctuation_ratio":0.0749354,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99852747,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-15T23:51:45Z\",\"WARC-Record-ID\":\"<urn:uuid:5dcbbee8-f3fa-49b7-b6c9-9595c92ed2e5>\",\"Content-Length\":\"48330\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c0f7b517-1f88-4290-9010-e9d005af0585>\",\"WARC-Concurrent-To\":\"<urn:uuid:449d2100-d7f7-41a9-9022-7854b2d1dced>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/51535\",\"WARC-Payload-Digest\":\"sha1:V3MC4HNHCFYDGN3QBA4ES7AN5HRXP5EW\",\"WARC-Block-Digest\":\"sha1:JGFH4QEV67XGHPV4B62JN7U7DGSE743V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991488.53_warc_CC-MAIN-20210515223209-20210516013209-00596.warc.gz\"}"}
https://www.numbers.education/9109.html
[ "Is 9109 a prime number? What are the divisors of 9109?\n\n## Is 9109 a prime number?\n\nYes, 9109 is a prime number.\n\nIndeed, the definition of a prime numbers is to have only two distinct positive divisors, 1 and itself. A number is a divisor of another number when the remainder of Euclid’s division of the second one by the first one is zero. Concerning the number 9109, the only two divisors are 1 and 9109. Therefore 9109 is a prime number.\n\nAs a consequence, 9109 is only a multiple of 1 and 9109.\n\nSince 9109 is a prime number, 9109 is also a deficient number, that is to say 9109 is a natural integer that is strictly larger than the sum of its proper divisors, i.e., the divisors of 9109 without 9109 itself (that is 1, by definition!).\n\n## Parity of 9109\n\n9109 is an odd number, because it is not evenly divisible by 2.\n\n## Is 9109 a perfect square number?\n\nA number is a perfect square (or a square number) if its square root is an integer; that is to say, it is the product of an integer with itself. Here, the square root of 9109 is about 95.441.\n\nThus, the square root of 9109 is not an integer, and therefore 9109 is not a square number.\n\nAnyway, 9109 is a prime number, and a prime number cannot be a perfect square.\n\n## What is the square number of 9109?\n\nThe square of a number (here 9109) is the result of the product of this number (9109) by itself (i.e., 9109 × 9109); the square of 9109 is sometimes called \"raising 9109 to the power 2\", or \"9109 squared\".\n\nThe square of 9109 is 82 973 881 because 9109 × 9109 = 91092 = 82 973 881.\n\nAs a consequence, 9109 is the square root of 82 973 881.\n\n## Number of digits of 9109\n\n9109 is a number with 4 digits.\n\n## What are the multiples of 9109?\n\nThe multiples of 9109 are all integers evenly divisible by 9109, that is all numbers such that the remainder of the division by 9109 is zero. There are infinitely many multiples of 9109. The smallest multiples of 9109 are:\n\n• 0: indeed, 0 is divisible by any natural number, and it is thus a multiple of 9109 too, since 0 × 9109 = 0\n• 9109: indeed, 9109 is a multiple of itself, since 9109 is evenly divisible by 9109 (we have 9109 / 9109 = 1, so the remainder of this division is indeed zero)\n• 18 218: indeed, 18 218 = 9109 × 2\n• 27 327: indeed, 27 327 = 9109 × 3\n• 36 436: indeed, 36 436 = 9109 × 4\n• 45 545: indeed, 45 545 = 9109 × 5\n• etc.\n\n## Nearest numbers from 9109\n\nFind out whether some integer is a prime number" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9177199,"math_prob":0.99703854,"size":2259,"snap":"2019-35-2019-39","text_gpt3_token_len":685,"char_repetition_ratio":0.21241686,"word_repetition_ratio":0.022222223,"special_character_ratio":0.37892872,"punctuation_ratio":0.13690476,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994609,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-17T08:16:34Z\",\"WARC-Record-ID\":\"<urn:uuid:a302143a-a984-4e62-bc1d-27dec2ff06f4>\",\"Content-Length\":\"12223\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b886d6b-7d5a-42d2-b0be-427b0ddcf561>\",\"WARC-Concurrent-To\":\"<urn:uuid:91bb1107-0ff2-48a6-9c71-31b7d9fe17d4>\",\"WARC-IP-Address\":\"213.186.33.19\",\"WARC-Target-URI\":\"https://www.numbers.education/9109.html\",\"WARC-Payload-Digest\":\"sha1:VUM7EOU22AK7VGFUWY2QZUNWJNBJ6WEF\",\"WARC-Block-Digest\":\"sha1:HEE6PHRFJCZH6KZAE3R3JNI5KO5DT46S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573065.17_warc_CC-MAIN-20190917081137-20190917103137-00230.warc.gz\"}"}
https://open.metu.edu.tr/handle/11511/48586
[ "# A new integrable generalization of the Korteweg-de Vries equation\n\n2008-07-01\nKarasu-Kalkanli, Ayse\nKarasu, Atalay\nSakovich, Anton\nSakovich, Sergei\nTURHAN, REFİK\nA new integrable sixth-order nonlinear wave equation is discovered by means of the Painleve analysis, which is equivalent to the Korteweg-de Vries equation with a source. A Lax representation and an auto-Backlund transformation are found for the new equation, and its traveling wave solutions and generalized symmetries are studied. (C) 2008 American Institute of Physics.\nJOURNAL OF MATHEMATICAL PHYSICS\n\n# Suggestions\n\n String-Theory Realization of Modular Forms for Elliptic Curves with Complex Multiplication Kondo, Satoshi; Watari, Taizan (Springer Science and Business Media LLC, 2019-04-01) It is known that the L-function of an elliptic curve defined over Q is given by the Mellin transform of a modular form of weight 2. Does that modular form have anything to do with string theory? In this article, we address a question along this line for elliptic curves that have complex multiplication defined over number fields. So long as we use diagonal rational N=(2,2) superconformal field theories for the string-theory realizations of the elliptic curves, the weight-2 modular form turns out to be the Bo...\n Symmetry reductions of a Hamilton-Jacobi-Bellman equation arising in financial mathematics Naicker, V; Andriopoulos, K; Leach, PGL (Informa UK Limited, 2005-05-01) We determine the solutions of a nonlinear Hamilton-Jacobi-Bellman equation which arises in the modelling of mean-variance hedging subject to a terminal condition. Firstly we establish those forms of the equation which admit the maximal number of Lie point symmetries and then examine each in turn. We show that the Lie method is only suitable for an equation of maximal symmetry. We indicate the applicability of the method to cases in which the parametric function depends also upon the time.\n The Lie algebra sl(2,R) and so-called Kepler-Ermakov systems Leach, PGL; Karasu, Emine Ayşe (Informa UK Limited, 2004-05-01) A recent paper by Karasu (Kalkanli) and Yildirim (Journal of Nonlinear Mathematical Physics 9 (2002) 475-482) presented a study of the Kepler-Ermakov system in the context of determining the form of an arbitrary function in the system which was compatible with the presence of the sl(2, R) algebra characteristic of Ermakov systems and the existence of a Lagrangian for a subset of the systems. We supplement that analysis by correcting some results.\n A Monte Carlo procedure for the determination of the relaxation time constant of spin systems Kokten, H; Yalabik, M C (IOP Publishing, 1990-10-21) A new Monte Carlo method for the determination of relaxation time constants of classical spin systems is presented. The method is applied to a dynamical finite-size scaling calculatio\n An algebraic method for the analytical solutions of the Klein-Gordon equation for any angular momentum for some diatomic potentials Akçay, Hüseyin; Sever, Ramazan (IOP Publishing, 2014-01-01) Analytical solutions of the Klein-Gordon equation are obtained by reducing the radial part of the wave equation to a standard form of a second-order differential equation. Differential equations of this standard form are solvable in terms of hypergeometric functions and we give an algebraic formulation for the bound state wave functions and for the energy eigenvalues. 
This formulation is applied for the solutions of the Klein-Gordon equation with some diatomic potentials.\nCitation Formats\nA. Karasu-Kalkanli, A. Karasu, A. Sakovich, S. Sakovich, and R. TURHAN, “A new integrable generalization of the Korteweg-de Vries equation,” JOURNAL OF MATHEMATICAL PHYSICS, pp. 0–0, 2008, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/48586.", null, "" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACH5BAEAAAAALAAAAAABAAEAAAICRAEAOw==", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81952477,"math_prob":0.9546441,"size":5053,"snap":"2022-40-2023-06","text_gpt3_token_len":1242,"char_repetition_ratio":0.11467617,"word_repetition_ratio":0.026573427,"special_character_ratio":0.20205818,"punctuation_ratio":0.10844749,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9866249,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T08:39:13Z\",\"WARC-Record-ID\":\"<urn:uuid:53f8b0f1-349f-4867-ac6b-851a474ff67b>\",\"Content-Length\":\"54054\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6217065-3d0e-402e-adb9-82c43fb09b1e>\",\"WARC-Concurrent-To\":\"<urn:uuid:6a8b1480-5e75-48f7-8865-d380eb60e650>\",\"WARC-IP-Address\":\"144.122.144.37\",\"WARC-Target-URI\":\"https://open.metu.edu.tr/handle/11511/48586\",\"WARC-Payload-Digest\":\"sha1:ISPUDTATSXBWGUFL7HBSZWHSB5OIGIPA\",\"WARC-Block-Digest\":\"sha1:B45I32PHJK2KHWTO4GKKENKUD5WOI3GQ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764494974.98_warc_CC-MAIN-20230127065356-20230127095356-00598.warc.gz\"}"}
https://bz.vita-aidelos.com/170-geomagic-square.html
[ "# Geomagic square", null, "The image shows an incomplete geomagic square. In a similar way to the magic squares in which when adding all the numbers of a row, column or diagonal we always get the same result, in this geomagic square if we put all the pieces of a row, column or diagonal together we always get a figure of it size and shape\n\nDiscover the figure that should appear on the sidelines in each case.\n\nExtracted from the page www.geomagicsquares.com where you will find many geomagic squares.\n\n#### Solution\n\nThe following image shows the solution:" ]
[ null, "https://vita-aidelos.com/img/cuadrado-geom-gico.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86977744,"math_prob":0.98634183,"size":522,"snap":"2021-31-2021-39","text_gpt3_token_len":110,"char_repetition_ratio":0.14285715,"word_repetition_ratio":0.045454547,"special_character_ratio":0.19157088,"punctuation_ratio":0.08737864,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9629723,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T14:01:58Z\",\"WARC-Record-ID\":\"<urn:uuid:c51b8d7c-a502-4b7b-941e-93b355e810dc>\",\"Content-Length\":\"56043\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:81b3d826-f532-483f-822e-48ea4f69fb60>\",\"WARC-Concurrent-To\":\"<urn:uuid:ad50aae0-1478-4d69-a90c-24ce405ee9b9>\",\"WARC-IP-Address\":\"172.67.162.217\",\"WARC-Target-URI\":\"https://bz.vita-aidelos.com/170-geomagic-square.html\",\"WARC-Payload-Digest\":\"sha1:VB2YJNVSDKFNJ7Y5U7DBHZGMAU5AWKA4\",\"WARC-Block-Digest\":\"sha1:ZNCVR4KMTUULDTRPH6KNEKW5WR2TFGG5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153729.44_warc_CC-MAIN-20210728123318-20210728153318-00522.warc.gz\"}"}
https://stacks.math.columbia.edu/tag/0AZR
[ "## 43.14 Intersection multiplicities using Tor formula\n\nA basic fact we will use frequently is that given sheaves of modules $\\mathcal{F}$, $\\mathcal{G}$ on a ringed space $(X, \\mathcal{O}_ X)$ and a point $x \\in X$ we have\n\n$\\text{Tor}_ p^{\\mathcal{O}_ X}(\\mathcal{F}, \\mathcal{G})_ x = \\text{Tor}_ p^{\\mathcal{O}_{X, x}}(\\mathcal{F}_ x, \\mathcal{G}_ x)$\n\nas $\\mathcal{O}_{X, x}$-modules. This can be seen in several ways from our construction of derived tensor products in Cohomology, Section 20.26, for example it follows from Cohomology, Lemma 20.26.4. Moreover, if $X$ is a scheme and $\\mathcal{F}$ and $\\mathcal{G}$ are quasi-coherent, then the modules $\\text{Tor}_ p^{\\mathcal{O}_ X}(\\mathcal{F}, \\mathcal{G})$ are quasi-coherent too, see Derived Categories of Schemes, Lemma 36.3.9. More important for our purposes is the following result.\n\nLemma 43.14.1. Let $X$ be a locally Noetherian scheme.\n\n1. If $\\mathcal{F}$ and $\\mathcal{G}$ are coherent $\\mathcal{O}_ X$-modules, then $\\text{Tor}_ p^{\\mathcal{O}_ X}(\\mathcal{F}, \\mathcal{G})$ is too.\n\n2. If $L$ and $K$ are in $D^-_{\\textit{Coh}}(\\mathcal{O}_ X)$, then so is $L \\otimes _{\\mathcal{O}_ X}^\\mathbf {L} K$.\n\nProof. Let us explain how to prove (1) in a more elementary way and part (2) using previously developed general theory.\n\nProof of (1). Since formation of $\\text{Tor}$ commutes with localization we may assume $X$ is affine. Hence $X = \\mathop{\\mathrm{Spec}}(A)$ for some Noetherian ring $A$ and $\\mathcal{F}$, $\\mathcal{G}$ correspond to finite $A$-modules $M$ and $N$ (Cohomology of Schemes, Lemma 30.9.1). By Derived Categories of Schemes, Lemma 36.3.9 we may compute the $\\text{Tor}$'s by first computing the $\\text{Tor}$'s of $M$ and $N$ over $A$, and then taking the associated $\\mathcal{O}_ X$-module. Since the modules $\\text{Tor}_ p^ A(M, N)$ are finite by Algebra, Lemma 10.75.7 we conclude.\n\nBy Derived Categories of Schemes, Lemma 36.10.3 the assumption is equivalent to asking $L$ and $K$ to be (locally) pseudo-coherent. Then $L \\otimes _{\\mathcal{O}_ X}^\\mathbf {L} K$ is pseudo-coherent by Cohomology, Lemma 20.44.5. $\\square$\n\nLemma 43.14.2. Let $X$ be a nonsingular variety. Let $\\mathcal{F}$, $\\mathcal{G}$ be coherent $\\mathcal{O}_ X$-modules. The $\\mathcal{O}_ X$-module $\\text{Tor}_ p^{\\mathcal{O}_ X}(\\mathcal{F}, \\mathcal{G})$ is coherent, has stalk at $x$ equal to $\\text{Tor}_ p^{\\mathcal{O}_{X, x}}(\\mathcal{F}_ x, \\mathcal{G}_ x)$, is supported on $\\text{Supp}(\\mathcal{F}) \\cap \\text{Supp}(\\mathcal{G})$, and is nonzero only for $p \\in \\{ 0, \\ldots , \\dim (X)\\}$.\n\nProof. The result on stalks was discussed above and it implies the support condition. The $\\text{Tor}$'s are coherent by Lemma 43.14.1. The vanishing of negative $\\text{Tor}$'s is immediate from the construction. The vanishing of $\\text{Tor}_ p$ for $p > \\dim (X)$ can be seen as follows: the local rings $\\mathcal{O}_{X, x}$ are regular (as $X$ is nonsingular) of dimension $\\leq \\dim (X)$ (Algebra, Lemma 10.116.1), hence $\\mathcal{O}_{X, x}$ has finite global dimension $\\leq \\dim (X)$ (Algebra, Lemma 10.110.8) which implies that $\\text{Tor}$-groups of modules vanish beyond the dimension (More on Algebra, Lemma 15.66.19). $\\square$\n\nLet $X$ be a nonsingular variety and $W, V \\subset X$ be closed subvarieties with $\\dim (W) = s$ and $\\dim (V) = r$. Assume $V$ and $W$ intersect properly. 
In this case Lemma 43.13.4 tells us all irreducible components of $V \\cap W$ have dimension equal to $r + s - \\dim (X)$. The sheaves $\\text{Tor}_ j^{\\mathcal{O}_ X}(\\mathcal{O}_ W, \\mathcal{O}_ V)$ are coherent, supported on $V \\cap W$, and zero if $j < 0$ or $j > \\dim (X)$ (Lemma 43.14.2). We define the intersection product as\n\n$W \\cdot V = \\sum \\nolimits _ i (-1)^ i [\\text{Tor}_ i^{\\mathcal{O}_ X}(\\mathcal{O}_ W, \\mathcal{O}_ V)]_{r + s - \\dim (X)}.$\n\nWe stress that this makes sense only because of our assumption that $V$ and $W$ intersect properly. This fact will necessitate a moving lemma in order to define the intersection product in general.\n\nWith this notation, the cycle $V \\cdot W$ is a formal linear combination $\\sum e_ Z Z$ of the irreducible components $Z$ of the intersection $V \\cap W$. The integers $e_ Z$ are called the intersection multiplicities\n\n$e_ Z = e(X, V \\cdot W, Z) = \\sum \\nolimits _ i (-1)^ i \\text{length}_{\\mathcal{O}_{X, Z}} \\text{Tor}_ i^{\\mathcal{O}_{X, Z}}(\\mathcal{O}_{W, Z}, \\mathcal{O}_{V, Z})$\n\nwhere $\\mathcal{O}_{X, Z}$, resp. $\\mathcal{O}_{W, Z}$, resp. $\\mathcal{O}_{V, Z}$ denotes the local ring of $X$, resp. $W$, resp. $V$ at the generic point of $Z$. These alternating sums of lengths of $\\text{Tor}$'s satisfy many good properties, as we will see later on.\n\nIn the case of transversal intersections, the intersection number is $1$.\n\nLemma 43.14.3. Let $X$ be a nonsingular variety. Let $V, W \\subset X$ be closed subvarieties which intersect properly. Let $Z$ be an irreducible component of $V \\cap W$ and assume that the multiplicity (in the sense of Section 43.4) of $Z$ in the closed subscheme $V \\cap W$ is $1$. Then $e(X, V \\cdot W, Z) = 1$ and $V$ and $W$ are smooth in a general point of $Z$.\n\nProof. Let $(A, \\mathfrak m, \\kappa ) = (\\mathcal{O}_{X, \\xi }, \\mathfrak m_\\xi , \\kappa (\\xi ))$ where $\\xi \\in Z$ is the generic point. Then $\\dim (A) = \\dim (X) - \\dim (Z)$, see Varieties, Lemma 33.20.3. Let $I, J \\subset A$ cut out the trace of $V$ and $W$ in $\\mathop{\\mathrm{Spec}}(A)$. Set $\\overline{I} = I + \\mathfrak m^2/\\mathfrak m^2$. Then $\\dim _\\kappa \\overline{I} \\leq \\dim (X) - \\dim (V)$ with equality if and only if $A/I$ is regular (this follows from the lemma cited above and the definition of regular rings, see Algebra, Definition 10.60.10 and the discussion preceding it). Similarly for $\\overline{J}$. If the multiplicity is $1$, then $\\text{length}_ A(A/I + J) = 1$, hence $I + J = \\mathfrak m$, hence $\\overline{I} + \\overline{J} = \\mathfrak m/\\mathfrak m^2$. Then we get equality everywhere (because the intersection is proper). Hence we find $f_1, \\ldots , f_ a \\in I$ and $g_1, \\ldots g_ b \\in J$ such that $\\overline{f}_1, \\ldots , \\overline{g}_ b$ is a basis for $\\mathfrak m/\\mathfrak m^2$. Then $f_1, \\ldots , g_ b$ is a regular system of parameters and a regular sequence (Algebra, Lemma 10.106.3). The same lemma shows $A/(f_1, \\ldots , f_ a)$ is a regular local ring of dimension $\\dim (X) - \\dim (V)$, hence $A/(f_1, \\ldots , f_ a) \\to A/I$ is an isomorphism (if the kernel is nonzero, then the dimension of $A/I$ is strictly less, see Algebra, Lemmas 10.106.2 and 10.60.13). We conclude $I = (f_1, \\ldots , f_ a)$ and $J = (g_1, \\ldots , g_ b)$ by symmetry. Thus the Koszul complex $K_\\bullet (A, f_1, \\ldots , f_ a)$ on $f_1, \\ldots , f_ a$ is a resolution of $A/I$, see More on Algebra, Lemma 15.30.2. 
Hence\n\n\\begin{align*} \\text{Tor}_ p^ A(A/I, A/J) & = H_ p(K_\\bullet (A, f_1, \\ldots , f_ a) \\otimes _ A A/J) \\\\ & = H_ p(K_\\bullet (A/J, f_1 \\bmod J, \\ldots , f_ a \\bmod J)) \\end{align*}\n\nSince we've seen above that $f_1 \\bmod J, \\ldots , f_ a \\bmod J$ is a regular system of parameters in the regular local ring $A/J$ we conclude that there is only one cohomology group, namely $H_0 = A/(I + J) = \\kappa$. This finishes the proof. $\\square$\n\nExample 43.14.4. In this example we show that it is necessary to use the higher tors in the formula for the intersection multiplicities above. Let $X$ be a nonsingular variety of dimension $4$. Let $p \\in X$ be a closed point. Let $V, W \\subset X$ be closed subvarieties in $X$. Assume that there is an isomorphism\n\n$\\mathcal{O}_{X, p}^\\wedge \\cong \\mathbf{C}[[x, y, z, w]]$\n\nsuch that the ideal of $V$ is $(xz, xw, yz, yw)$ and the ideal of $W$ is $(x - z, y - w)$. Then a computation shows that\n\n$\\text{length}\\ \\mathbf{C}[[x, y, z, w]]/ (xz, xw, yz, yw, x - z, y - w) = 3$\n\n(Indeed, substituting $z = x$ and $w = y$ identifies this quotient with $\\mathbf{C}[[x, y]]/(x^2, xy, y^2)$, which has basis $1, x, y$.) On the other hand, the multiplicity $e(X, V \\cdot W, p) = 2$ as can be seen from the fact that formal locally $V$ is the union of two smooth planes $x = y = 0$ and $z = w = 0$ at $p$, each of which has intersection multiplicity $1$ with the plane $x - z = y - w = 0$ (Lemma 43.14.3). To make an actual example, take a general morphism $f : \\mathbf{P}^2 \\to \\mathbf{P}^4$ given by $5$ homogeneous polynomials of degree $> 1$. The image $V \\subset \\mathbf{P}^4 = X$ will have singularities of the type described above, because there will be $p_1, p_2 \\in \\mathbf{P}^2$ with $f(p_1) = f(p_2)$. To find $W$ take a general plane passing through such a point.\n\n
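To see the definition in action, here is the simplest transversal case, worked out as an illustration (this example is added here and is not part of the original text). Take $X = \\mathbf{A}^2$, $V = V(x)$, $W = V(y)$, and let $Z$ be the origin, so $A = \\mathcal{O}_{X, Z}$ is the local ring of the plane at the origin. The Koszul resolution $0 \\to A \\xrightarrow{x} A \\to A/(x) \\to 0$ gives\n\n\\begin{align*} \\text{Tor}_0^A(A/(x), A/(y)) & = A/(x, y) = \\kappa \\\\ \\text{Tor}_1^A(A/(x), A/(y)) & = \\ker (x : A/(y) \\to A/(y)) = 0 \\end{align*}\n\nsince $x$ acts injectively on $A/(y)$. Hence $e(X, V \\cdot W, Z) = 1 - 0 = 1$, in agreement with Lemma 43.14.3." ]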
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7184669,"math_prob":0.9999927,"size":8435,"snap":"2022-05-2022-21","text_gpt3_token_len":2944,"char_repetition_ratio":0.15680228,"word_repetition_ratio":0.038793102,"special_character_ratio":0.35696504,"punctuation_ratio":0.14035088,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000095,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-19T11:28:00Z\",\"WARC-Record-ID\":\"<urn:uuid:a509ea41-48b5-44e6-a109-c18f104cff53>\",\"Content-Length\":\"23603\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:033b8072-77fa-462f-a9f8-e525af4447a2>\",\"WARC-Concurrent-To\":\"<urn:uuid:34e588ec-fb27-4629-a594-1fdbca4b34ee>\",\"WARC-IP-Address\":\"128.59.222.85\",\"WARC-Target-URI\":\"https://stacks.math.columbia.edu/tag/0AZR\",\"WARC-Payload-Digest\":\"sha1:LWKAXJTJRYOTP3DHXV5SSKBSFMEKGCNQ\",\"WARC-Block-Digest\":\"sha1:AE6JWH4P6SJUN4D5OIVM247ZPLCFU4IG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662527626.15_warc_CC-MAIN-20220519105247-20220519135247-00560.warc.gz\"}"}
https://package.frelm.org/repo/152/1.0.6/Matrix2
[ "This is an alternative site for discovering Elm packages. You may be looking for the official Elm package site instead.\n\n# Matrix2\n\n## Matrix2\n\ntype alias Float2x2 = Mat2 Float\ntype alias Mat2 a = ( Vec2 a, Vec2 a )\n\n## General operations\n\nmap : (a -> b) -> Mat2 a -> Mat2 b\n``````elementsSquared =\nmap (\\x -> x ^ 2)\n``````\nmap2 : (a -> b -> c) -> Mat2 a -> Mat2 b -> Mat2 c\n``````elementWiseDivision =\nmap2 (/)\n``````\nfoldl : (elem -> acc -> acc) -> acc -> Mat2 elem -> acc\nfoldr : (elem -> acc -> acc) -> acc -> Mat2 elem -> acc\n\n## Math\n\nidentity : Float2x2\n\nThe identity matrix.\n\n``````I = |1 0|\n|0 1|\n\nI*A = A = A*I\n``````\nfromRows : Float2 -> Float2 -> Float2x2\n\nConstruct a matrix from rows.\n\n``````fromRows (1,2) (3,4) == ((1,2),(3,4))\n``````\nfromColumns : Float2 -> Float2 -> Float2x2\n\nConstruct a matrix from columns.\n\n``````fromColumns (1,2) (3,4) == ((1,3),(2,4))\n``````\nadd : Float2x2 -> Float2x2 -> Float2x2\n\n``````|a b| |e f| |a+e b+f|\n|c d| + |g h| = |c+g d+h|\n``````\nsub : Float2x2 -> Float2x2 -> Float2x2\n\nMatrix subtraction.\n\n`A - B`\n\nmul : Float2x2 -> Float2x2 -> Float2x2\n\nMatrix multiplication.\n\n`A*B`\n\nelementWiseMul : Float2x2 -> Float2x2 -> Float2x2\n\nElement wise multiplication. Also called Hadamard product, Schur product or entrywise product.\n\n``````|a b| |e f| |ae bf|\n|c d| .* |g h| = |cg dh|\n``````\nmulByConst : Float -> Float2x2 -> Float2x2\n\n`a*A` Multiply a matrix by a constant\n\ntranspose : Float2x2 -> Float2x2\n\nThe transpose. Flips a matrix along it's diagonal.\n\n``````|a b|T |a c|\n|c d| = |b d|\n``````\nmulVector : Float2x2 -> Float2 -> Float2\n\nMatrix-vector multiplication.\n\n`````` |a b| |x| |ax+by|\nA*v = |c d|*|y| = |cx+dy|\n``````\n\n## Other\n\nalmostEqual : Float -> Float2x2 -> Float2x2 -> Bool\n\nThis checks whether `|A - B| < eps`.\n\n``````almostEqual eps a b\n``````\n\nThis is useful for testing, see the tests of this library for how this makes testing easy.\n\nSince any definition of a norm can be used for this, it uses the simple `maxNorm`\n\nmaxNorm : Float2x2 -> Float\n\nThe max norm. This is the biggest element of a matrix. 
Useful for fuzz testing.\n\n``````module Matrix2 exposing (..)\n\n{-|\n\n## Matrix2\n\n@docs Float2x2, Mat2\n\n## General operations\n\n@docs map, map2, foldl, foldr\n\n## Math\n\n@docs identity, fromRows, fromColumns\n\n@docs add, sub, mul, elementWiseMul, mulByConst, transpose, mulVector\n\n## Other\n\n@docs almostEqual, maxNorm\n\n-}\n\nimport Vector2 as V2 exposing (Float2, Vec2)\n\n{-| -}\ntype alias Mat2 a =\n( Vec2 a, Vec2 a )\n\n{-| -}\ntype alias Float2x2 =\nMat2 Float\n\n{-|\n\nelementsSquared =\nmap (\\x -> x ^ 2)\n-}\nmap : (a -> b) -> Mat2 a -> Mat2 b\nmap f =\nV2.map (V2.map f)\n\n{-|\n\nelementWiseDivision =\nmap2 (/)\n-}\nmap2 : (a -> b -> c) -> Mat2 a -> Mat2 b -> Mat2 c\nmap2 f =\nV2.map2 (V2.map2 f)\n\n{-| -}\nfoldl : (elem -> acc -> acc) -> acc -> Mat2 elem -> acc\nfoldl f init ( r1, r2 ) =\nV2.foldl f (V2.foldl f init r1) r2\n\n{-| -}\nfoldr : (elem -> acc -> acc) -> acc -> Mat2 elem -> acc\nfoldr f init ( r1, r2 ) =\nV2.foldr f (V2.foldr f init r2) r1\n\n-- Math\n\n{-| The identity matrix.\n\nI = |1 0|\n|0 1|\n\nI*A = A = A*I\n\n-}\nidentity : Float2x2\nidentity =\n( ( 1, 0 )\n, ( 0, 1 )\n)\n\n{-| Construct a matrix from rows.\n\nfromRows (1,2) (3,4) == ((1,2),(3,4))\n\n-}\nfromRows : Float2 -> Float2 -> Float2x2\nfromRows a b =\n( a, b )\n\n{-| Construct a matrix from columns.\n\nfromColumns (1,2) (3,4) == ((1,3),(2,4))\n\n-}\nfromColumns : Float2 -> Float2 -> Float2x2\nfromColumns ( a11, a21 ) ( a12, a22 ) =\n( ( a11, a12 ), ( a21, a22 ) )\n\n{-| Matrix addition.\n\n|a b| |e f| |a+e b+f|\n|c d| + |g h| = |c+g d+h|\n\n-}\nadd : Float2x2 -> Float2x2 -> Float2x2\nadd =\nmap2 (+)\n\n{-| Matrix subtraction.\n\n`A - B`\n\n-}\nsub : Float2x2 -> Float2x2 -> Float2x2\nsub =\nmap2 (-)\n\n{-| Matrix multiplication.\n\n`A*B`\n\n-}\nmul : Float2x2 -> Float2x2 -> Float2x2\nmul ( ( a11, a12 ), ( a21, a22 ) ) ( ( b11, b12 ), ( b21, b22 ) ) =\n( ( a11 * b11 + a12 * b21, a11 * b12 + a12 * b22 )\n, ( a21 * b11 + a22 * b21, a21 * b12 + a22 * b22 )\n)\n\n{-| Element-wise multiplication. 
Also called Hadamard product, Schur product or entrywise product.\n\n|a b| |e f| |ae bf|\n|c d| .* |g h| = |cg dh|\n\n-}\nelementWiseMul : Float2x2 -> Float2x2 -> Float2x2\nelementWiseMul =\nmap2 (*)\n\n{-| `a*A`\nMultiply a matrix by a constant\n-}\nmulByConst : Float -> Float2x2 -> Float2x2\nmulByConst a ( ( a11, a12 ), ( a21, a22 ) ) =\n( ( a * a11, a * a12 ), ( a * a21, a * a22 ) )\n\n-- determinants are not in the scope of this library\n--{-| The determinant.\n--\n-- |a b|\n-- det|c d| = ad - bc\n---}\n--det : Float2x2 -> Float\n--det ( ( a11, a12 ), ( a21, a22 ) ) =\n-- a11 * a22 - a12 * a21\n\n{-| The transpose.\nFlips a matrix along its diagonal.\n\n|a b|T |a c|\n|c d| = |b d|\n\n-}\ntranspose : Float2x2 -> Float2x2\ntranspose ( ( a11, a12 ), ( a21, a22 ) ) =\n( ( a11, a21 )\n, ( a12, a22 )\n)\n\n{-| Matrix-vector multiplication.\n\n|a b| |x| |ax+by|\nA*v = |c d|*|y| = |cx+dy|\n\n-}\nmulVector : Float2x2 -> Float2 -> Float2\nmulVector ( v1, v2 ) v =\n( V2.dot v1 v, V2.dot v2 v )\n\n-- Inverses are not in the scope of this library\n--| The inverse.\n--It's almost always a better idea to use `solve`.\n--http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/\n-- A^-1*A = I = A*A^-1\n--inverse : Float2x2 -> Float2x2\n--inverse (((a11,a12),(a21,a22)) as m) =\n-- scale (1/(det m)) ((a22, -a12),(-a21,a11))\n--\n--solve ((a11, a12), (a21, a22)) (bx, by) =\n\n{-| This checks whether `|A - B| < eps`.\n\nalmostEqual eps a b\n\nThis is useful for testing; see the tests of this library for how this makes testing easy.\n\nSince any definition of a norm can be used for this, it uses the simple `maxNorm`.\n\n-}\nalmostEqual : Float -> Float2x2 -> Float2x2 -> Bool\nalmostEqual eps a b =\nmaxNorm (sub a b) <= eps\n\n{-| The max norm. This is the largest absolute value of the entries of a matrix.\nUseful for fuzz testing.\n-}\nmaxNorm : Float2x2 -> Float\nmaxNorm =\nfoldl (\\elem acc -> max (abs elem) acc) 0\n``````\n\n
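As a quick usage sketch (our own example, not part of the package docs), a rotation matrix built with `fromRows` and applied with `mulVector` looks like this:\n\n``````import Matrix2 exposing (fromRows, mulVector)\n\n-- counter-clockwise rotation by 90 degrees\nrotate90 : Matrix2.Float2x2\nrotate90 =\n    fromRows ( 0, -1 ) ( 1, 0 )\n\n-- mulVector rotate90 ( 1, 0 ) == ( 0, 1 )\n``````" ]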
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.58313835,"math_prob":0.9948679,"size":4657,"snap":"2019-13-2019-22","text_gpt3_token_len":1761,"char_repetition_ratio":0.14119923,"word_repetition_ratio":0.35833332,"special_character_ratio":0.45308137,"punctuation_ratio":0.17211328,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99876535,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-23T23:12:18Z\",\"WARC-Record-ID\":\"<urn:uuid:83aaa9f8-e448-440f-bbad-a0e67a7d0743>\",\"Content-Length\":\"15264\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a4ff2c04-45c2-49c8-9f73-49d2ae9236fe>\",\"WARC-Concurrent-To\":\"<urn:uuid:4071ada8-94e3-4446-be00-ec9272c1816a>\",\"WARC-IP-Address\":\"54.85.157.136\",\"WARC-Target-URI\":\"https://package.frelm.org/repo/152/1.0.6/Matrix2\",\"WARC-Payload-Digest\":\"sha1:7SYWUW2O2AGPRZJF3YI536TRLMI6752Y\",\"WARC-Block-Digest\":\"sha1:BXQRFFLZ7GHJXBFSPSEDUTPHSWJZWH7Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912203093.63_warc_CC-MAIN-20190323221914-20190324003914-00406.warc.gz\"}"}
https://www.michaelpj.com/blog/2012/12/29/covariance-and-contravariance-in-scala.html
[ "I spent some time trying to figure out co- and contra-variance in Scala, and it turns out to be both interesting enough to be worth blogging about, and subtle enough that doing so will test my understanding!\n\nSo, you’ve probably seen classes in scala that look a bit like this:\n\nsealed abstract class List[+A] {\ndef ::[B >: A](x : B) : List[B] = ...\n...\n}\n\n\nAnd you’ve probably heard that the +A means that A is a “covariant type parameter”, whatever that means. And if you’ve tried to use classes with co- or contra-variant type parameters, you’ve probably run into cryptic errors about “covariant positions” and other such gibberish. Hopefully, by the end of this post, you’ll have some idea what that all means.\n\nThe first thing that’s going on there is that List is a “generic” type. That is, you can have lots of List types. You can have List[Int], and List[MyClass] or whatever. To put this in another way, List[_] is a type constructor; it’s like a function that takes another concrete type and produces a new one. So if you already have a type X, you can use the List type constructor to make a new type, List[X].\n\n## A little bit of category theory\n\nTo get the cool stuff in all its generality, we’re going to need to start thinking about things in terms of tags. Fortunately, it’s pretty non-scary tags stuff. Recall that a category $\\mathcal{C}$ is just some objects and some arrows (which we usually gloss as “functions”). Arrows go from one object to another, and the only requirements for being a category are that you have some binary operation on arrows (usually glossed as “composition”), that makes new arrows that go from and to the right places; and that you have an “identity” arrow on every object that does just what you’d expect.1 The category we’re mostly interested in is the category of types: types like Int, Person, Map[Foo, Bar] are the objects, and arrows are precisely functions.\n\nThe other concept we’re going to need is that of a functor. A functor $F : \\mathcal{C} \\rightarrow \\mathcal{D}$ is a mapping between tags. However, there’s no reason you can’t have functors from tags to themselves (helpfully called “endofunctors”), and those are the ones we’re going to be interested in. Functors have to turn objects in the source category into objects in the target category, and they also have to turn arrows into new arrows. Again, functors have to obey certain laws, but don’t worry too much about that.2\n\nOkay, so who cares about functors? The answer is that type constructors are basically functors on the category of types. How is that? Well, they turn types (which are our objects) into other types: check! But what about the arrows (i.e. functions). Don’t functors have to map those over as well? Yes, they do, but in Scala we don’t call the function that comes out of the List functor List[f], we call it map(f).3\n\nOne final concept and then I promise this will start to get relevant. Some mappings between tags look a lot like functors, except that they reverse the direction of arrows. So instead of getting $F(f): FX \\rightarrow FY$, you get $F(f): FY \\rightarrow FX$. So these got a special name, they’re called contravariant functors. To distiguish them, normal functors are called covariant functors.\n\nLook at that, there are those funny words again. But what on earth do contravariant functors have to do with Scala?\n\nGood question.\n\n## Subtyping\n\nThe key feature of Scala, for our purposes, is that it’s a language with subtyping. 
Classes (types) can be sub- or super-types of other classes. This gives us the familiar idea of a class hierarchy. Looking at it mathematically, we can say that we have a relation $<:$ between types that acts as a partial order. Here comes neat Category Theory Trick no. 1: we can view any partially ordered set as a category! The objects are the objects, and we have an arrow $A \\rightarrow B$ iff $A <: B$. This is a bit weird, because we’re only ever going to have one arrow between objects, and they’re not really “functions” any more, but all the formal machinery still works.4\n\nNow some type constructors on this category still look like functors. They map objects to other objects, and if one of those objects is a subtype of the other, then they may or may not impose a relationship between the mapped objects.\n\nThis is where the Scala type annotations come in. When we declare List[+A], we are saying that List is covariant in the parameter A.5 What that means is that it takes a type, say Parent, to a new type List[Parent], and if Child is a subtype of Parent, then List[Child] will be a subtype of List[Parent]. If we’d declared List to be contravariant (List[-A]), then List[Child] would be a supertype of List[Parent].\n\nThere’s one final possibility. Since subtyping is a partial order, we can have two types where neither one is a subtype of the other. There’s no reason in principle why a type constructor T couldn’t take Parent and Child to new types which were completely unrelated. In Scala, this is the case when you don’t provide an annotation for the type in the declaration; such a constructor is said to be invariant in that parameter. Arrays, for example, have this property.\n\nAnd that, fundamentally, is it. That’s what those little +s and -s on type parameters mean. You can go home now.\n\nclass GParent\nclass Parent extends GParent\nclass Child extends Parent\nclass Box[+A]\nclass Box2[-A]\n\ndef foo(x : Box[Parent]) : Box[Parent] = identity(x)\ndef bar(x : Box2[Parent]) : Box2[Parent] = identity(x)\n\nfoo(new Box[Child]) // success\nfoo(new Box[GParent]) // type error\n\nbar(new Box2[Child]) // type error\nbar(new Box2[GParent]) // success\n\n\n## But what about those cryptic errors?\n\nclass Box[+A] {\ndef set(x : A) : Box[A]\n}\n// won't compile\n\n\nYou get these kinds of errors in Scala because of the subtleties of how variance relates to functions (and later, methods). We can see that there’s something weird going on if we look at the declaration of the Function trait:\n\ntrait Function1[-T1, +R] {\ndef apply(t : T1) : R\n...\n}\n\n\nWhoa. That’s pretty strange. Not only does it have two type parameters, one of them is contravariant. Weird. Let’s work through this methodically.\n\nWe have Function1[A,B], which is a type of one-parameter functions that go from type A to type B. It can therefore be a sub- or super-type of other (function) types. For example,\n\nFunction1[GParent, Child] <: Function1[Parent, Parent]\n\n\nHow do I know this? Because of the variance annotations on Function1. The first parameter is contravariant, so can vary upwards, and the second parameter is covariant, so can vary downwards.\n\nThe reason why Function1 behaves in this way is a bit subtle, but makes sense if you think about the way substitution has to work when you have subtyping. If you have a function from A to B, what can you substitute for it? 
Anything you put in its place must make fewer requirements on its input type; since the function can’t, for example, get away with calling a method that only exists on subtypes of A. On the other hand, it must return a type at least as specialised as B, since the caller of the function may be expecting all the methods on B to be available.\n\n## Function Functors\n\nThere’s actually a nice category theory justification for why things have to be this way. In general, for any category $\\mathcal{C}$ we can also construct a category of the Hom-sets of $\\mathcal{C}$. Functions between these sets will just be higher-order functions that turn functions into different functions. There is then an obvious functor, $Hom(-, -)$ that takes two objects A and B and produces $Hom(A, B)$. The Hom-functor is a bit tricky because it’s a bifunctor: it takes two arguments. The easiest way to deal with it is to sort of “partially apply” it and look at how it behaves on each of its arguments individually.\n\nSo $Hom(A, -)$ takes an object B to the set of functions from A to B. How does it act on functions? If we have a morphism $f:B \\rightarrow B’$ we need a function $Hom(A, f): Hom(A, B) \\rightarrow Hom(A, B’)$. The obvious definition is\n\n$Hom(A, f)(g) = f \\circ g$\n\nThat is, you do g first, to get from A to B, and then f to get from B to B’. So $Hom(A, -)$ acts as a covariant functor.\n\nOn the other hand, if you try and make $Hom(-, B)$ into a covariant functor, good luck! The types just don’t line up if you try and do composition. What does work is the following:\n\n$Hom(f, B)(g) = g \\circ f$\n\nwhere g is in $Hom(B’, B)$, rather than $Hom(A, B)$. So $Hom(-, B)$ acts as a contravariant functor.6 Which makes $Hom(A, B)$ contravariant in A, and covariant in B – just like Function1!7\n\nThis is actually a more general result, since it applies in any category, and not just in the category of types with subtyping. Cool!\n\n## Back to Earth\n\nOkay, so functions in Scala have these weird variance properties. But from a theoretical point of view, methods are just functions, and so they ought to have the same variance properties, even though we can’t see them (methods don’t have a trait in Scala!).\n\nSo we can now see why we got that cryptic compile error. We declared that A was covariant in our class, and also that set takes a parameter of type A. But then, for some B <: A, we could replace an instance of Box[A] with an instance of Box[B], and a caller still holding it as a Box[A] could pass any x : A to set, even though Box[B].set only accepts values of type B. That would require the argument type of set to vary covariantly, when, for the reasons we discussed above, it can at best be contravariant. So this would allow us to do stuff we shouldn’t be able to do. Likewise, if we declared A as contravariant then we would run into conflict with the return type of set. So it looks like we have to make A invariant.\n\nAs an aside, this is why it’s an absolutely terrible idea that Java’s arrays are covariant. That means that you can write code like the following:\n\nInteger[] ints = {1, 2};\nObject[] objs = ints;\nobjs[0] = \"I'm an integer!\";\n\n\nWhich will compile, but throw an ArrayStoreException at runtime. Nice.\n\nActually, we don’t have to make container types with an “append”-like method invariant. Scala also lets us put type bounds on things. So if we modify Box as follows:\n\nabstract class BoundedBox[+A] {\ndef set[B >: A](x : B) : BoundedBox[B]\n}\n\n\nthen it will compile. This ensures that the input type of the set method is properly contravariant.\n\n
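Putting these pieces together, here is a small sketch that compiles as-is (it reuses the toy class hierarchy from above; the List-backed container is my own illustration):\n\nval f : GParent => Child = _ => new Child\nval g : Parent => Parent = f // fine: Function1[GParent, Child] <: Function1[Parent, Parent]\n\n// a concrete bounded container, backed by a List purely for illustration\nclass ListBox[+A](val items : List[A]) {\ndef set[B >: A](x : B) : ListBox[B] = new ListBox(x :: items)\n}\n\nval cb : ListBox[Child] = new ListBox[Child](Nil)\nval pb : ListBox[Parent] = cb.set(new Parent) // the result widens to ListBox[Parent]\n\n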
And that’s about it. The thing to remember with Scala is that everything is a method. So if you’re getting surprising variance errors, it might be that you have a sneaky method somewhere that needs a lower bound.\n\n1. In full, the requirements are:\n\nA class of objects: $Obj(\\mathcal{C})$\n\nFor every pair of objects, a class of morphisms between them: $Hom(A, B)$\n\nA binary operation $\\circ : Hom(A, B) \\times Hom(B, C) \\rightarrow Hom(A, C)$ which is associative and has the identity morphism as its identity.\n\n2. These are:\n\n$F(id_{X}) = id_{FX}$\n\n$F(f \\circ g) = F(f) \\circ F(g)$\n\n3. The astute reader will have noticed that not all type constructors come with a map function. This does indeed mean that not all type constructors are functors. But pretend that they are for now.\n\n4. Crucially, we can use the relation to give us our arrows because it’s transitive, and hence composition will work properly.\n\n5. Yes, there can be more than one parameter. Don’t worry about it for now.\n\n6. If you’re wondering whether there couldn’t be some other way of mapping the functions that would work, it turns out that there can’t be one that also makes the functor laws work. You can try it yourself if you don’t believe me!\n\n7. We actually need to do a little bit more work to show that $Hom(-, -)$ is a true bifunctor (functor on the product category), but it’s not terribly interesting." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9004046,"math_prob":0.9639817,"size":11707,"snap":"2022-40-2023-06","text_gpt3_token_len":2951,"char_repetition_ratio":0.1287704,"word_repetition_ratio":0.0029112082,"special_character_ratio":0.2475442,"punctuation_ratio":0.12358185,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9732232,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-04T17:51:06Z\",\"WARC-Record-ID\":\"<urn:uuid:826cf291-d108-480b-86c1-f07bf90894e1>\",\"Content-Length\":\"34140\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:817c7d01-4525-473d-8395-b38a60b6a09c>\",\"WARC-Concurrent-To\":\"<urn:uuid:161ec2f4-86af-4c46-afba-21922db459bd>\",\"WARC-IP-Address\":\"45.63.99.65\",\"WARC-Target-URI\":\"https://www.michaelpj.com/blog/2012/12/29/covariance-and-contravariance-in-scala.html\",\"WARC-Payload-Digest\":\"sha1:USQTSRD6CQZDLM4DS47H6KJ4OGYUIIHP\",\"WARC-Block-Digest\":\"sha1:LKRRPLB27UBOF5CS34WXCHWANA3V3UJK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500151.93_warc_CC-MAIN-20230204173912-20230204203912-00167.warc.gz\"}"}
http://www.birs.ca/events/2020/5-day-workshops/20w5194/schedule
[ "# Schedule for: 20w5194 - Topological Complexity and Motion Planning (Online)\n\nBeginning on Thursday, September 17 and ending Sunday September 20, 2020\n\nAll times in Oaxaca, Mexico time, CDT (UTC-5).\n\nThursday, September 17\n08:55 - 09:00 Introduction and Welcome by CMO Staff (Online)\n09:00 - 09:45 Michael Farber: Topology of parametrised motion planning algorithms\nWe introduce and study a new concept of parameterised topological complexity, a topological invariant motivated by the motion planning problem of robotics. In the parametrised setting, a motion planning algorithm has high degree of universality and flexibility, it can function under a variety of external conditions (such as positions of the obstacles etc). We explicitly compute the parameterised topological complexity of obstacle-avoiding collision-free motion of many particles (robots) in 3-dimensional space. Our results show that the parameterised topological complexity can be significantly higher than the standard (non-parametrised) invariant. Joint work with Daniel Cohen and Shmuel Weinberger.\n(Online)\n09:45 - 10:00 Group Photo (Online)\nPlease turn on your cameras for the \"group photo\" -- a screenshot in Zoom's Gallery view.\n(Online)\n10:00 - 10:45 Ayse Borat: A simplicial analog of homotopic distance\nHomotopic distance as introduced by Macias-Virgos and Mosquera-Lois in can be realized as a generalization of topological complexity (TC) and Lusternik Schnirelmann category (cat). In this talk, we will introduce a simplicial analog (in the sense of Gonzalez in ) of homotopic distance and show that it has a relation with simplicial complexity (SC) as homotopic distance has with TC. We will also introduce some basic properties of simplicial distance.\n\n J. Gonzalez, Simplicial Complexity: Piecewise Linear Motion Planning in Robotics, New York Journal of Mathematics 24 (2018), 279-292.\n\n E. Macias-Virgos, D. Mosquera-Lois, Homotopic Distance between Maps, preprint. arXiv: 1810.12591v2.\n(Online)\n10:45 - 11:15 Coffee break (Online)\n11:15 - 11:30 Daniel Koditschek: Vector Field Methods of Motion Planning\nA long tradition in robotics has deployed dynamical systems as “reactive” motion planners by encoding goals as attracting sets and obstacles as repelling sets of vector fields arising from suitably constructed feedback laws . This raises the prospects for a topologically informed notion of “closed loop” planning complexity , holding substantial interest for robotics, and whose contrast with the original “open loop” notion may be of mathematical interest as well. This talk will briefly review the history of such ideas and provide context for the next three talks which discuss some recent advances in the closed loop tradition, reviewing the implications for practical robotics as well as associated mathematical questions.\n\n D. E. Koditschek and E. Rimon, “Robot navigation functions on manifolds with boundary,” Adv. Appl. Math., vol. 11, no. 4, pp. 412–442, 1990, doi: doi:10.1016/0196-8858(90)90017-S.\n\n Y. Baryshnikov and B. Shapiro, “How to run a centipede: a topological perspective,” in Geometric Control Theory and Sub-Riemannian Geometry, Springer International Publishing, 2014, pp. 37–51.\n\n M. Farber, “Topological complexity of motion planning,” Discrete Comput. Geom., vol. 29, no. 2, pp. 
211–221, 2003.\n(Online)\n11:30 - 11:45 Vasileios Vasilopoulos: Doubly Reactive Methods of Task Planning for Robotics\nA recent advance in vector field methods of motion planning for robotics replaced the need for perfect a priori information about the environment’s geometry with a real-time, “doubly reactive” construction that generates the vector field as well as its flow at execution time – directly from sensory inputs – but at the cost of assuming a geometrically simple environment . Still more recent developments have adapted to this doubly reactive online setting the original offline deformation of detailed obstacles into their geometrically simple topological models . Consequent upon these new insights and algorithms, empirical navigation can now be achieved in partially unknown unstructured physical environments by legged robots, with formal guarantees that ensure safe convergence for simpler, wheeled mechanical platforms . These ideas can be extended to cover a far broader domain of robot task planning wherein the robot has the job of rearranging objects in the world by visiting, grasping, moving them and then repeating as necessary until the rearrangement task is complete.\n\n D. Koditschek and E. Rimon, “Exact robot navigation using artificial potential functions,” IEEE Trans Robot Autom., vol. 8, pp. 501–518, 1992.\n\n O. Arslan and D. E. Koditschek, “Sensor-based reactive navigation in unknown convex sphere worlds,” Int. J. Robot. Res., vol. 38, no. 2–3, pp. 196–223, Mar. 2019, doi: 10.1177/0278364918796267.\n\n V. Vasilopoulos and D. E. Koditschek, “Reactive Navigation in Partially Known Non-convex Environments,” in Algorithmic Foundations of Robotics XIII, Cham, 2020, vol. 14, pp. 406–421, doi: 10.1007/978-3-030-44051-0_24.\n\n E. Rimon and D. E. Koditschek, “The construction of analytic diffeomorphisms for exact robot navigation on star worlds,” Trans. Am. Math. Soc., vol. 327, no. 1, pp. 71–116, 1991.\n\n V. Vasilopoulos et al., “Reactive Semantic Planning in Unexplored Semantic Environments Using Deep Perceptual Feedback,” IEEE Robot. Autom. Lett., vol. 5, no. 3, pp. 4455–4462, Jul. 2020, doi: 10.1109/LRA.2020.3001496.\n\n V. Vasilopoulos, W. Vega-Brown, O. Arslan, N. Roy, and D. E. Koditschek, “Sensor-Based Reactive Symbolic Planning in Partially Known Environments,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), May 2018, pp. 1–5, doi: 10.1109/ICRA.2018.8460861.\n\n V. Vasilopoulos, T. T. Topping, W. Vega-Brown, N. Roy, and D. E. Koditschek, “Sensor-Based Reactive Execution of Symbolic Rearrangement Plans by a Legged Mobile Manipulator,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2018, pp. 3298–3305, doi: 10.1109/IROS.2018.8594342.\n(Online)\n11:45 - 12:00 Paul Gustafson: A Category Theoretic Treatment of Robot Hybrid Dynamics with Applications to Reactive Motion Planning and Beyond\nHybrid dynamical systems have emerged from the engineering literature as an interesting new class of mathematical objects that intermingle features of both discrete time and continuous time systems. In a typical engineering setting, a hybrid system describes the evolution of states driven into different physical modes by events that may be instigated by an external controller or simply imposed by the natural world. 
Extending the formal convergence and safety guarantees of the original omniscient reactive systems introduced in the first talk of this series to the new imperfectly known environments negotiated by their doubly reactive siblings introduced in the second talk requires reasoning about hybrid dynamical systems wherein each new encounter with a different obstacle triggers a reset of the continuous model space . A recent categorical treatment of robot hybrid dynamical systems affords a method of hierarchical composition, raising the prospect of further formal extensions that might cover as well the more broadly useful class of mobile manipulation tasks assigned to dynamically dexterous (e.g., legged) robots.\n\n V. Vasilopoulos, G. Pavlakos, K. Schmeckpeper, K. Daniilidis, and D. E. Koditschek, “Reactive Navigation in Partially Familiar Non-Convex Environments Using Semantic Perceptual Feedback,” Rev., p. (under review), 2019, [Online]. Available: https://arxiv.org/abs/2002.08946.\n\n J. Culbertson, P. Gustafson, D. E. Koditschek, and P. F. Stiller, “Formal composition of hybrid systems,” Theory Appl. Categ., no. arXiv:1911.01267 [cs, math], p. (under review), Nov. 2019, Accessed: Nov. 24, 2019. [Online]. Available: http://arxiv.org/abs/1911.01267.\n\n A. M. Johnson, S. A. Burden, and D. E. Koditschek, “A hybrid systems model for simple manipulation and self-manipulation systems,” Int. J. Robot. Res., vol. 35, no. 11, pp. 1354--1392, Sep. 2016, doi: 10.1177/0278364916639380.\n(Online)\n12:00 - 12:15 Matthew Kvalheim: Toward a Task Planning Theory for Robot Hybrid Dynamics\nA theory of topological dynamics for hybrid systems has recently begun to emerge . This talk will discuss this theory and, in particular, explain how suitably restricted objects in the formal category introduced in the third talk of this series can be shown to admit a version of Conley’s Fundamental Theorem of Dynamical Systems. This raises the hope for a more general theory of dynamical planning complexity that might bring mathematical insights from both the open loop and closed loop tradition to the physically ineluctable but mathematically under-developed class of robot hybrid dynamics .\n\n Y. Baryshnikov and B. Shapiro, “How to run a centipede: a topological perspective,” in Geometric Control Theory and Sub-Riemannian Geometry, Springer International Publishing, 2014, pp. 37–51.\n\n M. Farber, “Topological complexity of motion planning,” Discrete Comput. Geom., vol. 29, no. 2, pp. 211–221, 2003.\n\n A. M. Johnson, S. A. Burden, and D. E. Koditschek, “A hybrid systems model for simple manipulation and self-manipulation systems,” Int. J. Robot. Res., vol. 35, no. 11, pp. 1354--1392, Sep. 2016, doi: 10.1177/0278364916639380.\n\n M. D. Kvalheim, P. Gustafson, and D. E. Koditschek, “Conley’s fundamental theorem for a class of hybrid systems,” ArXiv200503217 Cs Math, p. (under review), May 2020, Accessed: May 31, 2020. [Online]. Available: http://arxiv.org/abs/2005.03217.\n(Online)\n12:15 - 13:15 Chat Rooms (Online)\nFriday, September 18\n09:00 - 09:45 Jie Wu: Topological complexity of the work map\nWe introduce the topological complexity of the work map associated to a robot system. In broad terms, this measures the complexity of any algorithm controlling, not just the motion of the configuration space of the given system, but the task for which the system has been designed. From a purely topological point of view, this is a homotopy invariant of a map which generalizes the classical topological complexity of a space. 
Joint work with Aniceto Murillo.\n(Online)\n09:45 - 10:00 Coffee break (Online)\n10:00 - 10:45 Petar Pavesic: Two questions on TC\n1. What is the $TC$ of a wedge?\n\nIn the literature one can find two relatively coarse estimates of $TC(X\\vee Y)$: Farber states that $$\\max\\{TC(X),TC(Y)\\} \\le TC(X\\vee Y)\\le \\max\\{TC(X),TC(Y), cat(X)+cat(Y)-1\\}$$ (where the proof of the upper bound is only sketched), while Dranishnikov gives $$\\max\\{TC(X),TC(Y), cat(X\\times Y)\\} \\le TC(X\\vee Y)\\le TC(X)+TC(Y)+1.$$ At first sight the two estimates almost contradict each other, because the overlap of the two intervals is very small. Nevertheless, all known examples satisfy both estimates. We will show that under suitable assumptions Dranishnikov's method yields a proof of Farber's upper bound.\n\n2. What can be said about closed manifolds with small TC?\n\nIf $M$ is a closed manifold with $TC(M)=2$, then by Grant, Lupton and Oprea $M$ is homeomorphic to an odd-dimensional sphere. We will make another step and study closed manifolds whose topological complexity is equal to 3.\n\nOf course, all spaces considered are CW-complexes and $TC(\\mathbf{\\cdot})=1$.\n\n A. Dranishnikov. Topological complexity of wedges and covering maps. Proc. AMS 142, 2014, 4365-4376.\n\n M. Farber, Topology of robot motion planning. Morse theoretic methods in nonlinear analysis and in symplectic topology, NATO Sci.Ser.II, Math.Phys.Chem., vol. 217, Springer, Dordrecht, 2006.\n\n M. Grant, G. Lupton, J. Oprea, Spaces of topological complexity one. Homology, Homotopy and Applications, 15, 2013, 73-61.\n(Online)\n10:45 - 11:15 Coffee break (Online)\n11:15 - 12:00 Hellen Colman: Morita Invariance of Invariant Topological Complexity\nWe show that the invariant topological complexity defines a new numerical invariant for orbifolds.\n\nOrbifolds may be described as global quotients of spaces by compact group actions with finite isotropy groups. The same orbifold may have descriptions involving different spaces and different groups. We say that two actions are Morita equivalent if they define the same orbifold. Therefore, any notion defined for group actions should be Morita invariant to be well defined for orbifolds.\n\nWe use the homotopy invariance of equivariant principal bundles to prove that the equivariant A-category of Clapp and Puppe is invariant under Morita equivalence. As a corollary, we obtain that both the equivariant Lusternik-Schnirelmann category of a group action and the invariant topological complexity are invariant under Morita equivalence. This allows a definition of topological complexity for orbifolds.\n\nThis is joint work with Andres Angel, Mark Grant and John Oprea\n(Online)\n12:00 - 13:00 Chat Rooms (Online)\nSaturday, September 19\n09:00 - 09:45 Stephan Mescher: Spherical complexities and closed geodesics\nI will present a new kind of integer-valued homotopy invariants of topological spaces, which allow for a Lusternik-Schnirelmann-type approach to counting critial orbits of G-invariant functions on subspaces of C^0(S^n,X). Here, G is a closed subgroup of O(n+1) acting on C^0(S^n,X) by reparametrization. These invariants, so-called spherical complexities, are sectional categories of fibrations generalizing topological complexity. I will explain how to obtain lower bounds for them using sectional category weights of cohomology classes and how to find suitable classes of higher weight. Moreover, I will present some consequences of the method for the topological complexity of manifolds. 
As an application, I will outline how to derive new existence results for closed geodesics of Finsler metrics of positive flag curvature on spheres. Closed geodesics are given as the critical points of the SO(2)-invariant energy functional of the Finsler metrics on a Hilbert manifold of free loops, thus well-suited to our approach.\n(Online)\n09:45 - 10:00 Coffee break (Online)\n10:00 - 10:45 Yuliy Baryshnikov: Euler characteristics of exotic configuration spaces\nExponential generating functions for Euler characteristics of exotic configuration spaces have a remarkably simple representation in terms of the local geometry of the underlying spaces.\n(Online)\n10:45 - 11:15 Coffee break (Online)\n11:15 - 12:00 Alexander Dranishnikov: On topological complexity of hyperbolic groups\nWe will discuss the proof of the equality TC(G)=2cd(G) for nonabelian hyperbolic groups\n(Online)\n12:00 - 13:00 Chat Rooms (Online)\nSunday, September 20\n09:00 - 09:45 David Recio-Mitter: Geodesic complexity and motion planning on graphs\nThe topological complexity TC(X) of a space X was introduced in 2003 by Farber to measure the instability of robot motion planning in X. The motion is not required to be along shortest paths in that setting. We define a new version of topological complexity in which we require the robot to move along shortest paths (more specifically geodesics), which we call the geodesic complexity GC(X). In order to study GC(X) we introduce the total cut locus.\n\nWe show that the geodesic complexity is sensitive to the metric and in general differs from the topological complexity, which only depends on the homotopy type of the space. We also show that in some cases both numbers agree. In particular, we construct the first optimal motion planners on configuration spaces of graphs along shortest paths (joint work with Donald Davis and Michael Harrison).\n(Online)\n09:45 - 10:00 Coffee break (Online)\n10:00 - 10:45 John Oprea: Logarithmicity, the TC-generating function and right-angled Artin groups\nThe $TC$-generating function associated to a space $X$ is the formal power series $\\mathcal{F}_X(x) = \\sum_{r=1}^\\infty TC_{r+1}(X)\\,x^r.$ For many examples $X$, it is known that $\\mathcal{F}_X(x)= \\frac{P_X(x)}{(1-x)^2},$ where $P_X(x)$ is a polynomial with $P_X(1)=cat(X)$. Is this true in general? I shall discuss recent developments concerning this question, including observing that the answer is related to $X$ satisfying logarithmicity of LS-category. Also, in the examples mentioned above, it is always the case that $P_X(x)$ has degree less than or equal to $2$. Is this true in general? I shall discuss this question in the context of right-angled Artin (RAA) groups and along the way see how RAA groups yield some interesting byproducts for the study of $TC$.\n(Online)\n10:45 - 11:15 Coffee break (Online)\n11:15 - 12:00 Don Davis: Geodesic complexity of non-geodesic spaces\nWe define the notion of near geodesic between points where no geodesic exists, and use this to define geodesic complexity for non-geodesic spaces. We determine explicit near geodesics and geodesic complexity in a variety of cases.\n(Online)\n12:00 - 13:00 Chat Rooms (Online)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83938545,"math_prob":0.8645498,"size":17027,"snap":"2021-04-2021-17","text_gpt3_token_len":4504,"char_repetition_ratio":0.122422606,"word_repetition_ratio":0.092319936,"special_character_ratio":0.26510835,"punctuation_ratio":0.17738023,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9608752,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-20T01:08:51Z\",\"WARC-Record-ID\":\"<urn:uuid:ec2fc530-c1c3-4075-821c-cd1fb4fbee3c>\",\"Content-Length\":\"35008\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a3f431fe-4799-498a-9183-38136160629b>\",\"WARC-Concurrent-To\":\"<urn:uuid:822c8ac1-dc60-4126-9c5c-ed69796bdb17>\",\"WARC-IP-Address\":\"172.67.157.47\",\"WARC-Target-URI\":\"http://www.birs.ca/events/2020/5-day-workshops/20w5194/schedule\",\"WARC-Payload-Digest\":\"sha1:N7OND54NA4LHYGMB37JFENRMRNUFV5II\",\"WARC-Block-Digest\":\"sha1:B3DZWI3XROUC24JNRXUNZMXZU3B6CPLY\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703519843.24_warc_CC-MAIN-20210119232006-20210120022006-00402.warc.gz\"}"}
https://gis.stackexchange.com/questions/72648/how-to-do-regression-analysis-out-of-memory-on-a-set-of-large-rasters-in-r
[ "# How to do regression analysis out-of-memory on a set of large rasters in R?\n\nI am trying to do a regression analysis on a set of large rasters (254,004,000 cells each). Ultimately, I want to run something like the following (or a bit more complex, but let's start simple!):\n\nmodel<-lm(dv ~ iv1+iv2+iv3... data=df,na.action=na.exclude)\n\nwhere \"dv\" is the values from one raster and \"iv1\", \"iv2\" \"iv3\" ... are values from other rasters (up to 10 variables) with the same extent and resolution. It seems I should be able to do this out-of-memory using the Raster package, but I am confused how. Whether I create a brick, stack, or set of individual Raster objects, I cannot figure out how to send the variables to the lm function without using getValues and thus calling everything into memory (mine cannot even handle two variables).\n\nA point in the right direction would be much appreciated!\n\n• I would take a moment and consider this in statistical terms. 1) you are effectively using the population and not a sample thus, negating the need for a regression. 2) using all the cells in a rasters is going to certainty add an unnecessary autocorrelation issue to a linear model. 3) In classical statistical terms, you will have a psuedoreplication (lack of independence) issue. 4) I highly doubt that you would meet iid assumptions. I would recommend taking a sample of the raster(s), use the sample data to build your regression model then estimate the model to your raster(s). Sep 26, 2013 at 23:26\n• Thanks, Jeffery, I appreciate the note. This would not be my final statistical product, but in conjunction with autocorrelation plots for each variable, I find it helps me with diagnostics. The answer below seems like it might be a fruitful path. Sep 27, 2013 at 13:30\n• I beg to differ (slightly) with some of @Jeffrey Evans' points. First, regression for an entire population is meaningful: it describes relationships among variables. Second, autocorrelation is not necessarily a problem, but the advice to worry about it is excellent. There is a direct solution: tile your rasters. For each tile compute the mean, the count, and the [SSP matrix]. You can combine these statistics and proceed with the solution. There's no limit to the raster size this applies to. Another approach (using 2 rasters at a time) is given at stats.stackexchange.com/a/71257. Sep 27, 2013 at 15:56\n• I should be more specific. I do believe that regression approaches on rasters are useful in the context of \"exploratory\" analysis. One thought, have you considered an OLS rather than a straight linear model? The resulting residual error in OLS is a bit more robust to autocorrelation issues. Sep 27, 2013 at 19:22\n• Sorry, I should have been more specific as well. I do not necessarily want to pin myself to a linear model, I just thought that if I could get something to run with lm, I could carry it over to other similar packages (certainly not the most direct approach; a direct route to an OLS or other more robust method would be very welcome!) I have seen several examples of lm run in memory with the Raster package and get the impression that it can manage problems like these out of memory as well using a brick/stack object Sep 27, 2013 at 20:18\n\nThe help for `lm` references `biglm`:\n`biglm` in package biglm for an alternative way to fit linear models to large datasets.\nThe help pages for `biglm` indicate this package was developed for precisely such problems. 
The algorithm it references, AS274, is an updating procedure, allowing a solution based on a subset of the cases (cells) to be modified as additional cases are given.\n\n
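In practice this means the model can be fed one block of raster cells at a time. A minimal sketch (untested; the file names and three-predictor formula are placeholders for your own layers):\n\nlibrary(raster)\nlibrary(biglm)\n\ns <- stack(\"dv.tif\", \"iv1.tif\", \"iv2.tif\", \"iv3.tif\")\nnames(s) <- c(\"dv\", \"iv1\", \"iv2\", \"iv3\")\nbs <- blockSize(s) # chunking suggested by the raster package\nf <- dv ~ iv1 + iv2 + iv3\n\nmodel <- NULL\nfor (i in seq_len(bs$n)) {\n  vals <- as.data.frame(getValues(s, row = bs$row[i], nrows = bs$nrows[i]))\n  vals <- vals[complete.cases(vals), ] # biglm has no na.action argument, so drop NA rows here\n  if (nrow(vals) == 0) next\n  model <- if (is.null(model)) biglm(f, data = vals) else update(model, vals)\n}\nsummary(model)\n\nOnly one block of cell values is held in memory at a time, and `update` folds each additional block into the fit via the AS274 updating algorithm." ]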
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9387217,"math_prob":0.7284621,"size":1855,"snap":"2023-40-2023-50","text_gpt3_token_len":365,"char_repetition_ratio":0.102647215,"word_repetition_ratio":0.0,"special_character_ratio":0.19191375,"punctuation_ratio":0.081871346,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9615681,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-21T16:53:35Z\",\"WARC-Record-ID\":\"<urn:uuid:aba535ca-cd36-46d2-b150-7047b102632b>\",\"Content-Length\":\"170489\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cdc0f646-1779-4ddc-8147-43f5d042cd0f>\",\"WARC-Concurrent-To\":\"<urn:uuid:8e425376-60c7-42e5-96f7-05cc5f48af56>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://gis.stackexchange.com/questions/72648/how-to-do-regression-analysis-out-of-memory-on-a-set-of-large-rasters-in-r\",\"WARC-Payload-Digest\":\"sha1:S2ZODPFOWE4EALQP5A4DF7AMK3A7GJ3L\",\"WARC-Block-Digest\":\"sha1:O6B5MXQ5VKXR2VWOLCCXJEC3XNB7XJIX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506028.36_warc_CC-MAIN-20230921141907-20230921171907-00063.warc.gz\"}"}
https://projecteuclid.org/euclid.jsl/1243948324
[ "## Journal of Symbolic Logic\n\n### Learning correction grammars\n\n#### Abstract\n\nWe investigate a new paradigm in the context of learning in the limit, namely, learning correction grammars for classes of computably enumerable (c.e.) languages. Knowing a language may feature a representation of it in terms of two grammars. The second grammar is used to make corrections to the first grammar. Such a pair of grammars can be seen as a single description of (or grammar for) the language. We call such grammars correction grammars. Correction grammars capture the observable fact that people do correct their linguistic utterances during their usual linguistic activities.\n\nWe show that learning correction grammars for classes of c.e. languages in the TxtEx-model (i.e., converging to a single correct correction grammar in the limit) is sometimes more powerful than learning ordinary grammars even in the TxtBc-model (where the learner is allowed to converge to infinitely many syntactically distinct but correct conjectures in the limit). For each n ≥ 0, there is a similar learning advantage, again in learning correction grammars for classes of c.e. languages, but where we compare learning correction grammars that make n+1 corrections to those that make n corrections.\n\nThe concept of a correction grammar can be extended into the constructive transfinite, using the idea of counting-down from notations for transfinite constructive ordinals. This transfinite extension can also be conceptualized as being about learning Ershov-descriptions for c.e. languages. For u a notation in Kleene's general system (O,< o) of ordinal notations for constructive ordinals, we introduce the concept of an u-correction grammar, where u is used to bound the number of corrections that the grammar is allowed to make. We prove a general hierarchy result: if u and v are notations for constructive ordinals such that u < o v, then there are classes of c.e. languages that can be TxtEx-learned by conjecturing v-correction grammars but not by conjecturing u-correction grammars.\n\nSurprisingly, we show that—above “ω-many” corrections—it is not possible to strengthen the hierarchy: TxtEx-learning u-correction grammars of classes of c.e. languages, where u is a notation in O for any ordinal, can be simulated by TxtBc-learning w-correction grammars, where w is any notation for the smallest infinite ordinal ω.\n\n#### Article information\n\nSource\nJ. Symbolic Logic, Volume 74, Issue 2 (2009), 489-516.\n\nDates\nFirst available in Project Euclid: 2 June 2009\n\nhttps://projecteuclid.org/euclid.jsl/1243948324\n\nDigital Object Identifier\ndoi:10.2178/jsl/1243948324\n\nMathematical Reviews number (MathSciNet)\nMR2518808\n\nZentralblatt MATH identifier\n1193.03067\n\n#### Citation\n\nCarlucci, Lorenzo; Case, John; Jain, Sanjay. Learning correction grammars. J. Symbolic Logic 74 (2009), no. 2, 489--516. doi:10.2178/jsl/1243948324. https://projecteuclid.org/euclid.jsl/1243948324" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8149649,"math_prob":0.5251777,"size":3046,"snap":"2019-43-2019-47","text_gpt3_token_len":720,"char_repetition_ratio":0.18244576,"word_repetition_ratio":0.02739726,"special_character_ratio":0.2275115,"punctuation_ratio":0.14209591,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9614395,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-14T09:53:43Z\",\"WARC-Record-ID\":\"<urn:uuid:23297676-5614-4ab6-b1e0-af2e5f50fbf0>\",\"Content-Length\":\"31084\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1eb640dd-9cda-402c-98a7-3379acddc87d>\",\"WARC-Concurrent-To\":\"<urn:uuid:a21b453a-3db3-4006-91bb-f521ba9a107c>\",\"WARC-IP-Address\":\"132.236.27.47\",\"WARC-Target-URI\":\"https://projecteuclid.org/euclid.jsl/1243948324\",\"WARC-Payload-Digest\":\"sha1:6IV4T5HDIOPND7BVQSNB4TGEXAVDTBEF\",\"WARC-Block-Digest\":\"sha1:SOYH7BSIX7OP7LIEH55SCGX4O7GPNBR6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668334.27_warc_CC-MAIN-20191114081021-20191114105021-00290.warc.gz\"}"}
https://forums.raywenderlich.com/t/ch-4-mathlibrary/57293
[ "", null, "# Ch 4: MathLibrary\n\n#1\n\nI think I found a bug in init(eye:, center:, up:) of MathLibrary.swift. It says\n\n``````let W = float4(w.x, w.y, x.z, 1)\n``````\n\nbut the third argument should be w.z rather than x.z.\n\nThis fix alone seems not address the look-at transformation, though…\n\nKen\n\n#2\n\n@kwakita - thank you for reporting this. I’ll look into this in the next few days.\n\n#3\n\n@kwakita - the `lookAt` matrix is back-to-front. It’s used only twice in the book I think, for shadows, and the code there compensates for it being back-to-front by negating the eye position before calling it.\n\nTry this `lookAt` matrix:\n\n``````init(eye: float3, center: float3, up: float3) {\nlet z = normalize(center - eye)\nlet x = normalize(cross(up, z))\nlet y = cross(z, x)\n\nlet X = float4(x.x, y.x, z.x, 0)\nlet Y = float4(x.y, y.y, z.y, 0)\nlet Z = float4(x.z, y.z, z.z, 0)\nlet W = float4(-dot(x, eye), -dot(y, eye), -dot(z, eye), 1)\n\nself.init()\ncolumns = (X, Y, Z, W)\n}\n``````\n\nFor further insight about `viewMatrix` and various camera matrices, check out: https://www.3dgep.com/understanding-the-view-matrix/ - although be aware that he is using a right handed coordinate system.\n\n(And while it’s not relevant here, OpenGL uses NDC coordinates of (-1, 1) on the z axis, whereas Metal uses (0, 1) on the z axis.)\n\n1 Like\n#4\n\n@caroline thanks for your support and for giving me a pointer to further note. I will try the code and come here back when I need more support.\n\nken" ]
[ null, "https://hulk.raywenderlich.com/original/3X/2/f/2f96ac4d01ba72e9b7bb6ee1b0589588bc391d9c.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8050018,"math_prob":0.96313417,"size":1371,"snap":"2019-26-2019-30","text_gpt3_token_len":415,"char_repetition_ratio":0.10021946,"word_repetition_ratio":0.3275862,"special_character_ratio":0.30561635,"punctuation_ratio":0.2367688,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9759739,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-19T17:15:55Z\",\"WARC-Record-ID\":\"<urn:uuid:3ef8f692-43d1-4f49-bf76-8e939838c893>\",\"Content-Length\":\"38224\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:40fd280a-a006-4aeb-8334-fa4d3ed6701d>\",\"WARC-Concurrent-To\":\"<urn:uuid:6ec2971c-ccdc-4e79-a00f-3da85f5ec3de>\",\"WARC-IP-Address\":\"52.44.64.215\",\"WARC-Target-URI\":\"https://forums.raywenderlich.com/t/ch-4-mathlibrary/57293\",\"WARC-Payload-Digest\":\"sha1:5EYPQXOME5X3DYBJ7OFLVXHQZOEHGYWQ\",\"WARC-Block-Digest\":\"sha1:FV7GI5E2LVZIUUVFBUDN2DXNBPKBDZXU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526324.57_warc_CC-MAIN-20190719161034-20190719183034-00099.warc.gz\"}"}
https://www.colorhexa.com/424342
[ "# #424342 Color Information\n\nIn a RGB color space, hex #424342 is composed of 25.9% red, 26.3% green and 25.9% blue. Whereas in a CMYK color space, it is composed of 1.5% cyan, 0% magenta, 1.5% yellow and 73.7% black. It has a hue angle of 120 degrees, a saturation of 0.8% and a lightness of 26.1%. #424342 color hex could be obtained by blending #848684 with #000000. Closest websafe color is: #333333.\n\n• R 26\n• G 26\n• B 26\nRGB color chart\n• C 1\n• M 0\n• Y 1\n• K 74\nCMYK color chart\n\n#424342 color description : Very dark grayish lime green.\n\n# #424342 Color Conversion\n\nThe hexadecimal color #424342 has RGB values of R:66, G:67, B:66 and CMYK values of C:0.01, M:0, Y:0.01, K:0.74. Its decimal value is 4342594.\n\nHex triplet RGB Decimal 424342 `#424342` 66, 67, 66 `rgb(66,67,66)` 25.9, 26.3, 25.9 `rgb(25.9%,26.3%,25.9%)` 1, 0, 1, 74 120°, 0.8, 26.1 `hsl(120,0.8%,26.1%)` 120°, 1.5, 26.3 333333 `#333333`\nCIE-LAB 28.29, -0.641, 0.456 5.237, 5.566, 5.952 0.313, 0.332, 5.566 28.29, 0.787, 144.57 28.29, -0.476, 0.609 23.592, -1.662, 1.555 01000010, 01000011, 01000010\n\n# Color Schemes with #424342\n\n• #424342\n``#424342` `rgb(66,67,66)``\n• #434243\n``#434243` `rgb(67,66,67)``\nComplementary Color\n• #434342\n``#434342` `rgb(67,67,66)``\n• #424342\n``#424342` `rgb(66,67,66)``\n• #424343\n``#424343` `rgb(66,67,67)``\nAnalogous Color\n• #434243\n``#434243` `rgb(67,66,67)``\n• #424342\n``#424342` `rgb(66,67,66)``\n• #434243\n``#434243` `rgb(67,66,67)``\nSplit Complementary Color\n• #434242\n``#434242` `rgb(67,66,66)``\n• #424342\n``#424342` `rgb(66,67,66)``\n• #424243\n``#424243` `rgb(66,66,67)``\nTriadic Color\n• #434342\n``#434342` `rgb(67,67,66)``\n• #424342\n``#424342` `rgb(66,67,66)``\n• #424243\n``#424243` `rgb(66,66,67)``\n• #434243\n``#434243` `rgb(67,66,67)``\nTetradic Color\n• #1c1c1c\n``#1c1c1c` `rgb(28,28,28)``\n• #292929\n``#292929` `rgb(41,41,41)``\n• #353635\n``#353635` `rgb(53,54,53)``\n• #424342\n``#424342` `rgb(66,67,66)``\n• #4f504f\n``#4f504f` `rgb(79,80,79)``\n• #5b5d5b\n``#5b5d5b` `rgb(91,93,91)``\n• #686a68\n``#686a68` `rgb(104,106,104)``\nMonochromatic Color\n\n# Alternatives to #424342\n\nBelow, you can see some colors close to #424342. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #424342\n``#424342` `rgb(66,67,66)``\n• #424342\n``#424342` `rgb(66,67,66)``\n• #424342\n``#424342` `rgb(66,67,66)``\n• #424342\n``#424342` `rgb(66,67,66)``\n• #424342\n``#424342` `rgb(66,67,66)``\n• #424342\n``#424342` `rgb(66,67,66)``\n• #424342\n``#424342` `rgb(66,67,66)``\nSimilar Colors\n\n# #424342 Preview\n\nText with hexadecimal color #424342\n\nThis text has a font color of #424342.\n\n``<span style=\"color:#424342;\">Text here</span>``\n#424342 background color\n\nThis paragraph has a background color of #424342.\n\n``<p style=\"background-color:#424342;\">Content here</p>``\n#424342 border color\n\nThis element has a border color of #424342.\n\n``<div style=\"border:1px solid #424342;\">Content here</div>``\nCSS codes\n``.text {color:#424342;}``\n``.background {background-color:#424342;}``\n``.border {border:1px solid #424342;}``\n\n# Shades and Tints of #424342\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #080808 is the darkest color, while #fdfdfd is the lightest one.\n\n• #080808\n``#080808` `rgb(8,8,8)``\n• #111211\n``#111211` `rgb(17,18,17)``\n• #1b1b1b\n``#1b1b1b` `rgb(27,27,27)``\n• #252525\n``#252525` `rgb(37,37,37)``\n• #2f2f2f\n``#2f2f2f` `rgb(47,47,47)``\n• #383938\n``#383938` `rgb(56,57,56)``\n• #424342\n``#424342` `rgb(66,67,66)``\n• #4c4d4c\n``#4c4d4c` `rgb(76,77,76)``\n• #555755\n``#555755` `rgb(85,87,85)``\n• #5f615f\n``#5f615f` `rgb(95,97,95)``\n• #696b69\n``#696b69` `rgb(105,107,105)``\n• #737473\n``#737473` `rgb(115,116,115)``\n• #7c7e7c\n``#7c7e7c` `rgb(124,126,124)``\nShade Color Variation\n• #868886\n``#868886` `rgb(134,136,134)``\n• #909290\n``#909290` `rgb(144,146,144)``\n• #9a9c9a\n``#9a9c9a` `rgb(154,156,154)``\n• #a4a5a4\n``#a4a5a4` `rgb(164,165,164)``\n• #aeafae\n``#aeafae` `rgb(174,175,174)``\n• #b8b9b8\n``#b8b9b8` `rgb(184,185,184)``\n• #c2c2c2\n``#c2c2c2` `rgb(194,194,194)``\n• #cbcccb\n``#cbcccb` `rgb(203,204,203)``\n• #d5d6d5\n``#d5d6d5` `rgb(213,214,213)``\n• #dfe0df\n``#dfe0df` `rgb(223,224,223)``\n• #e9e9e9\n``#e9e9e9` `rgb(233,233,233)``\n• #f3f3f3\n``#f3f3f3` `rgb(243,243,243)``\n• #fdfdfd\n``#fdfdfd` `rgb(253,253,253)``\nTint Color Variation\n\n# Tones of #424342\n\nA tone is produced by adding gray to any pure hue. In this case, #424342 is the less saturated color, while #058005 is the most saturated one.\n\n• #424342\n``#424342` `rgb(66,67,66)``\n• #3d483d\n``#3d483d` `rgb(61,72,61)``\n• #384d38\n``#384d38` `rgb(56,77,56)``\n• #335233\n``#335233` `rgb(51,82,51)``\n• #2e572e\n``#2e572e` `rgb(46,87,46)``\n• #285d28\n``#285d28` `rgb(40,93,40)``\n• #236223\n``#236223` `rgb(35,98,35)``\n• #1e671e\n``#1e671e` `rgb(30,103,30)``\n• #196c19\n``#196c19` `rgb(25,108,25)``\n• #147114\n``#147114` `rgb(20,113,20)``\n• #0f760f\n``#0f760f` `rgb(15,118,15)``\n• #0a7b0a\n``#0a7b0a` `rgb(10,123,10)``\n• #058005\n``#058005` `rgb(5,128,5)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #424342 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
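A small R sketch (our illustration, not part of the original page) that reproduces the RGB-to-HSL figures quoted above; `col2rgb` is base R, everything else is elementary arithmetic:

```r
# Convert a hex color to HSL (hue in degrees, saturation/lightness in percent).
hex_to_hsl <- function(hex) {
  rgb <- as.integer(col2rgb(hex)) / 255          # r, g, b scaled to [0, 1]
  mx <- max(rgb); mn <- min(rgb); d <- mx - mn
  l <- (mx + mn) / 2
  s <- if (d == 0) 0 else d / (1 - abs(2 * l - 1))
  h <- if (d == 0) 0 else switch(which.max(rgb),
    60 * (((rgb[2] - rgb[3]) / d) %% 6),         # max channel is red
    60 * ((rgb[3] - rgb[1]) / d + 2),            # max channel is green
    60 * ((rgb[1] - rgb[2]) / d + 4))            # max channel is blue
  c(h = h, s = 100 * s, l = 100 * l)
}

hex_to_hsl("#424342")  # approximately 120, 0.8, 26.1 - matching the page
```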
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.53250986,"math_prob":0.6913019,"size":3672,"snap":"2021-04-2021-17","text_gpt3_token_len":1607,"char_repetition_ratio":0.12459106,"word_repetition_ratio":0.0073664826,"special_character_ratio":0.5735294,"punctuation_ratio":0.23496659,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99314207,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-17T04:56:31Z\",\"WARC-Record-ID\":\"<urn:uuid:31c47cf1-e22f-4bad-9efd-230451cd235e>\",\"Content-Length\":\"36221\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5ff56877-3198-4460-a2b3-4be3dbab4864>\",\"WARC-Concurrent-To\":\"<urn:uuid:b0e14abc-3972-4937-a505-4097f6a518ae>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/424342\",\"WARC-Payload-Digest\":\"sha1:UATEEINJO274SRD3HHP3FL3NMME2WITO\",\"WARC-Block-Digest\":\"sha1:6NEJ4GW3F5XKZAZHHGTGCJQRXGWYVPU2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038101485.44_warc_CC-MAIN-20210417041730-20210417071730-00341.warc.gz\"}"}
https://www.teachoo.com/9377/2122/Ex-11.3--8/category/Ex-11.3/
[ "Ex 11.3\n\nChapter 11 Class 8 Mensuration\nSerial order wise", null, "", null, "### Transcript\n\nEx 11.3, 8 The lateral surface area of a hollow cylinder is 4224〖 𝑐𝑚〗^2. It is cut along its height and formed a rectangular sheet of width 33 cm. Find the perimeter of rectangular sheet? Given that Hollow cylinder is converted into a rectangular sheet So, Area of cylinder and sheet must be the same ∴ Curved Surface Area of hollow cylinder = Area of rectangular sheet 4224 = Length × Breadth 4224 = 𝑙 × 33 4224/33 = 𝑙 𝑙 = 4224/33 𝑙 = 128 cm Now, Perimeter of sheet = 2 × (Length + Breadth) = 2 × (128 + 33) = 2 × 161 = 322 cm", null, "" ]
[ null, "https://d1avenlh0i1xmr.cloudfront.net/621a526a-9483-482a-88c5-2611036cb1ff/slide22.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/e53edbfd-fadd-49db-bdfc-7605ef7eedda/slide23.jpg", null, "https://www.teachoo.com/static/misc/Davneet_Singh.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87784785,"math_prob":0.9981574,"size":653,"snap":"2022-05-2022-21","text_gpt3_token_len":218,"char_repetition_ratio":0.13867489,"word_repetition_ratio":0.061068702,"special_character_ratio":0.36906585,"punctuation_ratio":0.09701493,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996721,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T12:29:06Z\",\"WARC-Record-ID\":\"<urn:uuid:05576828-414a-4f3f-ad30-e31de85550ab>\",\"Content-Length\":\"141220\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0a379208-58f7-4680-ad46-cdc6b7dfda1e>\",\"WARC-Concurrent-To\":\"<urn:uuid:11f52ae1-5c19-4321-9040-6d0504ae37a7>\",\"WARC-IP-Address\":\"18.232.245.187\",\"WARC-Target-URI\":\"https://www.teachoo.com/9377/2122/Ex-11.3--8/category/Ex-11.3/\",\"WARC-Payload-Digest\":\"sha1:NWAPYW2D3J2YQCOP5CPT4BBRWIGYKCY7\",\"WARC-Block-Digest\":\"sha1:PLJIUVKTBO2BV2P2TTB6LYCRYNX63FB2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662510117.12_warc_CC-MAIN-20220516104933-20220516134933-00500.warc.gz\"}"}
https://nrich.maths.org/7104?part=index
[ "#### You may also like", null, "### Ball Bearings\n\nIf a is the radius of the axle, b the radius of each ball-bearing, and c the radius of the hub, why does the number of ball bearings n determine the ratio c/a? Find a formula for c/a in terms of n.", null, "### Overarch 2\n\nBricks are 20cm long and 10cm high. How high could an arch be built without mortar on a flat horizontal surface, to overhang by 1 metre? How big an overhang is it possible to make like this?", null, "### Cushion Ball\n\nThe shortest path between any two points on a snooker table is the straight line between them but what if the ball must bounce off one wall, or 2 walls, or 3 walls?\n\n# Population Dynamics - Part 2\n\n### Discrete Modelling\n\nWe often use discrete mathematics to model a population when time is modelled in discrete steps. This fits well with annual censuses of wildlife populations.\n\nSometimes populations are themselves discrete, such as:\n\n• Species with non-overlapping generations (eg. annual plants)\n• Species with pulsed reproductions (eg. many wildlife species in seasonal environments)\n\n### Geometric Growth\n\nThe population equation, $N_{t+1}=\\lambda N_t$ , from before means that over discrete intervals of time,$t_0, t_1, t_2, ...$, the rate of change in population size is proportional to the size of the population.\n\nWe first solve this equation: \\begin{align*} N_{t+1}&=\\lambda N_t \\\\ &=\\lambda \\lambda N_{t-1} \\\\& =...\\\\ &= \\lambda^{t+1} N_0 \\\\ \\Rightarrow N_t &=\\lambda^t N_0 \\end{align*} The population size will depend on the value of $\\lambda$\n\n• If $\\lambda> 1$ then exponential increase\n• If $\\lambda=1$ then stationary population\n• If $\\lambda< 1$ then exponential decrease\n\nQuestion:  If a population of owls increases by 40% in a year, what is the value of r and $\\lambda$ ?\n\nGiven there were initially 10 owls, what will the population size be in 75 days?  Can you plot this population growth?\n\n### Exponential Growth\n\nSome populations may grow continuously, without pulsed births and deaths (eg. humans). In these cases, time is a continuous smooth curve, so we use differential equations to represent this continuous model.\n\nUsing our discrete model from above: \\begin{align*} N_{t+\\Delta t}&=\\lambda^{\\Delta t} N_t =(1+r)^{\\Delta t}N(t)\\approx (1+r\\Delta t) N(t)\\\\ \\Rightarrow \\Delta N_t&\\approx r \\Delta t N_t \\\\ \\\\\\Rightarrow \\lim_{\\Delta t \\to 0} \\frac{\\Delta N(t)}{\\Delta t} &=\\frac {\\mathrm{d}N(t)}{\\mathrm{d}t}=rN(t) \\end{align*} Question:  Solve the equation, $\\frac {\\mathrm{d}N(t)}{\\mathrm{d}t}=rN(t)$ , using standard integrals, showing that the solution is $N(t)=N_0e^{rt}$.\n\nDifferent values of r determine the change in population size, as shown below.\n\n####", null, "Also note the connection between the discrete and continous solutions:  \\begin{align*} N_t =\\lambda^t N_0 &\\text{ and } N(t)=N_0 e^{rt} \\\\ \\Rightarrow \\lambda^t&=e^{rt} \\\\ \\lambda&=e^r \\\\ \\ln(\\lambda)&=r \\end{align*} Question:  Using the discrete model above, how long does it take for this population to double in size? What about the continous case?\n\n### Limitations of the Models\n\nConsider a population of insects which suddenly dies out right before the start of every time period, and whose children hatch right after. 
A discrete model would lead us to believe that there are no insects during the entire period, so instead we should use a continuous model.\n\nOn the other hand, it is often impossible to continually monitor the population size, so we approximate using the discrete case.\n\nChoosing which of discrete or continuous to use is an important decision in modelling populations.\n\nCan you also think of any assumptions we have made with these models, and why they could be a problem? Consider the environment the population inhabits and differences between members of the population.\n\nQuestion: If $\lambda = 1.25$, by how much does a population of blue-footed boobies increase per year?\nThe population N(t) of blue-footed boobies is assumed to satisfy the logistic growth equation $\frac {\mathrm{d}N}{\mathrm{d}t}=\frac{1}{500} N(t) \big( 1-N(t)\big)$. Given $N_0=200$, solve for N(t). Repeat for $N_0=2000$. Discuss the long-term behaviour of the population in both cases." ]
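A worked sketch for the doubling-time question above (a standard manipulation, added here for clarity): setting the population equal to twice its initial size in each model gives \begin{align*} \lambda^t N_0 = 2N_0 &\Rightarrow t=\frac{\ln 2}{\ln \lambda} \\ N_0 e^{rt} = 2N_0 &\Rightarrow t=\frac{\ln 2}{r} \end{align*} and since $r=\ln(\lambda)$, the two models agree on the doubling time.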
[ null, "https://nrich.maths.org/content/00/01/15plus1/icon.jpg", null, "https://nrich.maths.org/content/00/01/15plus5/icon.jpg", null, "https://nrich.maths.org/content/00/03/15plus4/icon.jpg", null, "https://nrich.maths.org/content/id/7104/Exponential%20Num%201.JPG", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84934783,"math_prob":0.99776024,"size":3170,"snap":"2022-40-2023-06","text_gpt3_token_len":830,"char_repetition_ratio":0.155717,"word_repetition_ratio":0.0,"special_character_ratio":0.2719243,"punctuation_ratio":0.10779436,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99970603,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T01:49:26Z\",\"WARC-Record-ID\":\"<urn:uuid:8e007b89-5bb5-418a-ab3f-3e2a893ab1a5>\",\"Content-Length\":\"15448\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7d3c1c7a-02b0-4663-a378-08fec2b81b1d>\",\"WARC-Concurrent-To\":\"<urn:uuid:a4183091-b3c7-42f1-8fde-96a944b59dc7>\",\"WARC-IP-Address\":\"131.111.18.195\",\"WARC-Target-URI\":\"https://nrich.maths.org/7104?part=index\",\"WARC-Payload-Digest\":\"sha1:WUMPQBM6FC35C6EUZWNKEPN4WOMYMACQ\",\"WARC-Block-Digest\":\"sha1:YBUGE7BS3K5TXZ53FSDFL6UIDTOTKOXW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499697.75_warc_CC-MAIN-20230129012420-20230129042420-00519.warc.gz\"}"}
http://www.percentagecal.com/answer/what-is-10-percent-of-39036
[ "#### Solution for What is 10 percent of 39036:\n\n10 percent *39036 =\n\n(10:100)*39036 =\n\n(10*39036):100 =\n\n390360:100 = 3903.6\n\nNow we have: 10 percent of 39036 = 3903.6\n\nQuestion: What is 10 percent of 39036?\n\nPercentage solution with steps:\n\nStep 1: Our output value is 39036.\n\nStep 2: We represent the unknown value with {x}.\n\nStep 3: From step 1 above,{39036}={100\\%}.\n\nStep 4: Similarly, {x}={10\\%}.\n\nStep 5: This results in a pair of simple equations:\n\n{39036}={100\\%}(1).\n\n{x}={10\\%}(2).\n\nStep 6: By dividing equation 1 by equation 2 and noting that both the RHS (right hand side) of both\nequations have the same unit (%); we have\n\n\\frac{39036}{x}=\\frac{100\\%}{10\\%}\n\nStep 7: Again, the reciprocal of both sides gives\n\n\\frac{x}{39036}=\\frac{10}{100}\n\n\\Rightarrow{x} = {3903.6}\n\nTherefore, {10\\%} of {39036} is {3903.6}\n\n#### Solution for What is 39036 percent of 10:\n\n39036 percent *10 =\n\n(39036:100)*10 =\n\n(39036*10):100 =\n\n390360:100 = 3903.6\n\nNow we have: 39036 percent of 10 = 3903.6\n\nQuestion: What is 39036 percent of 10?\n\nPercentage solution with steps:\n\nStep 1: Our output value is 10.\n\nStep 2: We represent the unknown value with {x}.\n\nStep 3: From step 1 above,{10}={100\\%}.\n\nStep 4: Similarly, {x}={39036\\%}.\n\nStep 5: This results in a pair of simple equations:\n\n{10}={100\\%}(1).\n\n{x}={39036\\%}(2).\n\nStep 6: By dividing equation 1 by equation 2 and noting that both the RHS (right hand side) of both\nequations have the same unit (%); we have\n\n\\frac{10}{x}=\\frac{100\\%}{39036\\%}\n\nStep 7: Again, the reciprocal of both sides gives\n\n\\frac{x}{10}=\\frac{39036}{100}\n\n\\Rightarrow{x} = {3903.6}\n\nTherefore, {39036\\%} of {10} is {3903.6}\n\nCalculation Samples" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7842559,"math_prob":0.9996056,"size":1645,"snap":"2019-26-2019-30","text_gpt3_token_len":580,"char_repetition_ratio":0.15539305,"word_repetition_ratio":0.4606299,"special_character_ratio":0.47902736,"punctuation_ratio":0.17451523,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99994314,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-19T12:47:26Z\",\"WARC-Record-ID\":\"<urn:uuid:9f2f7783-fb48-4d71-a396-09711e163e34>\",\"Content-Length\":\"10070\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:23e7e52c-f5ce-483f-bebd-6c18ccebbff3>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b215f28-0ac8-4a07-b04b-64612ad39fb6>\",\"WARC-IP-Address\":\"217.23.5.136\",\"WARC-Target-URI\":\"http://www.percentagecal.com/answer/what-is-10-percent-of-39036\",\"WARC-Payload-Digest\":\"sha1:CVGBZYOEWIGKXDQFDBGJFS5U2T6PKMFP\",\"WARC-Block-Digest\":\"sha1:UR6T7Z45REXOCHUHTV6YFOXWFS4YTE5P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526237.47_warc_CC-MAIN-20190719115720-20190719141720-00178.warc.gz\"}"}
https://www.hindawi.com/journals/mpe/2019/8260563/
[ "#### Abstract\n\nThe concept of aligning reinforcing fibers in arbitrary directions offers a new perception of exploiting the anisotropic characteristic of the carbon fiber-reinforced polymer (CFRP) composites. Complementary to the design concept of multiaxial composites, a laminate reinforced with curvilinear fibers is called variable-axial (also known as variable stiffness and variable angle tow). The Tailored Fiber Placement (TFP) technology is well capable of manufacturing textile preforming with a variable-axial fiber design by using adapted embroidery machines. This work introduces a novel concept for simulation and optimization of curvilinear fiber-reinforced composites, where the novelty relies on the local optimization of both fiber angle and intrinsic thickness build-up concomitantly. This framework is called Direct Fiber Path Optimization (DFPO). Besides the description of DFPO, its capabilities are exemplified by optimizing a CFRP open-hole tensile specimen. Key results show a clear improvement compared to the current often used approach of applying principal stress trajectories for a variable-axial reinforcement pattern.\n\n#### 1. Introduction\n\nRecently, the demand for energy efficient systems leveraged the use of CFRP lightweight composites in structural components. These materials are increasingly being employed in aeronautical, aerospace, and automotive applications. Due to the high cost of carbon fibers, their efficient usage becomes essential . By employing a variable-axial (VA) fiber design, stiffness and strength properties may be improved when comparing to classical CFRP designs . Thereby, the term VA means varying the fiber orientation at the ply level. The desired performance of CFRP composites is achieved by guiding the loads almost exclusively along the fiber orientation and thus minimizing the shear load of the matrix. For a technical realization, TFP technology, which was developed at Leibniz-Institut für Polymerforschung Dresden (Germany), is well suited. Basics and some applications of TFP technology are described in [3, 4]. The placement of carbon fibers is usually carried out by stitching dry rovings, as shown in Figure 1. The roving is guided through a rotatable roving pipe onto a base material, where a sewing thread applied in the zig-zag-pattern holds it in place.\n\nSeveral approaches have been developed to optimize VA composites. An extensive overview of curvilinear fiber-reinforced composites was recently performed by Ribeiro et al. . Under the name variable angle tow steering, Weaver et al. improved the postbuckling performance of composite panels with a VA layout, whereas Panesar and Weaver optimized blended bistable laminates suitable for morphing flap applications. Duvaut et al. implemented a varying fiber density in order to consider local stress intensity. For a similar purpose, the local layer thickness was varied by Parnas et al. as an additional design parameter. Groh and Weaver proposed a minimum-mass design of a typical aircraft wing panel under end-compression. Khani et al. developed a mathematical optimization algorithm for variable stiffness panels using lamination parameters. Van Campen et al. proposed a methodology to convert known lamination parameters distribution for a VA composite laminate into realistic fiber angles, with minimum loss of structural performance. 
Cho and Rowlands reduced stress concentrations in an open-hole laminate with a genetic algorithm.\n\nIn contrast to optimization procedures, the principal stress criterion has often been used for deriving curvilinear fiber paths. Kelly et al., Waldmann et al., and Malakhov and Polilov designed the curvilinear fiber path based on the concept of aligning fibers with the load path by placing fibers along principal stresses. Both approaches were assumed to be optimization criteria, although no mathematical optimization process was explicitly carried out and only an a priori design criterion was applied. Furthermore, gradient-based numerical optimization processes [9, 21–23] and optimization approaches based on evolutionary algorithms to design VA composites have also been developed.\n\nHowever, due to the often chosen approach of varying the angles of single quasi-isotropic finite elements (FE), the number of design variables is considerably high, which also increases computational costs. Therefore, most examples have been computed using FE models with a limited number of finite elements. Increased design freedom causes increased design problem complexity. For example, Conti et al. found that using fiber angles as design variables inevitably leads to an ill-behaved objective function with many local minima.\n\nUsually, a VA fiber pattern implies a varying density of fibers, which causes a nonuniform thickness of the dry preforms. This heterogeneous thickness build-up is extremely complex to account for in the analysis. Given this complexity, current approaches neglect the thickness accumulation and consider only the fiber angle variation.\n\nThus, the major criticism of many state-of-the-art optimization approaches is that, for lack of an appropriate modeling procedure, the mathematical optimization cannot begin or operate with the necessary information on the manufacturing process. However, knowledge of the thickness distribution and the local fiber orientation corresponding to an arbitrary fiber layout, as produced by TFP, is essential for the part design process. Spickenheuer et al. [1, 29] and Albers et al. made initial attempts to separate the optimization process of a curvilinear fiber-reinforced composite manufactured via TFP from the actual numerical models in order to limit the number of required design variables, making them independent of the applied FE mesh resolution. Thus, once a sufficiently accurate modeling of VA fiber layouts is established, optimization techniques can be applied to the fiber pattern. This allows fully utilizing the high degrees of freedom in the design process and maximizing the anisotropic material characteristics of CFRPs.\n\nGiven the identified gaps in the current state-of-the-art in properly modeling VA composites, this work presents a novel design procedure, illustrating its capability by generating a VA pattern for an open-hole tensile specimen, where an optimal fiber pattern cannot be easily derived. Hence, the novel optimization approach for VA composites, called Direct Fiber Path Optimization (DFPO), will be introduced and numerically evaluated on the example of an open-hole tensile specimen.\n\n#### 2. Finite Element Modeling\n\n##### 2.1. Model Setup\n\nAccording to the state-of-the-art, modeling of composite structures is mostly limited to stacking layers with a constant thickness and a constant fiber angle within each layer. 
Models for structural analysis of uniform spirals and single curved tapes of parallel fibers and constant thickness have additionally been applied successfully, at the cost of increased modeling effort. In this case, an analytic description for local preform thickness and fiber orientation is known, which can be used to build appropriate FE models for structural simulation. However, the existing model limitations are too strong if one plans to apply optimization strategies to fully exploit the potential of CFRP manufactured by TFP and to adapt to production requirements. Although there are many approaches that employ the mathematical description of the optimization algorithm to deduce the numerical model, e.g., the geometry of each iteration step, this work describes the modeling independent of the optimization and thus as a generic module for any VA structure with a similar placement characteristic, strictly following the manufacturing characteristics of TFP. The objective of the modeling is the elastic description of the laminate compliance and the prediction of initial failures based on a physically based failure criterion. A mesoscaled model is used to evaluate the specific properties of the TFP process, following the recommendations raised by Uhlig et al.\n\nTo generate continuous layers with TFP, the rovings have to be placed with a slight overlap to avoid gaps between them. If placed with constant thickness and fiber orientation, parallel laminates can be produced for a small range of distances between neighboring rovings. The thickness is calculated according to the following (see Figure 2):\n\n$$t = \frac{A_R}{d} = \frac{T}{\rho\,\varphi\,d},$$\n\nwhere $A_R$ is the roving cross-section area, $d$ the distance between neighboring rovings, $T$ the roving fineness, $\varphi$ the fiber volume fraction, and $\rho$ the fiber density.\n\nFor arbitrary nonparallel roving placement, the thickness evaluation becomes more complex. As a starting point for the analytical description of such a preform, the placement path is used, which is the basis for the fiber placement with a TFP machine. This path, or more generally a sequence of paths, will be referenced as the design pattern. The simplest mathematical description is a sequence of straight lines in two dimensions. Curved placement paths, e.g., containing primitives such as arcs or splines, will be approximated with a sequence of short straight lines within the accuracy of production. A straight line is defined by the starting point $\mathbf{r}_1$ and end point $\mathbf{r}_2$ inside the placement plane. All points in between are described by\n\n$$\mathbf{r}(s) = (1-s)\,\mathbf{r}_1 + s\,\mathbf{r}_2,$$\n\nwhere $s$ is the parameterization variable ranging from zero to one. Note that either the total sequence of straight lines can be connected, in the case that there is just one fiber path, or at least some succeeding lines are not connected, which represents completely separate fiber paths, as can be seen in Figure 3.\n\nThis type of design pattern contains only the information of the fiber placement paths, including the length of rovings, but no information about the width or cross-section area. A formal extension of the straight path information combined with the cross-section area is the line thickness distribution $t_L$ for line segments $\mathbf{r}_i(s)$ with length $l_i$:\n\n$$t_L(\mathbf{r}) = A_R \sum_i l_i \int_0^1 \delta^{(2)}\big(\mathbf{r} - \mathbf{r}_i(s)\big)\,\mathrm{d}s.$$\n\nHere, the concept of the Dirac delta distribution was used to define the density function. This line thickness distribution function is only an intermediate step, as the total fiber volume is concentrated along the infinitely thin lines and an infinitely high thickness is obtained on the lines and zero elsewhere. However, the function already fulfills the normalization condition. By integrating over the total design space, or any area containing all line segments, the total fiber volume is obtained:\n\n$$\int t_L(\mathbf{r})\,\mathrm{d}^2r = A_R \sum_i l_i = V_{\mathrm{total}}.$$\n\nFor practical purposes this thickness distribution is not very useful, as it lacks the information about the width of a typical roving which is placed by an embroidery machine. This width usually depends on the type of rovings used, e.g., the number of filaments and material density, and, most importantly, on a machine parameter, the width of the zigzag stitch used to fix the roving on the base material.\n\nBy convolution with different smoothing functions, the information about the width of the roving can be added. A very convenient approach is coarse-graining by convolution with a Gaussian of width $\sigma$ determined by the placement width, to obtain the Gaussian thickness distribution\n\n$$t_G(\mathbf{r}) = \frac{1}{2\pi\sigma^2}\int t_L(\mathbf{r}')\, e^{-|\mathbf{r}-\mathbf{r}'|^2/(2\sigma^2)}\,\mathrm{d}^2r'.$$\n\nThe coarse-graining is done by integrating over the whole plane of $\mathbf{r}'$. By using the definition of the line thickness distribution $t_L$, a solution for this convolution can be expressed in terms of error functions. This makes a numerical implementation very fast. This Gaussian thickness distribution represents a Gaussian-weighted average of the roving volume density in the area around the point at which the thickness needs to be computed. For single straight rovings, this thickness distribution leads to a Gaussian cross-section area, which roughly approximates the real cross-section areas for the TFP process, as shown in microsections by Uhlig et al. Other smoothing functions such as a cylindrical average approximate the cross-section of a single roving to a closer degree. However, the resulting laminates exhibit many discontinuities, which negatively influence the convergence of the modeling. With the Gaussian thickness distribution, the laminate boundary needs to be defined by a cut-off thickness, as the Gaussian is nowhere exactly zero.\n\nThe main challenge for numerical modeling is to obtain the geometry and the fiber orientation based on the placement pattern of a single layer. Successive layers are stacked on top of each other without regard to draping behavior, which is fine as long as thickness gradients of the lower layers are small enough. The description is restricted to layers of noncrossing rovings, or at least to roving placements where overlapping rovings cross at small angles, such that an element-wise average of fiber orientation is meaningful. Note that, for many examples which contain a self-crossing roving path, the layer can be split into smaller layers with noncrossing rovings. In Figure 4, a schematic description of the modeling procedure is shown. Based on a two-dimensional (2D) mesh of the planar design space, a three-dimensional finite element model is derived using localized information of the Gaussian thickness distribution and the averaged fiber orientation. The thickness is evaluated at each corner node and the fiber orientation at the center of each element. The fiber orientation is well defined for linear line segments. The elemental fiber orientation is averaged by a thickness-weighted average of all linear line segments which contribute to the total thickness at the center point of each element. 
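The closed-form error-function evaluation mentioned above can be made concrete with a minimal sketch (our illustration, not the authors' implementation; the function name and derivation are our own, based on the stated Gaussian kernel):

```r
# Gaussian-smoothed thickness contribution of one straight roving segment
# (p0 -> p1) at a query point (x, y); A_r is the roving cross-section area
# and sigma the smoothing width of the Gaussian kernel.
segment_thickness <- function(x, y, p0, p1, A_r, sigma) {
  d <- p1 - p0
  L <- sqrt(sum(d^2))                              # segment length
  e <- d / L                                       # unit tangent vector
  u <-  (x - p0[1]) * e[1] + (y - p0[2]) * e[2]    # coordinate along the segment
  v <- -(x - p0[1]) * e[2] + (y - p0[2]) * e[1]    # signed perpendicular distance
  # Gaussian profile across the roving times a smoothed "box" along it;
  # pnorm supplies the error-function terms of the closed-form convolution.
  A_r * dnorm(v, sd = sigma) * (pnorm(u / sigma) - pnorm((u - L) / sigma))
}
```

Summing this contribution over all segments of a layout yields the Gaussian thickness distribution at the mesh nodes; integrating the sum over the plane recovers the total roving volume, as required by the normalization condition above.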
Alternatively, the thickness and fiber angle can be combined at the center of the FE into a 2D layered shell element description to obtain a model for the same fiber layout with less computational cost. The main difference arises from neglecting the out-of-plane component of the fiber orientation and thickness gradients within an element.\n\nNext, two numerical examples are considered using DFPO. For both cases, the following parameters are used: fiber volume fraction $\varphi$ of 58%; roving fineness $T$ of 400 tex; fiber density $\rho$ of 1.76 g/cm³; and width smoothing parameter $\sigma$ of 1 mm.\n\n##### 2.2. Case 1: Open-Hole Tensile Specimen\n\nAn open-hole specimen under tensile loading is chosen to demonstrate the modeling capabilities and the optimization of VA laminates by employing the DFPO approach. The specimen geometry and dimensions are presented in Figure 5(a). In order to directly evaluate the capabilities of the proposed optimization framework, the specimen comprises two layers, achieved by stacking a carbon fiber TFP layer (the layer to be optimized) on top of the base material (±45° woven fabric with an areal weight of 256 g/m²). Figure 6 shows in detail the two-layer open-hole specimen under study.\n\nBased on a 2D meshing of the supporting plane, the local thickness is evaluated at each node for each laminate layer along the FE mesh, as shown in the blue color scale (Figure 7(b)). In addition, the elemental fiber orientation (Figure 7(a)) is set as the averaged fiber orientation at the center of each element. In areas where the currently considered fiber pattern places no rovings, the thickness computation yields effectively zero. However, to provide a continuous mesh in this case, a very small thickness of 0.001 mm is set at the corresponding nodes, and the corresponding element material properties are set to resin properties (blue elements in Figure 6). The FE model additionally incorporates at the bottom of the laminate a layer of constant thickness (0.24 mm) of base material, as Figure 6 depicts.\n\nSymmetrical boundary conditions are applied along all axes of the specimen. The load is applied at the top edge of the specimen. These details can be seen in Figure 6. Finite element simulations are carried out in ANSYS APDL using quadratic SOLID186 and linear SOLID185 elements (ANSYS library reference).\n\n##### 2.3. Case 2: Narrow-Middle Tensile Specimen\n\nIn order to provide another example of the applicability of the proposed DFPO framework, a sample under the same loading conditions has been considered. For that, a narrow-middle specimen under tensile loading is analyzed and optimized. Details on the geometry and dimensions of the narrow-middle tensile specimen are shown in Figure 5(b). In order to evaluate the capabilities of DFPO, similarly to the open-hole specimen, the sample consists of two layers, attained by stacking a carbon fiber TFP layer (the layer to be optimized) on top of the base material (±45° carbon fiber woven fabric with an areal weight of 256 g/m²). The material properties of both the UD carbon fiber/epoxy TFP layer and the carbon fiber/epoxy woven fabric laminated composites used in the FE models and optimizations are presented in Table 1.\n\n#### 3. Optimization Process\n\nThe optimization problem for the fiber path is described by the minimization of an objective function, here the compliance, whose minimization analogously stands for stiffness maximization, under variation of each roving placement path. 
Within the context of the actual optimization, two quantities are to be minimized: $\max(u_y) \rightarrow \min$ and $\max(\mathrm{MIA}) \rightarrow \min$, where minimizing the maximum of the displacement $u_y$ in y-direction is the objective function for stiffness optimization, whereas minimization of the maximum of MIA (mode interaction parameter) is the objective function for strength optimization. This MIA parameter is related to the physically based failure mode concept developed by Cuntze. With this criterion, it is possible to distinguish several failure modes, namely, tension- and compression-induced failure modes for fiber failure and compression-, tension-, and shear-induced inter-fiber-failure modes. Cuntze's Failure Mode Concept (FMC) is based on stress and strength quantities, which means that MIA (the failure parameter) is calculated from the stress state of the laminate at each iteration of the analysis. In other words, if $\mathrm{MIA} \geq 1$, the laminate fails; analogously, if $\mathrm{MIA} < 1$, the laminate is safe. Additionally, all failure modes can be combined into a single numerical value suitable for optimization with the mode interaction (MIA) quantity. Since the whole formulation of Cuntze's FMC is very extensive, its full description can be found in the original publication.\n\nMathematically, the dimensionality of the optimization problem of even a single roving path is infinite. However, due to limited production accuracy, the placement path can be modeled using a finite set of parameters within some placement path representation.\n\nThe optimization flowchart is implemented and presented in Figure 8. The parameterized fiber layout is represented by a finite set of coefficients, e.g., spline control points. The 2D fiber path is computed, which in turn is analyzed by the 3D modeling tool to generate the finite element model. The local thickness and fiber orientation are taken into account. Loads and boundary conditions are applied, and then the model is solved. Based on this solution, the target optimization value (compliance minimization or stiffness maximization) is derived. The optimization value is the sole input value for gradient-free optimization algorithms, such as BOBYQA (Bound Optimization BY Quadratic Approximation) by Powell, which can modify the fiber path parameters within predefined boundaries to achieve a minimal displacement value. As long as no gradients are derived, only gradient-free optimization algorithms can be used. BOBYQA provides a fast-converging algorithm for smooth optimization functions due to its quadratic approximation, while also implementing box constraints that can be used to restrict the fiber pattern to within reasonable locations. Details on the optimization parameters are given in Section 3.2. In general, other optimization values, such as failure stress, can be applied. However, the convergence to overall good solutions is much better for stiffness optimization in comparison to strength optimization. Thus, for a strength optimization, a stiffness-optimized layout is used as an initial layout.\n\n##### 3.1. Convergence Study\n\nFor the use in optimization procedures, the numerical model must be sufficiently stable and free of mesh dependence, since otherwise numerical fluctuations lead to nonconverging behavior in the optimization algorithm.\n\nFor layers that fully cover the design space such that neighboring rovings overlap, both objective quantities, i.e., the maximum displacement in y-direction (Figure 9(a)) and the maximum MIA (Figure 9(b)), converge or stabilize with an increasing number of elements, as Figure 9 depicts. 
Regarding stiffness optimization (Figure 9(a)), the FE model composed of quadratic elements (SOLID 186) easily converges for any element size, whereas the FE model with linear elements (SOLID 185) converges well with a minimum number of 20,000 elements. On the other hand, for strength optimization (Figure 9(b)), both element types converge more slowly, but at a mesh density of 200,000 elements, the FE model converges for both linear and quadratic elements. In this way, for the stiffness objective function, the mesh with 20,000 elements has been employed in all further optimization and FE analyses.\n\nThe convergence is only achieved if the boundaries of the rovings overlap the previous and next rovings, thus forming continuous layers without gaps. If the rovings do not fill the whole mesh, the base mesh elements need to be aligned along the bounding contour of the fiber layers to allow a realistic material description per element.\n\n##### 3.2. Open-Hole and Narrow-Middle Specimens Optimization\n\nIn this section, the parameterization of the fiber layout is described in more detail. For both examples, only the 0° layer is optimized. However, in general, multiple layers can be parameterized in a similar way, and the collective parameter sets are combined to form a single optimization parameter vector. A basis or an initial fiber layout is chosen, and the parameterization describes only modifications of this layout. For the 0° layer of both examples, a layout of equidistant straight and parallel fibers is chosen as an initial layout. Deviations from this layout are restricted to shifts in x-direction (see coordinate system in Figure 6), which limits possible layouts to angles of less than 90° between fiber orientation and the load, which is parallel to the y-direction. In addition, closed loops cannot be described with such an approach. The angle limitation is useful especially if multiple layers are considered, where fiber layers are assigned to specific "tasks", which should not be exchanged between layers during the optimization. (Closed loops and abruptly ending fibers within the part are also impractical for production with TFP.) Similar to Nagendra et al., the fiber path is modeled based on 2D cubic B-splines. However, only deviations from the initial path, the straight and parallel fiber layout in this case, are described with the spline functions. The x-coordinate of each placement path is written as a sum of cubic B-spline basis functions weighted by control points, which constitute the optimization parameters; two linear scaling factors determine the total length scale. An equidistant set of offsets defines the different rovings next to each other in x-direction, and the total set of curves for each roving path along the y-direction is obtained by varying this offset. For vanishing control points, the initial layout with straight fibers is obtained. By fixing the control points to zero near both ends of the path, the boundary conditions of equidistant rovings in the clamping area with a smooth transition can be fulfilled. The demand for smooth rovings also at the symmetry line leads to additional restrictions on the control points there. The optimization parameters for both examples are 16 independent control points at the beginning, increasing up to 112, obtained by node insertion after the BOBYQA algorithm converges for a lower resolution. In principle, the BOBYQA algorithm converges even for larger numbers of optimization parameters, up to several hundred. However, the manufacturing precision limits any meaningful increase of the resolution. 
The optimization is considered to be converged if the control points do not change by more than 0.005 mm between successive iterations. The initial resolution of 16 parameters converges in about 60 iterations and takes about 10 min on a typical workstation.\n\n#### 4. Results and Discussion\n\nFigure 10 shows the various layouts of the open-hole specimen. The reference layout with equidistant and parallel fibers is given in Figure 10(a), the stiffness optimization result in Figure 10(b), and, for comparison, the result of a principal stress orientation of fibers in Figure 10(c). The optimization results provide a different solution when compared to previously optimized fiber patterns for open-hole tensile specimens, as can be seen in [9, 14, 28], where the principal stress criterion was employed. Not surprisingly, DFPO achieves greater improvement than those approaches. The disturbance of fibers reaches much farther away from the hole, such that globally straighter fibers with overall similar length are obtained.\n\nIn addition to the open-hole specimen, another example is provided to demonstrate the potential of the DFPO framework. Here, a tensile specimen with a narrow section in the middle is considered, where the ratio of the narrow section to the full width is 50%. Due to the smooth transition region of the narrowed section, the principal stress layout (Figure 11(c)) works very well in this case, and more fiber rovings deviate from the straight path (Figure 11(a)). The DFPO solution is qualitatively similar to the open-hole solution but with stronger fiber concentration due to the stronger narrowing of the defect (Figure 11(b)). In addition, in this case, the effect of the optimization using DFPO is much more "global" compared to the principal stress layout.\n\nFigure 12 presents the stiffness and strength increase of the optimized fiber layouts relative to the reference design (Figure 10(a)). The principal stress oriented layout (Figure 10(c)) yields a 5% increase in stiffness (Figure 12(a)) (20% for the second example) and about a 139% increase in strength in terms of the Cuntze fiber failure mode interaction max(MIA) (Figure 12(b)) (237% for the second example), whereas the DFPO-optimized layout (Figure 10(b)) results in about a 9% increase in stiffness (25% for the second example) and a 197% increase in strength (275% for the second example). Please note that the boundary conditions of the optimizations were such that the total number of rovings next to each other was fixed, and thus the volume and mass change for different fiber layouts. However, the increase in volume of 0.6% for principal stress and 1.6% for DFPO (5.9% and 6.3%, respectively, for the second example) is smaller than the gain in both stiffness and strength.\n\nIn contrast to the principal stress design, DFPO represents a real optimization procedure and consequently takes global and not just local features of the specimen into account. The thickness distribution is nonuniform in both cases, and a thickness concentration near the defect of the structure is observed. In the DFPO case this thickness concentration extends further from the defect area than in the principal stress layout. The fiber length of single rovings is much more uniform for the DFPO-optimized layout, such that the load balance of all rovings under tensile load is better. 
Compared to other optimization techniques, where elemental fiber orientations and thickness values are optimized without the correlations induced by endless fibers, in DFPO each fiber layout considered in every optimization iteration is already manufacturable, and no subsequent adaptation is necessary. Thus, the gains obtained by the optimization can be fully transferred to the application.\n\n#### 5. Conclusions\n\nThe key objective of this investigation was to present a novel methodology, called Direct Fiber Path Optimization (DFPO), for optimizing the fiber path of a variable-axial fiber reinforcement design. The main achievement is the local optimization of both fiber angle and thickness at each finite element along the base mesh in order to reach a global optimum. DFPO demonstrated its capabilities on the optimization of both open-hole and narrow-middle examples under uniaxial tension. For both cases, the results show a clear increase in both stiffness and strength compared to a reference design with equidistant, straight, parallel fibers, as well as compared to the principal stress oriented layouts.\n\n#### Data Availability\n\nThe data used to support the findings of this study are available from the corresponding and first authors ([email protected]) upon request.\n\n#### Conflicts of Interest\n\nThe authors declare that they have no conflicts of interest.\n\n#### Acknowledgments\n\nThe authors would like to thank K. Uhlig for fruitful discussions and E. Richter (both from IPF-Dresden) for his support with the figures. The financial support of DFG grants HE 4466/29-1 and KR 1713/19-1 is also gratefully acknowledged; José Humberto S. Almeida Jr. acknowledges the CAPES and Alexander von Humboldt Foundations for their financial support." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.877657,"math_prob":0.91929805,"size":38001,"snap":"2023-40-2023-50","text_gpt3_token_len":8199,"char_repetition_ratio":0.14688002,"word_repetition_ratio":0.040759448,"special_character_ratio":0.2079682,"punctuation_ratio":0.1459085,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97072685,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T19:58:20Z\",\"WARC-Record-ID\":\"<urn:uuid:7059858c-b333-4986-983a-ccc2b79ed108>\",\"Content-Length\":\"492709\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6346b7be-3ee7-4e96-8edd-a8ecf6cebb02>\",\"WARC-Concurrent-To\":\"<urn:uuid:6eadfdba-69b0-4768-be68-cb7b67634dc8>\",\"WARC-IP-Address\":\"104.18.40.243\",\"WARC-Target-URI\":\"https://www.hindawi.com/journals/mpe/2019/8260563/\",\"WARC-Payload-Digest\":\"sha1:NLRZ4NEMLGK3K42J4Z5DBE3HE5SDYYSW\",\"WARC-Block-Digest\":\"sha1:J2T5YQ75SNZOTBSPIODV3VGHZPGMXYCZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100304.52_warc_CC-MAIN-20231201183432-20231201213432-00185.warc.gz\"}"}
https://openstax.org/books/college-physics/pages/21-5-null-measurements
[ "College Physics\n\n# 21.5Null Measurements\n\nCollege Physics21.5 Null Measurements\n\nStandard measurements of voltage and current alter the circuit being measured, introducing uncertainties in the measurements. Voltmeters draw some extra current, whereas ammeters reduce current flow. Null measurements balance voltages so that there is no current flowing through the measuring device and, therefore, no alteration of the circuit being measured.\n\nNull measurements are generally more accurate but are also more complex than the use of standard voltmeters and ammeters, and they still have limits to their precision. In this module, we shall consider a few specific types of null measurements, because they are common and interesting, and they further illuminate principles of electric circuits.\n\n### The Potentiometer\n\nSuppose you wish to measure the emf of a battery. Consider what happens if you connect the battery directly to a standard voltmeter as shown in Figure 21.34. (Once we note the problems with this measurement, we will examine a null measurement that improves accuracy.) As discussed before, the actual quantity measured is the terminal voltage $VV size 12{V} {}$, which is related to the emf of the battery by $V=emf−IrV=emf−Ir size 12{V=\"emf\" - ital \"Ir\"} {}$, where $II size 12{I} {}$ is the current that flows and $rr size 12{r} {}$ is the internal resistance of the battery.\n\nThe emf could be accurately calculated if $rr size 12{r} {}$ were very accurately known, but it is usually not. If the current $II size 12{I} {}$ could be made zero, then $V=emfV=emf size 12{V=\"emf\"} {}$, and so emf could be directly measured. However, standard voltmeters need a current to operate; thus, another technique is needed.\n\nFigure 21.34 An analog voltmeter attached to a battery draws a small but nonzero current and measures a terminal voltage that differs from the emf of the battery. (Note that the script capital E symbolizes electromotive force, or emf.) Since the internal resistance of the battery is not known precisely, it is not possible to calculate the emf precisely.\n\nA potentiometer is a null measurement device for measuring potentials (voltages). (See Figure 21.35.) A voltage source is connected to a resistor $R,R,$ say, a long wire, and passes a constant current through it. There is a steady drop in potential (an $IRIR size 12{ ital \"IR\"} {}$ drop) along the wire, so that a variable potential can be obtained by making contact at varying locations along the wire.\n\nFigure 21.35(b) shows an unknown $emfxemfx size 12{\"emf\" rSub { size 8{x} } } {}$ (represented by script $ExEx size 12{\"emf\" rSub { size 8{x} } } {}$ in the figure) connected in series with a galvanometer. Note that $emfxemfx size 12{\"emf\" rSub { size 8{x} } } {}$ opposes the other voltage source. The location of the contact point (see the arrow on the drawing) is adjusted until the galvanometer reads zero. When the galvanometer reads zero, $emfx=IRxemfx=IRx size 12{\"emf\" rSub { size 8{x} } = ital \"IR\" rSub { size 8{x} } } {}$, where $RxRx size 12{R rSub { size 8{x} } } {}$ is the resistance of the section of wire up to the contact point. 
Since no current flows through the galvanometer, none flows through the unknown emf, and so $\text{emf}_x$ is directly sensed.

Now, a very precisely known standard $\text{emf}_s$ is substituted for $\text{emf}_x$, and the contact point is adjusted until the galvanometer again reads zero, so that $\text{emf}_s = IR_s$. In both cases, no current passes through the galvanometer, and so the current $I$ through the long wire is the same. Upon taking the ratio $\text{emf}_x/\text{emf}_s$, $I$ cancels, giving

$$\frac{\text{emf}_x}{\text{emf}_s} = \frac{IR_x}{IR_s} = \frac{R_x}{R_s}. \tag{21.71}$$

Solving for $\text{emf}_x$ gives

$$\text{emf}_x = \text{emf}_s \frac{R_x}{R_s}. \tag{21.72}$$

Figure 21.35 The potentiometer, a null measurement device. (a) A voltage source connected to a long wire resistor passes a constant current $I$ through it. (b) An unknown emf (labeled script $E_x$ in the figure) is connected as shown, and the point of contact along $R$ is adjusted until the galvanometer reads zero. The segment of wire has a resistance $R_x$ and script $E_x = IR_x$, where $I$ is unaffected by the connection since no current flows through the galvanometer. The unknown emf is thus proportional to the resistance of the wire segment.

Because a long uniform wire is used for $R$, the ratio of resistances $R_x/R_s$ is the same as the ratio of the lengths of wire that zero the galvanometer for each emf. The three quantities on the right-hand side of the equation are now known or measured, and $\text{emf}_x$ can be calculated. The uncertainty in this calculation can be considerably smaller than when using a voltmeter directly, but it is not zero. There is always some uncertainty in the ratio of resistances $R_x/R_s$ and in the standard $\text{emf}_s$. Furthermore, it is not possible to tell when the galvanometer reads exactly zero, which introduces error into both $R_x$ and $R_s$, and may also affect the current $I$.

### Resistance Measurements and the Wheatstone Bridge

There is a variety of so-called ohmmeters that purport to measure resistance. What the most common ohmmeters actually do is to apply a voltage to a resistance, measure the current, and calculate the resistance using Ohm's law. Their readout is this calculated resistance. Two configurations for ohmmeters using standard voltmeters and ammeters are shown in Figure 21.36.
Such configurations are limited in accuracy, because the meters alter both the voltage applied to the resistor and the current that flows through it.

Figure 21.36 Two methods for measuring resistance with standard meters. (a) Assuming a known voltage for the source, an ammeter measures current, and resistance is calculated as $R = V/I$. (b) Since the terminal voltage $V$ varies with current, it is better to measure it. $V$ is most accurately known when $I$ is small, but $I$ itself is most accurately known when it is large.

The Wheatstone bridge is a null measurement device for calculating resistance by balancing potential drops in a circuit. (See Figure 21.37.) The device is called a bridge because the galvanometer forms a bridge between two branches. A variety of bridge devices are used to make null measurements in circuits.

Resistors $R_1$ and $R_2$ are precisely known, while the arrow through $R_3$ indicates that it is a variable resistance. The value of $R_3$ can be precisely read. With the unknown resistance $R_x$ in the circuit, $R_3$ is adjusted until the galvanometer reads zero. The potential difference between points b and d is then zero, meaning that b and d are at the same potential. With no current running through the galvanometer, it has no effect on the rest of the circuit. So the branches abc and adc are in parallel, and each branch has the full voltage of the source. That is, the $IR$ drops along abc and adc are the same. Since b and d are at the same potential, the $IR$ drop along ad must equal the $IR$ drop along ab. Thus,

$$I_1 R_1 = I_2 R_3. \tag{21.73}$$

Again, since b and d are at the same potential, the $IR$ drop along dc must equal the $IR$ drop along bc. Thus,

$$I_1 R_2 = I_2 R_x. \tag{21.74}$$

Taking the ratio of these last two expressions gives

$$\frac{I_1 R_1}{I_1 R_2} = \frac{I_2 R_3}{I_2 R_x}. \tag{21.75}$$

Canceling the currents and solving for $R_x$ yields

$$R_x = R_3 \frac{R_2}{R_1}. \tag{21.76}$$

Figure 21.37 The Wheatstone bridge is used to calculate unknown resistances. The variable resistance $R_3$ is adjusted until the galvanometer reads zero with the switch closed. This simplifies the circuit, allowing $R_x$ to be calculated based on the $IR$ drops as discussed in the text.

This equation is used to calculate the unknown resistance when current through the galvanometer is zero. This method can be very accurate (often to four significant digits), but it is limited by two factors.
First, it is not possible to get the current through the galvanometer to be exactly zero. Second, there are always uncertainties in $R_1$, $R_2$, and $R_3$, which contribute to the uncertainty in $R_x$.

### Check Your Understanding

Identify other factors that might limit the accuracy of null measurements. Would the use of a digital device that is more sensitive than a galvanometer improve the accuracy of null measurements?
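As a quick numeric illustration of equations 21.72 and 21.76, here is a minimal Python sketch. The balance lengths, resistor values, and function names below are hypothetical, chosen only to show how the two null-measurement formulas are applied:

```python
def potentiometer_emf(emf_s, length_x, length_s):
    """Unknown emf from potentiometer balance points (Eq. 21.72).

    For a uniform wire, R_x / R_s equals the ratio of the wire lengths
    that zero the galvanometer, so emf_x = emf_s * (L_x / L_s).
    """
    return emf_s * (length_x / length_s)


def wheatstone_r_x(r_1, r_2, r_3):
    """Unknown resistance from a balanced Wheatstone bridge (Eq. 21.76)."""
    return r_3 * r_2 / r_1


# Hypothetical potentiometer run: a 1.0186 V standard cell balances at
# 36.3 cm, the unknown cell at 45.0 cm on the same uniform wire.
print(potentiometer_emf(1.0186, 45.0, 36.3))  # ~1.263 V

# Hypothetical bridge balance: R1 = 100 ohm, R2 = 1000 ohm, R3 = 15.2 ohm.
print(wheatstone_r_x(100.0, 1000.0, 15.2))    # 152.0 ohm
```

Note that only ratios enter both formulas, which is why the uncertainties discussed above come from the standard emf, the resistance (or length) ratios, and the galvanometer zero, not from any absolute current measurement.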
[ null, "https://openstax.org/rex/releases/v4/a5e5fde/static/media/kinetic.f14ce455.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94066256,"math_prob":0.998708,"size":6094,"snap":"2022-40-2023-06","text_gpt3_token_len":1235,"char_repetition_ratio":0.15073891,"word_repetition_ratio":0.033563673,"special_character_ratio":0.1933049,"punctuation_ratio":0.10689046,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99792403,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T15:14:20Z\",\"WARC-Record-ID\":\"<urn:uuid:27062212-9eef-43b6-9745-037ce27f0282>\",\"Content-Length\":\"464318\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c96d95ed-668a-4c89-b729-d6db0c831c71>\",\"WARC-Concurrent-To\":\"<urn:uuid:842ebec6-b895-4af8-bddc-ebefe3551a9d>\",\"WARC-IP-Address\":\"18.160.46.108\",\"WARC-Target-URI\":\"https://openstax.org/books/college-physics/pages/21-5-null-measurements\",\"WARC-Payload-Digest\":\"sha1:D4JNEM3RR6HZSI6O2SY63I4IPJ4MJTDS\",\"WARC-Block-Digest\":\"sha1:NJJFYAO2HUBC43YWOBKNQOUM6I5WLABI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337631.84_warc_CC-MAIN-20221005140739-20221005170739-00716.warc.gz\"}"}
http://www.cut-the-knot.org/proofs/two_color.shtml
[ "<\n\n# Coloring Points in the Plane and Elsewhere\n\nThere is a class of exciting problems that fall under the purview of what is called Euclidean Ramsey Theory. Points on a line, in a plane or space are assigned colors - are getting colored, so to speak. A configuration of points is said to be monochromatic if all the points in the configuration are of the same color. The theory clarifies the question of what kind of monochromatic configurations are there?\n\n1. Points in the plane are each colored with one of two colors: red or blue. Prove that, for a given distance d, there always exist two points of the same color at the distance d from each other. (Solution)\n\n2. Points in the plane are each colored with one of three colors: red, green, or blue. Prove that, for a given distance d, there always exist two points of the same color at the distance d from each other. (Solution)\n\n3. Points in the plane are each colored with one of two colors: red or blue. The set of distances between the blue points is blue and the set of distances between the red points is red. Prove that either one or the other contains all the positive reals. (Solution)\n\n4. Points on a straight line are colored in two colors. Prove that it is always possible to find three points of the same color with one being the midpoint of the other two. (Solution)\n\n5. Points in the plane are colored in two colors. Prove that it is always possible to find a monochromatic equilateral triangle, i.e., three points of the same color with all pairwise distances equal. (Solution)\n\n6. Is there a coloring of the plane with three colors such that any straight line is bichromatic, i.e. only contains points of two colors? (Solution)\n\n7. If each point of the plane is colored red or blue then some rectangle has its vertices all the same color. (Solution)\n\n8. Six points are given in the space such that the pairwise distances between them are all distinct. Consider the triangles with vertices at these points. Prove that the longest side of one of these triangles is at the same time the shortest side of another. (Solution\n\n9. The design obtained by cutting the plane with straight lines can be colored with just two colors so that no two regions that share a side are of the same color. (Solution)\n\n### References\n\n1. R. L. Graham, Euclidean Ramsey Theory, in Handbook of Discrete and Computational Mathematics, J. E. Goodman, J. O'Rourke (eds), Chapman & Hall/CRC, 2004\n2. R. B. J. T. Allenby, A. Slomson, How to Count: An Introduction to Combinatorics, CRC Press, 2011 (2nd edition)\n3. A. Soifer, Geometric Etudes in Combinatorial Mathematics, Springer, 2010 (2nd, expanded edition)", null, "", null, "" ]
[ null, "http://www.cut-the-knot.org/gifs/tbow_sh.gif", null, "http://www.cut-the-knot.org/gifs/tbow_sh.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8949273,"math_prob":0.98465395,"size":3279,"snap":"2019-26-2019-30","text_gpt3_token_len":799,"char_repetition_ratio":0.15450382,"word_repetition_ratio":0.21913044,"special_character_ratio":0.23360781,"punctuation_ratio":0.11305732,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98625576,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-17T11:47:24Z\",\"WARC-Record-ID\":\"<urn:uuid:c5a8b04f-be92-49a3-9f38-672a5208f5d7>\",\"Content-Length\":\"15872\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:660ef2a4-6ed1-4f0d-9c8e-2f50a8baf406>\",\"WARC-Concurrent-To\":\"<urn:uuid:8cad691b-3db3-4816-81bd-e00bd0bb48de>\",\"WARC-IP-Address\":\"107.180.50.227\",\"WARC-Target-URI\":\"http://www.cut-the-knot.org/proofs/two_color.shtml\",\"WARC-Payload-Digest\":\"sha1:56GUBPTER5X3ANGHFLZLQMEBG5TYAIQJ\",\"WARC-Block-Digest\":\"sha1:ZYN4EWWJZFH6Z5ICKLUVH6FULAXJI4QA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525136.58_warc_CC-MAIN-20190717101524-20190717123524-00306.warc.gz\"}"}
https://www.colorhexa.com/44c793
[ "# #44c793 Color Information\n\nIn a RGB color space, hex #44c793 is composed of 26.7% red, 78% green and 57.6% blue. Whereas in a CMYK color space, it is composed of 65.8% cyan, 0% magenta, 26.1% yellow and 22% black. It has a hue angle of 156.2 degrees, a saturation of 53.9% and a lightness of 52.4%. #44c793 color hex could be obtained by blending #88ffff with #008f27. Closest websafe color is: #33cc99.\n\n• R 27\n• G 78\n• B 58\nRGB color chart\n• C 66\n• M 0\n• Y 26\n• K 22\nCMYK color chart\n\n#44c793 color description : Moderate cyan - lime green.\n\n# #44c793 Color Conversion\n\nThe hexadecimal color #44c793 has RGB values of R:68, G:199, B:147 and CMYK values of C:0.66, M:0, Y:0.26, K:0.22. Its decimal value is 4507539.\n\nHex triplet RGB Decimal 44c793 `#44c793` 68, 199, 147 `rgb(68,199,147)` 26.7, 78, 57.6 `rgb(26.7%,78%,57.6%)` 66, 0, 26, 22 156.2°, 53.9, 52.4 `hsl(156.2,53.9%,52.4%)` 156.2°, 65.8, 78 33cc99 `#33cc99`\nCIE-LAB 72.349, -47.837, 15.779 28.072, 44.18, 34.65 0.263, 0.413, 44.18 72.349, 50.372, 161.745 72.349, -53.186, 30.087 66.468, -40.932, 15.619 01000100, 11000111, 10010011\n\n# Color Schemes with #44c793\n\n• #44c793\n``#44c793` `rgb(68,199,147)``\n• #c74478\n``#c74478` `rgb(199,68,120)``\nComplementary Color\n• #44c752\n``#44c752` `rgb(68,199,82)``\n• #44c793\n``#44c793` `rgb(68,199,147)``\n• #44bac7\n``#44bac7` `rgb(68,186,199)``\nAnalogous Color\n• #c75244\n``#c75244` `rgb(199,82,68)``\n• #44c793\n``#44c793` `rgb(68,199,147)``\n• #c744ba\n``#c744ba` `rgb(199,68,186)``\nSplit Complementary Color\n• #c79344\n``#c79344` `rgb(199,147,68)``\n• #44c793\n``#44c793` `rgb(68,199,147)``\n• #9344c7\n``#9344c7` `rgb(147,68,199)``\n• #78c744\n``#78c744` `rgb(120,199,68)``\n• #44c793\n``#44c793` `rgb(68,199,147)``\n• #9344c7\n``#9344c7` `rgb(147,68,199)``\n• #c74478\n``#c74478` `rgb(199,68,120)``\n• #2c936a\n``#2c936a` `rgb(44,147,106)``\n• #32a678\n``#32a678` `rgb(50,166,120)``\n• #38ba86\n``#38ba86` `rgb(56,186,134)``\n• #44c793\n``#44c793` `rgb(68,199,147)``\n• #58cd9e\n``#58cd9e` `rgb(88,205,158)``\n• #6bd3aa\n``#6bd3aa` `rgb(107,211,170)``\n• #7fd9b5\n``#7fd9b5` `rgb(127,217,181)``\nMonochromatic Color\n\n# Alternatives to #44c793\n\nBelow, you can see some colors close to #44c793. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #44c772\n``#44c772` `rgb(68,199,114)``\n• #44c77d\n``#44c77d` `rgb(68,199,125)``\n• #44c788\n``#44c788` `rgb(68,199,136)``\n• #44c793\n``#44c793` `rgb(68,199,147)``\n• #44c79e\n``#44c79e` `rgb(68,199,158)``\n• #44c7a9\n``#44c7a9` `rgb(68,199,169)``\n• #44c7b4\n``#44c7b4` `rgb(68,199,180)``\nSimilar Colors\n\n# #44c793 Preview\n\nThis text has a font color of #44c793.\n\n``<span style=\"color:#44c793;\">Text here</span>``\n#44c793 background color\n\nThis paragraph has a background color of #44c793.\n\n``<p style=\"background-color:#44c793;\">Content here</p>``\n#44c793 border color\n\nThis element has a border color of #44c793.\n\n``<div style=\"border:1px solid #44c793;\">Content here</div>``\nCSS codes\n``.text {color:#44c793;}``\n``.background {background-color:#44c793;}``\n``.border {border:1px solid #44c793;}``\n\n# Shades and Tints of #44c793\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #030907 is the darkest color, while #f9fdfc is the lightest one.

Shades and tints: #030907 (3, 9, 7), #071812 (7, 24, 18), #0c271c (12, 39, 28), #103727 (16, 55, 39), #154632 (21, 70, 50), #19553d (25, 85, 61), #1e6448 (30, 100, 72), #227353 (34, 115, 83), #27825e (39, 130, 94), #2b9169 (43, 145, 105), #30a074 (48, 160, 116), #34af7f (52, 175, 127), #39be89 (57, 190, 137), #44c793 (68, 199, 147), #53cc9c (83, 204, 156), #62d0a4 (98, 208, 164), #71d5ad (113, 213, 173), #80d9b6 (128, 217, 182), #8fdebf (143, 222, 191), #9fe2c7 (159, 226, 199), #aee7d0 (174, 231, 208), #bdebd9 (189, 235, 217), #ccf0e1 (204, 240, 225), #dbf4ea (219, 244, 234), #eaf9f3 (234, 249, 243), #f9fdfc (249, 253, 252)

# Tones of #44c793

A tone is produced by adding gray to any pure hue. In this case, #858686 is the least saturated color, while #15f69d is the most saturated one.

Tones: #858686 (133, 134, 134), #7c8f87 (124, 143, 135), #739889 (115, 152, 137), #69a28b (105, 162, 139), #60ab8d (96, 171, 141), #57b48f (87, 180, 143), #4dbe91 (77, 190, 145), #44c793 (68, 199, 147), #3bd095 (59, 208, 149), #31da97 (49, 218, 151), #28e399 (40, 227, 153), #1fec9b (31, 236, 155), #15f69d (21, 246, 157)

# Color Blindness Simulator

Below, you can see how #44c793 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.

Monochromacy
- Achromatopsia: 0.005% of the population
- Atypical achromatopsia: 0.001% of the population

Dichromacy
- Protanopia: 1% of men
- Deuteranopia: 1% of men
- Tritanopia: 0.001% of the population

Trichromacy
- Protanomaly: 1% of men, 0.01% of women
- Deuteranomaly: 6% of men, 0.4% of women
- Tritanomaly: 0.01% of the population
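The conversions and the shade/tint constructions above are straightforward to reproduce. A minimal Python sketch using only the standard library (the helper names are mine, not ColorHexa's) matches the numbers quoted for #44c793:

```python
import colorsys

def hex_to_rgb(hex_code):
    """'#44c793' -> (68, 199, 147)."""
    h = hex_code.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_cmyk(r, g, b):
    """Plain RGB -> CMYK conversion, returning fractions in 0..1."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0
    c, m, y = (1 - v / 255 for v in (r, g, b))
    k = min(c, m, y)
    return tuple((v - k) / (1 - k) for v in (c, m, y)) + (k,)

def shade(rgb, amount):
    """Mix toward black: amount = 0 keeps the color, 1 gives black."""
    return tuple(round(v * (1 - amount)) for v in rgb)

def tint(rgb, amount):
    """Mix toward white: amount = 0 keeps the color, 1 gives white."""
    return tuple(round(v + (255 - v) * amount) for v in rgb)

rgb = hex_to_rgb('#44c793')
h, l, s = colorsys.rgb_to_hls(*(v / 255 for v in rgb))
print(rgb)                                   # (68, 199, 147)
print(round(h * 360, 1), round(s * 100, 1), round(l * 100, 1))
                                             # 156.2 53.9 52.4  (HSL)
print([round(v, 3) for v in rgb_to_cmyk(*rgb)])
                                             # [0.658, 0.0, 0.261, 0.22]
print(shade(rgb, 0.5), tint(rgb, 0.5))       # darker and lighter variants
```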
{"ft_lang_label":"__label__en","ft_lang_prob":0.5008683,"math_prob":0.4795527,"size":3720,"snap":"2020-10-2020-16","text_gpt3_token_len":1630,"char_repetition_ratio":0.120559745,"word_repetition_ratio":0.011049724,"special_character_ratio":0.5653226,"punctuation_ratio":0.23463687,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9828417,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-03T11:59:12Z\",\"WARC-Record-ID\":\"<urn:uuid:b3084ef4-8c9e-4e3d-9580-7ff569d51d15>\",\"Content-Length\":\"36326\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ab49690d-104c-468f-a675-32a3118a7d6e>\",\"WARC-Concurrent-To\":\"<urn:uuid:d44821aa-bb41-4b10-93c0-9c1024423b29>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/44c793\",\"WARC-Payload-Digest\":\"sha1:N43LLPDWXYURH3HKMU5WDUGUZG2MFUG5\",\"WARC-Block-Digest\":\"sha1:NKGQDS3JJTVTPRSGBNRTHYCRDTPGTC5U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370510846.12_warc_CC-MAIN-20200403092656-20200403122656-00520.warc.gz\"}"}
https://matpitka.blogspot.com/2023/10/
[ "## Monday, October 30, 2023\n\n### How could strong interactions emerge at the level of scattering amplitudes?\n\nThe above considerations are dangerous in that the intuitive QFT based thinking based is applied in TGD context where all interactions reduced to the dynamics of 3-surfaces and fields are geometrized by reducing them to the induced geometry at the level of space-time surface. Quantum field theory limit is obtained as an approximation and the applications of its notions at the fundamental level might be dangerous. In any case, it seems that only electroweak gauge potentials appear in the fermionic vertices and this might be a problem.\n1. By holography perturbation series is not needed in TGD. Scattering amplitudes are sums of amplitudes associated with Bohr orbits, which are not completely deterministic: there is no path integral. Whether path integral could be an approximate approximation for this sum under some conditions is an interesting question.\n2. It is best to start from a concrete problem. Is pair creation possible in TGD? The problem is that fermion and antifermion numbers are separately conserved for the most obvious proposals for scattering amplitudes. This essentially due to the fact that gauge bosons correspond to fermion-antifermion pairs. Intuitively, fermion pair creation means that fermion turns backwards in time. If one considers fermions in classical background fields this turning back corresponds to a 2-particle vertex. Could pair creation in classical fields be a fundamental process rather than a mere approximation in the TGD framework. This would conform with the vision that classical physics is an exact part of quantum physics.\n\nThe turning back in time means a sharp corner of the fermion line, which is light-like elsewhere. M4 time coordinate has a discontinuous derivative with respect to the internal time coordinate of the line. I have propoeed (see this and this) that this kind of singularities are associated with vertices involving pair creation and that they correspond to local defects making the differentiable structure of X4 exotic. The basic problem of GRT would become a victory in the TGD framework and also mean that pair creation is possible only in 4-D space-time.\n\nOne can imagine two kinds of turning backs in time.\n1. The turning back in time could occur for a 3-D surface such as monopole flux tube and induce the same process the string world sheets associated with the flux tubes and for the ends of the string world sheets as fermion lines ending at the 3-D light-like orbits of partonic 2-surfaces.\n2. In the fusion of two 2-sheeted monopole flux tubes along their \"ends\" identifiable as partonic 2-surfaces wormhole contacts, the ends would fuse instantaneously (this process is analogous to \"join along boundaries\". The time reversal of this process would correspond to the splitting of the monopole flux tube inducing a turning back in time for a partonic 2-surface and for fermionic lines as boundaries of string world sheets at the partonic 2-surface.\n\nThis would be analogous to a creation of a fermion pair in a classical induced gauge field, which is electroweak. The same would occur for the partonic 2-surfaces as opposite wormhole throats and for the string world sheets having light-like boundaries at the orbits of partonic 2-suraces.\n\n3. The light-like orbit of a partonic 2-surface contains fermionic lines as light-like boundaries of string world sheets. 
A good guess is that the singularity is a cusp catastrophe, so that the surface turns back in time in exactly the opposite direction. One would have an infinitely sharp knife edge.

What can one say about the scattering amplitudes on the basis of this picture? Can one obtain the analog of the 2-vertex describing the creation of a fermion pair in a classical external field?

1. The action for a geometric object of a given dimension defines modified gamma matrices in terms of canonical momentum currents as Γ^α = T^α_k Γ^k, T^α_k = ∂L/∂(∂_α h^k). By hermiticity, the covariant divergence D_α Γ^α of the vector defined by the modified gamma matrices must vanish. This is true if the field equations are satisfied. This implies supersymmetry between fermionic and bosonic degrees of freedom.

For space-time surfaces, the action is the Kähler action plus a volume term. For the 3-D light-like partonic orbits one has the Chern-Simons-Kähler action. For string world sheets one has the area action plus the analog of Kähler magnetic flux. For the light-like boundaries of string world sheets defining fermion lines one has the integral ∫ A_μ dx^μ. The induced spinors are restrictions of the second-quantized spinor fields of H = M^4 × CP_2, and the argument is that the modified Dirac equation holds true everywhere, except possibly at the turning points.

2. Consider now the interaction part of the action defining the fermionic vertices. The basic problem is that the entire modified Dirac action density is present and vanishes if the modified Dirac equation holds true everywhere. In perturbative QFT, one separates the interaction term from the action and obtains essentially Ψ̄ Γ^α D_α Ψ. This is not possible now.

The key observation is that the modified Dirac equation could fail at the turning points! QFT vertices would have a purely geometric interpretation. The gamma matrices appearing in the modified Dirac action would be continuous, but at the singularity the derivative ∂_μ Ψ = ∂_μ m^k ∂_k Ψ of the induced free second-quantized spinor field of H would become discontinuous. For a Fourier mode with momentum p_k, one obtains

∂_μ Ψ_p = p_k ∂_μ m^k Ψ_p ≡ p_μ Ψ_p.

This derivative changes sign at the blade singularity. At the singularity one can define this derivative as an average, and this leaves from the action Ψ̄ Γ^α D_α Ψ only the term Ψ̄ Γ^α A_α Ψ. This is just the interaction part of the action!

3. This argument can be applied to singularities of various dimensions. For D=3, the action contains the modified gamma matrices for the Kähler action plus volume term. For D=2, the Chern-Simons-Kähler action defines the modified gamma matrices. For string world sheets the action could be induced from the area action plus Kähler magnetic flux. For fermion lines it comes from the 1-D action for a fermion in the induced gauge potential, so that the standard QFT result would be obtained in this case.

How does this picture relate to perturbative QFT?

1. The first thing to notice is that in the TGD framework gauge couplings do not appear at all in the interaction vertices. The induced gauge potentials do not correspond to A but to gA. The couplings emerge only at the level of scattering amplitudes when one goes to the QFT limit. Only the Kähler coupling strength and the cosmological constant appear in the action.
2. The basic implication is that only the electroweak gauge potentials appear in the vertices. This conforms with the dangerous-looking intuition that also strong interactions can be described in terms of electroweak vertices, but this is of course a potential killer prediction.
One should be able to show that the presence of WCW degrees of freedom, taken into account minimally in terms of fermionic color partial waves in CP_2, predicts strong interactions and predicts the value of α_s. Note that the restriction of the spinor harmonics of CP_2 to a homologically non-trivial geodesic sphere gives U(2) partial waves with the same quantum numbers as the SU(3) color partial waves.

3. The TGD approach differs dramatically from perturbative QFT. Since 1/α_s appears in the vertex, an increase of h_eff in the vertex increases it: just the opposite occurs in perturbative QFT! This seems to be in conflict with the QFT intuition suggesting a perturbation series in α_s ∝ 1/ℏ_eff. The explanation is that 1/α_K appears as the coupling parameter instead of α_s.

This is reminiscent of the electric-magnetic duality between the perturbative and non-perturbative phases of gauge theories, where the magnetic coupling strength is proportional to the inverse of the electric coupling strength. The description in terms of monopole flux tubes is therefore analogous to the description in terms of magnetic monopoles in the QFT framework. In TGD, it is the only natural description at the fundamental level. The decrease of α_K by an increase of h_eff would indeed correspond to the QFT-type reduction of α_s.

Could the description based on Maxwellian non-monopole flux tubes correspond to the usual perturbative phase without magnetic monopoles? In the Maxwellian phase there is a huge vacuum degeneracy due to the presence of vacuum extremals with a vanishing Kähler form at the limit of vanishing volume action. Could this degeneracy make the path integral a practical approximation at the QFT limit?

4. h_eff/h_0 = n is proposed to correspond to the dimension of the algebraic extension of rationals associated with the space-time surface and to serve as a measure of algebraic complexity. The increase of the algebraic complexity of the space-time region defining the strong interaction volume would also make the interactions strong. In TGD, the fundamental coupling strength would be α_K, and the increase of α_K for the ordinary value of h would force the increase of h. This should happen below the electroweak scale, or at least below the confinement scale, and make a perturbation theory describing strong interactions possible. This description would involve monopole flux tubes and their reconnections.

See the article About Platonization of Nuclear String Model and of Model of Atoms or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

## Wednesday, October 25, 2023

### About the identification of the Schrödinger galaxy

The latest mystery has been created by the observations of the James Webb telescope (see this and this).

It has been found that the determination of the redshift 1 + z = a_now/a_emit gives two possible space-time positions for the Schrödinger galaxy CEERS-1749. Here a_now resp. a_emit corresponds to the scale factor of the recent cosmology resp. of the cosmology when the radiation was emitted. Note that for not too large distances the recession velocity β satisfies the Hubble law β = HD.
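As a sanity check on the numbers used in the rest of this post (z ≈ 17 and z ≈ 5 are the two candidate redshifts discussed below), a trivial Python sketch of the redshift-scale factor relation just stated:

```python
def a_emit(z, a_now=1.0):
    """Scale factor at emission, from 1 + z = a_now / a_emit."""
    return a_now / (1.0 + z)

for z in (17.0, 5.0):
    print(f"z = {z:4.1f}:  a_emit / a_now = {a_emit(z):.4f}")

# Ratio of the two emission-time scale factors, i.e. the scaling factor
# between the corresponding H^3 tessellations considered below:
print(a_emit(5.0) / a_emit(17.0))   # (1 + 17) / (1 + 5) = 3.0
```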
The nickname "Schrödinger galaxy" comes from the impression that the same galaxy could have existed at two different times in the same direction.

Accordingly, CEERS-1749 allows two alternative identifications: either as an exceptionally luminous galaxy with z ≈ 17 or as a galaxy with exceptionally low luminosity with z ≈ 5. Both identifications challenge the standard view of galaxy formation based on ΛCDM cosmology.

1. The first interpretation is that CEERS-1749 is very luminous, much more luminous than standard cosmology would suggest, and has the redshift z ≈ 17, which corresponds to light with an age of 13.6 billion years. The Universe was t_emit = 220 million years old at the moment of emission.

In the TGD framework, the puzzlingly high luminosity might be understood in terms of a cosmic web of monopole flux tubes guiding the radiation along the flux tubes. This would also make it possible to understand other similar galaxies with a high value of z, but would not explain their very long evolutionary ages and sizes. Here the zero energy ontology (ZEO) of TGD could come to the rescue (see this, this and this).

2. Another analysis suggests that the environment of CEERS-1749 contains galaxies with redshift z ≈ 5. The mundane explanation would be that CEERS-1749 is an exceptionally dusty/quenched galaxy with the redshift z ≈ 5, for which the light would be 12.5 billion years old.

Could TGD explain the exceptionally low luminosity of the z ≈ 5 galaxy? Zero energy ontology (ZEO) and the TGD view of dark matter and energy predict that also galaxies should make "big" state function reductions (BSFRs) in astrophysical scales. In BSFRs the arrow of time changes, so that the galaxy would become invisible since the classical signals from it would propagate to the geometric past. This might explain the passive periods of galaxies quite generally and the existence of galaxies older than the Universe. Could the z ≈ 5 galaxy be in this passive phase with a reversed arrow of time, so that the radiation from it would be exceptionally weak?

TGD seems to be consistent with both explanations. To make the situation even more confusing, one can ask whether two distinct galaxies on the same line of sight could be involved. This kind of assumption seems to be unnecessary, but one can try to defend the question in the TGD framework.

1. In the TGD framework space-times are 4-surfaces in M^4 × CP_2. A good approximation is an Einsteinian 4-surface, which by definition has a 4-D M^4 projection. The scale factor a corresponds to the light-cone proper time assignable to the causal diamond CD with which the space-time surface is associated. a is a very convenient coordinate since it has a simple geometrical interpretation at the level of the embedding space M^4 × CP_2. The cosmic time t assignable to the space-time surface is expressible as t(a).

2. Astrophysical objects, in particular galaxies, can form comoving tessellations (lattice-like structures) of the hyperbolic space H^3, which corresponds to a = constant, and thus t(a) = constant surfaces. The tessellation of H^3 expands with cosmic time a, and the values of the hyperbolic angle η and the spatial direction angles for the points of the tessellation do not depend on the value of a.
The direction angles and the hyperbolic angle for the points of the tessellation are quantized, in analogy with the angles characterizing the points of a Platonic solid, and this gives rise to a quantized redshift.

A tessellation of stars making possible gravitational diffraction, and therefore the channelling and amplification of gravitational radiation in discrete directions, could explain the recently observed gravitational hum (see this).

These tessellations could also explain the mysterious God's fingers, discovered by Halton Arp, as sequences of identical-looking stars or galaxies of hyperbolic tessellations along the line of sight (see this and this). Maybe something similar is involved now.

This raises two questions.

1. Could two similar galaxies on the same line of sight be behind the Schrödinger galaxy and correspond to the points of scaled versions of the tessellation of H^3, having therefore different values of a and of the hyperbolic angle η? The spatial directions characterized by the direction angles would be the same. Could one think that the tessellation consists of similar galaxies in the same way as lattices in condensed matter physics consist of similar atoms? The proposed explanation for the recently observed gravitational hum indeed assumes a tessellation formed by stars, and most stars are very similar to our Sun (see this).

The obvious question is whether also the neighbours of the z ≈ 5 galaxy belong to the scaled-up tessellation. The scaling factor between these two tessellations would be a_5/a_17 = (1+17)/(1+5) = 3. Could it be that the resolution does not allow one to distinguish the neighbours of the z ≈ 17 galaxy from each other, so that they would be seen as a single galaxy with an exceptionally high luminosity? Or could it be that the z ≈ 5 galaxy is in a passive phase with a reversed arrow of time and does not create any detectable signal, so that the signal is due to the z ≈ 17 galaxy?

2. Could one even think that the values of the hyperbolic angles are the same for the two galaxies, in which case the z ≈ 5 galaxy could correspond to the z ≈ 17 galaxy but in the passive phase with an opposite arrow of time? The ages of most galaxies are between 10 and 13.6 billion years, so this option deserves to be excluded. Could the hyperbolic tessellation explain why two similar galaxies could exist on the same line of sight in a 4-dimensional sense?

This option is attractive but actually easy to exclude. The light arriving from the galaxies propagates along light-like geodesics. Suppose that a light-like geodesic connects the observer to the z ≈ 17 galaxy. The position of the z ≈ 5 galaxy would be obtained by scaling the H^3 of the older galaxy by the ratio a(young)/a(old). Geometrically it is rather obvious that the geodesic connecting it to the observer cannot be light-like but becomes space-like. If one approximates space-time with M^4, this is completely obvious.

For a more detailed analysis, see the article TGD view of the paradoxical findings of the James Webb telescope or the chapter TGD View of the Engine Powering Jets from Active Galactic Nuclei.

For a summary of earlier postings see Latest progress in TGD.
For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

## Monday, October 23, 2023

### Pollack effect as a universal energy transfer mechanism?

The proposal of the recent article Some New Aspects of the TGD Inspired Model of the Nerve Pulse is that nerve pulse generation relies on a flip-flop mechanism using the energy liberated in the reversal of the Pollack effect on one side of the cell membrane to induce the Pollack effect on the opposite side. The liberated energy would be channelled along the pair of monopole flux tubes emerging by reconnection from two U-shaped monopole flux tubes. The flip-flop mechanism is highly analogous to a seesaw in which the gravitational binding energy at the first end of the seesaw is reduced and transforms to kinetic energy, reducing the gravitational binding energy at the second end of the seesaw.

All biochemical processes involve a transfer of metabolic energy. Could the flip-flop mechanism serve as a universal mechanism of energy transfer accompanying biochemical processes?

The first example is the TGD-based view of biocatalysis, according to which a phase transition reducing the value of h_eff, and thus the length of the monopole flux tube pair connecting the reactants, liberates energy which kicks the reactants over the potential energy wall and in this way dramatically increases the rate of the reaction. Also now, the liberated energy could propagate as dark photons along the flux tube pair and raise the system above the reaction wall, or at least reduce its height.

Also the ADP → ATP process could involve the Pollack effect and its reversal. In this process 3 protons are believed to flow through the cell membrane and liberate energy given to the ADP, so that the process ADP + Pi → ATP takes place. This system has been compared to an energy plant. This raises heretic questions. Does the flow of ordinary protons through the mitochondrial membrane really occur? Could the charge separation also in this case be between the cell interior and its magnetic body?

1. The protons believed to flow through the mitochondrial membrane would in the initial situation be gravitationally dark, generated by the Pollack effect, for which the energy would be provided as energy liberated by biomolecules in a process which could be a time reversal of the energy storage in photosynthesis.
2. The reverse Pollack effect inside the mitochondrial membrane could transform the dark protons to ordinary protons and liberate energy, which is carried through the membrane as dark photons to the opposite side. This would allow the high-energy phosphate bond of ATP to form in the reaction ADP + Pi → ATP. According to the TGD proposal (see this and this), the liberated energy could be used to kick the proton to the gravitational monopole flux tube, which would have a length of the order of the Earth's size scale, so that the gravitational potential energy would be of the same order of magnitude as the metabolic energy quantum with a nominal value of .5 eV. This dark proton would be the energy carrier in the mysterious high-energy phosphate bond, which does not quite fit the framework of biochemistry.
3. ATP would donate the phosphate ion P⁻ to the target molecule, which would utilize this temporarily stored metabolic energy as the dark proton transforms to an ordinary one. Depending on the lifetime of the dark proton, this could occur as the target molecule receives P or later. In any case, this should involve the transformation P⁻ → Pi.
This could correspond to the transformation of the gravitationally dark proton to an ordinary proton, so that the charge separation giving rise to P⁻ would be between Pi and its magnetic body.

In the chemical storage of metabolic energy in photosynthesis, ATP provides the energy for the biomolecule storing the energy. This process should be accompanied by the transformation of P⁻ to Pi. It is instructive to consider two options that immediately come to mind.

Option I: The realistic-looking option is that the energy is stored as the energy of an ordinary chemical bond.

1. A hydrogen bond, which can form between a proton and an electronegative atom such as O or N, is a natural candidate. A hydrogen bond indeed has an energy of the order of the metabolic energy quantum .5 eV. The simplest option is that the metabolic energy provided by the gravitational flux tube of ATP is liberated and used to generate a hydrogen bond of the protein. The dark gravitational flux tube loop would be nothing but a very long hydrogen bond.
2. For negatively charged molecules, the proton of a hydrogen bond could be gravitationally dark. For dark positively charged ions, some valence electrons could be gravitationally dark. In the electronic case the reduction of the gravitational binding energy would be smaller, roughly by a factor m_e/m_p ∼ 2^-11, and this leads to the proposal of an electronic metabolic energy quantum (see this, this and this), for which there is some empirical support from the work of Adamatsky (see this).

Option II: The less realistic-looking option is that the molecule stores the metabolic energy permanently as a gravitationally dark proton. The motivation for its detailed consideration is that it provides insights into the Pollack effect.

1. The dark proton associated with P⁻ should become a dark proton associated with the molecule. In this case the length of the hydrogen bond would become very long, increasing the ability to store metabolic energy.

The hydrogen-bonded structure would be effectively negatively charged, but this is just what happens in the EZ in the Pollack effect! This supports the view that the Pollack effect for water basically involves the lengthening of hydrogen bonds to U-shaped gravitational monopole flux tubes.

2. The Pollack effect requires a metabolic energy feed, since the value of h_gr tends to decrease spontaneously. This suggests that the dark gravitational hydrogen bonds are not long-lived enough for the purpose of long-term metabolic energy storage. Rather, they would naturally serve as a temporary metabolic energy storage needed in the transfer of metabolic energy. The temporary storage of metabolic energy in ATP would be a quantum variant of the seesaw.
3. The first naive guess for the scale of the lifetime of the gravitationally dark proton would be given by the gravitational Compton time determined by the gravitational Compton length Λ_gr = GM/β_0 = r_S(M)/2β_0 (r_S denotes the Schwarzschild radius). For the Earth, with r_S ∼ 1 cm, one has T_gr = 1.5 × 10^-11 s, corresponding to the energy .6 meV for the ordinary Planck constant and perhaps related to the miniature membrane potentials. For the Moon, with mass M_M = .01 M_E, this time is about T_gr ∼ 1.5 × 10^-13 s. For the ordinary Planck constant, this time corresponds to an energy of .07 eV and is not far from the energy assignable to the membrane potential.
For the Sun, the gravitational Compton length is one half of the Earth's radius, which gives T_gr = .02 s, corresponding to the 50 Hz EEG frequency.

Note that the rotation frequency of ATP synthase, analogous to a power plant, is around 300 Hz, which is the cyclotron frequency of the proton in the endogenous magnetic field of .2 Gauss, interpreted in TGD as the strength of the monopole flux part of the Earth's magnetic field.

See the article Some New Aspects of the TGD Inspired Model of the Nerve Pulse or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

## Sunday, October 22, 2023

### Could a new kind of period of exploration save our civilization from collapse?

Everyone knows Columbus and the other heroes from the period of the exploration journeys when we learned the geography of our home planet. After Newton, planetary physics emerged and we started to learn about our planetary system. Eventually astrophysics and cosmology emerged. Now the James Webb telescope is shattering our views about the foundations of cosmology and astrophysics, and a profound revolution in the world of physics seems unavoidable. This revolution also relates to our view of time. Also in biology, brain science, and the science of consciousness, we are on the verge of revolution. In half a century our recent physics-based world view might be regarded as being as childish as the world of the Flat-Earthers.

In sharp contrast to this progress in science and technology, our civilization has fallen into a state of stagnation. Its materialism-based world view has deprived us of ethics and morals, and the highest goal of modern man is to consume ever more. Success in our society means money, fame and power. The uncontrolled application of various technological breakthroughs has created lethal-looking environmental problems, and social structures are breaking down. It is quite possible that our civilization is doomed to collapse. This is nothing new and actually fits with the fact that all complex systems are born, flourish, and eventually die. It is good to remember that after the collapse of the Roman empire there was a period of stagnation of about 1500 years before the development of the mathematics created in antiquity could continue. In fact, they almost discovered calculus and computers before the collapse.

In this kind of situation one can wonder whether a new exploration period could be possible and give a deeper meaning to the existence of our western civilization, making us something more than consumers. Could the striking findings of the James Webb telescope, and the equally striking discoveries from other branches of natural science which do not gain the attention of popular media, inspire a new period of exploration of our world, allowing us to update our lethally wrong world view?

### What could be the big question?

What could be the question catching the attention of adventurous minds in the future? What are dark matter and dark energy, really? This is certainly one of the deepest mysteries of present-day science. Could the attempt to understand dark matter give meaning to the existence of the society?

Dark matter is indeed an excellent candidate for the problem of the next century. Mainstream science knows only of the existence of dark matter and energy.
The particle-physics-inspired models have repeatedly failed the tests, and the halo model of galactic dark matter relying on them is in deep difficulties. The same can be said of the MOND model, which denies the existence of dark matter and postulates that Newtonian gravitation fails for weak fields. This is a rather paradoxical-looking assumption, which very few can take seriously.

### TGD answer to the big question

The TGD explanation of dark matter relies on a new view of both space-time and quantum theory. TGD predicts the existence of a dark matter hierarchy as phases of ordinary matter labelled by the values h_eff of the (effective) Planck constant, which is a multiple of its minimal value. Dark matter would be simply ordinary matter in a phase with a non-standard value of the Planck constant. If the value of h_eff is large enough, this phase of matter is quantum coherent even on macroscopic scales. This would explain the mysterious ability of living matter to behave coherently on macroscales, impossible to understand in the biology-as-nothing-but-chemistry approach. The quantum coherence of the dark matter would induce the ordinary coherence of biomatter.

This view also revolutionizes the views about elementary particle, hadron, nuclear, atomic and molecular physics. The same basic topological mechanisms appear in all these physics and a lot of new physics is predicted (see this). The dark matter would reside on space-time sheets (a new notion forced by the TGD view of space-time) characterized by the value of h_eff. The value of h_eff would characterize the algebraic complexity of the space-time sheet, which in turn is a natural measure of the capacity to represent conscious information. The h_eff hierarchy would define an evolutionary hierarchy.

The most natural candidates for the space-time sheets carrying dark matter would be what I call magnetic bodies. The TGD view of space-time predicts that the electromagnetic fields of a system define a kind of field body of the system, having a well-defined geometric anatomy with body parts, motor actions, and so on. In particular, the magnetic body, consisting of monopole flux tubes, would serve as the "boss", the controller of the system, because its IQ, characterized by the value of h_eff, would be high.

The predicted values of the Planck constant are largest for the monopole flux tubes mediating quantum gravitation. This conforms with the facts that gravitation has infinite range and is unscreened, and with the fact that the quantum coherence scale increases with h_eff. The highest values of the Planck constant would be associated with the gravitational monopole flux tubes of the Earth, the Moon, other planets, the Sun, and even galaxies. The unavoidable prediction is that the magnetic bodies of these astrophysical objects could play a key role in the quantum biology of the Earth. Horoscopes make no sense, but astrologers might not have been completely wrong. Hard science must rely on numbers, and the number of numerical miracles supporting this view has been accumulating (see for instance this, this, this).

### The field bodies as the target of the new period of exploration?

These considerations suggest that the new period of exploration could have the electromagnetic environment of the Earth as its target. What do the magnetic and electric bodies of the Earth, the planets, the Sun, the galaxy... look like? How do they interact?
This would also be an exploration of the inner world, and not only ours: the prediction is that life and consciousness are universal. This is so because the h_eff hierarchy plays a central role in the understanding of conscious experience and intelligence.

## Friday, October 20, 2023

### Could the predicted new atomic physics kill the Platonic vision?

The Platonic vision connecting hadron physics, nuclear physics and atomic physics predicts a lot of new atomic physics, and this could turn out to be fatal. I hasten to confess that the following speculations reflect my rudimentary knowledge of the details of atomic physics. The new conceptual element is the flux tube, which can be regarded as a spring with mass and an elastic constant (string tension).

The first question concerns electric fields in the flux tube picture.

1. If only flux tubes are present, the electric fluxes must run along them (a more conservative option is that the fluxes flow to a large space-time sheet). Perhaps the most natural interpretation is that the localization of electric fluxes to flux tubes induces a constraint force due to the space-time geometry, something completely new. If so, one can argue that the dynamics of the flux tubes carrying also electric flux automatically describes the repulsive Coulomb force subject to geometrodynamic constraints.

An important implication is that the Hamiltonian cycles of j-blocks must reconnect to the Hamiltonian cycles of other j-blocks and to the nucleus. The Hamiltonian cycles of the entire atom must fuse into a single large cycle, which can be closed for a neutral atom and would correspond to a closed monopole flux tube starting from the atomic nucleus. Each charge along the cycle contributes to the electric flux flowing in the monopole flux tube.

It has been proposed (see this) that molecular bonds could be interpreted as electric flux tubes. This proposal is discussed from the TGD point of view in \cite{allb/qcritdark3}. If the atoms of the molecule are ionized, the Hamiltonian cycles of the atoms must reconnect by U-shaped tentacles; ionic bonds, and presumably all chemical bonds, would correspond to flux tubes.

Consider next the mass of the flux tube.

1. Flux tubes connecting neighboring charges could be p-adically scaled electropions with a mass smaller than the 1 MeV mass of the electropion, and would contribute to the mass of the atom. In the case of nuclei, the scaled hadronic pions between nucleons, having masses of order MeV, are replaced by p-adically scaled electropions. Note that electropions have a mass of 1 MeV. In the case of atoms, their scaled variants should have a considerably smaller mass, which would naively correspond to the atomic p-adic length scale and a mass scale of 1-10 keV. Note that 10 keV would be the scale of the proposed nuclear excitation energies supported by nuclear physics X-ray anomalies. One can argue that the mass corresponds to the atomic p-adic length scale L(137) as a natural length scale for the flux tube and would be of order m ∼ keV.
2. On the other hand, one could argue that the mass should be very small because, to my best knowledge, standard atomic physics works very well. However, the additive contribution of these masses does not affect the electronic bound state energies but only the total mass of the system. I do not know whether anyone has studied the possible dependence of the total mass of an atom on the number of electrons.
Does it contain an additive contribution increasing by one unit at each step along a row of the periodic table, as an additional flux tube appears in the Hamiltonian cycle? These contributions could also be interpreted as contributions of the repulsive interactions of the electrons to the energy.

As in the case of nuclei, the atomic flux tubes would act as springs, i.e. harmonic oscillators. This predicts a spectrum of excited states with a scale determined by the elastic constant k, or equivalently by the ground state oscillation frequency ω_0.

1. If ω_0 is large enough, the excitation energies would be greater than the ionization energy and there would be no detectable effects. The naive argument that ω_0 corresponds to the atomic length scale L(137) as a natural length scale for the flux tube gives ℏω_0 ∼ 1 keV. This energy scale would be, for light atoms with Z ≤ 8 (oxygen), larger than the ionization energy E = Z² × 13.7 eV, so that photons causing excitation would cause ionization.
2. An equally naive scaling from the nuclear scale to the atomic scale would suggest that the value of ω_0 is scaled down from ℏω_0 = 1 MeV by the ratio L(113)/L(137) = 2^-12 of the nuclear and atomic length scales, to about ℏω_0 = .25 keV. This is not far from the above estimate.
3. How does one deal with atoms with a small number of electrons, in particular helium with 2 electrons? j = 2 j-blocks are special in the sense that they do not allow a sub-Hamiltonian cycle. Could the flux tube connecting the electrons be absent in this case, so that only the repulsive electronic contribution would be present? Note also that the repulsive interaction energy between the electrons would be smaller than the attractive interaction energy of the electrons for atoms with Z = 2. If this picture is correct, new atomic physics would emerge when a j-block contains more than 2 electrons.

One can also consider the possibility that the coupling to photons is weak enough, perhaps by the condition that the photon must first transform to a dark photon. The behavior of multi-electron atoms in a radiation field whose photons have a low energy must have been studied.

One could also imagine that the flux tubes form a quantum coherent state with h_eff ≥ h, in which there are n = h_eff/h flux tubes forming the sub-tessellation of the Platonic tessellation for a given j-block, with vertices connected by flux tubes. Here n would be the number of electrons in the j-block. The excitation energy E = ℏ_eff ω_0 is scaled by ℏ_eff/ℏ = n.

1. If all the flux tubes associated with an atom were excited at once as a phase transition, the required excitation energy would be rather large for large enough n, and excitation by photons might be possible without ionizing the atom.
2. The atoms at the left end of a row are the problem for this option and, more generally, the atoms at the left end of each j-block. One expects that the flux tube length depends on the value of the principal quantum number N labelling the rows, since the size of the Platonic solid must increase with N like N². Can one assume that the mass of the spring does not depend on the row?
See the article About Platonization of Nuclear String Model and of Model of Atoms or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

## Thursday, October 19, 2023

### Moon and neuroscience

I have already suggested that the gravitational magnetic body of the Moon could play a key role in the model of the nerve pulse. The following model for the communications between the neuronal membrane and the gravitational magnetic body by cyclotron resonance demonstrates that the expectation was correct. The Moon would play a key role in neuroscience! A conformist colleague can hardly imagine a crazier sounding statement and can happily conclude that I am a crackpot after all!

Quantum gravitation favors communications between cell membranes, acting as dark Josephson junctions, and the corresponding MBs carrying dark charged particles. The variations of the membrane voltage modulate the Josephson frequency, and resonant reception codes the variations of the membrane potential into a sequence of pulses.

1. The cyclotron energies

E_c = ℏ_gr ZeB/m = GMZeB/β0 = r_S/(2l_B²β0)

do not depend on the mass m of the charged particle and are therefore universal. The same is true for the gravitational Compton length L_gr = r_S/(2β0) of the particle (r_S denotes the Schwarzschild radius).

2. The Josephson frequencies are given by f_J = ZeV/(2πℏ_gr) and are inversely proportional to the mass of the charged particle. In the case of ions this means 1/A-proportionality and an ordering of the Josephson frequency scales as subharmonics.
3. The frequency resonance condition f_J = E_J/h_gr = f_c = ZeB/m is equivalent to the energy resonance condition

E_J = ZeV_mem = ℏ_gr f_c = r_S/(2l_B²β0) = [r_S/(2β0)] × eB/ℏ .

This condition fixes the relation between the voltage of the Josephson junction and the strength B of the magnetic field:

eB = ZeV_mem × 2Zβ0/r_S .

For V_mem = 0.05 V, Z = 2, r_S = r_S,E = 1 cm and β0 = 1, and using the fact that B = 1 Tesla corresponds to the magnetic length l_B = (h/eB)^(1/2) = 64 nm, this gives B = 184 nT.

It came as a surprise that this field strength is about 10^(-2) times (more precisely, a factor 9.2 × 10^(-3)) the endogenous magnetic field B_end = 0.2 × 10^(-4) Tesla at the surface of the Earth. The strengths of the magnetic fields outside the inner magnetosphere are of order nTesla. Does this mean that the EEG signals from the cell membrane are received by charged particles at the flux tubes of the magnetosphere, for which the field is much weaker than at the surface of the Earth? This is indeed proposed in the model of EEG.

How could one get rid of the problem?

1. The expression for B is proportional to β0 ≤ 1 and to 1/r_S. For the Moon the mass is about 0.01 M_E, so that the value of B would be scaled up by a factor of 100 and would be a factor 0.92 of the nominal value of B_end. As proposed already earlier, the gravitational MB of the Moon could be involved with the dynamics of the cell membrane, and the endogenous magnetic field of Blackman could be assignable to the Moon! The sketch after this list makes the numbers explicit.
2. The proportionality of B to eV_mem allows us to consider the possibility that also DNA involves Josephson junctions. In fact, the TGD inspired model for the Comorosan effect assumes that biomolecules quite generally involve them. By a naive dimensional argument one expects that the value of ZeV is scaled up by a factor of order 100 as one scales the membrane thickness of 10 nm down to 1 Angstrom. This would give B_end for the gravitational flux tubes of the Earth.
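A minimal numerical sketch of the scaling argument of item 1. The inputs are the values quoted above, and the scaling B ∝ 1/r_S ∝ 1/M is the one assumed in the text; the actual Moon/Earth mass ratio is 0.0123, while the round value 0.01 of the text is kept for comparison.

```python
# Sketch of the Moon scaling argument. Inputs are the values quoted in
# the text: B = 184 nT for the Earth's gravitational MB, the endogenous
# field B_end = 0.2 Gauss, and the assumed scaling B ~ 1/r_S ~ 1/M.

B_earth = 184e-9       # T, estimate from the resonance condition above
B_end = 0.2e-4         # T, endogenous magnetic field (Blackman)
mass_ratio = 0.01      # M_Moon/M_Earth as used in the text (actual: 0.0123)

B_moon = B_earth / mass_ratio
print(f"Earth MB: B/B_end = {B_earth / B_end:.1e}")   # ~9.2e-3
print(f"Moon MB : B/B_end = {B_moon / B_end:.2f}")    # ~0.92
```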
The possibility of simultaneous frequency and energy resonance means a universal cyclotron resonance irrespective of the mass of the charged particle. The Josephson frequencies are however inversely proportional to the mass of the charged particle appearing both in the cell membrane and at the receiving flux tube. The resonance mechanism therefore makes it possible to use the same information for receivers with different masses. Each of them generates a different sequence of pulses, at the times for which the modulated Josephson frequency equals the cyclotron frequency, defining a specific kind of information characterized by the scale defined by the Josephson period. The electron mass, the proton mass and the ion masses define characteristic frequency scales. For B_end, the cyclotron frequencies are in the EEG range for ions, which also favours the Moon option.

See the article Some New Aspects of the TGD Inspired Model of the Nerve Pulse or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

### A model for the generation of nerve pulse based on Pollack effect

The following view of what might happen in the generation of the nerve pulse is only one of the many variants that I have imagined over the years, and it can only be defended as the simplest one found hitherto. In this model the Pollack effect for water plays a key role, and the Hodgkin-Huxley model would be simply wrong: ionic currents would not cause the nerve pulse but would be caused by it.

Background observations

Let us consider the following assumptions.

1. The fact that the membrane potential changes sign temporarily but preserves its magnitude suggests that the charge densities associated with the interior and the exterior are changed so that the voltage changes sign. There are many ways to achieve this, and one should identify the simplest mechanism.
2. The Hodgkin-Huxley model for the nerve pulse involves dissipation. The nerve pulse could be generated as a failure of gravitational quantum coherence. This could also make possible Ohmic currents between the axonal interior and exterior, but this, and even the loss of quantum gravitational coherence, might not be necessary. This is mildly suggested by the model of the nerve pulse based on Josephson junctions, in which the pulse corresponds to a temporary change of the direction of rotation for the analogs of gravitational pendula.
3. In the Hodgkin-Huxley model the notions of channels and pumps are of course central for recent biology. There are however puzzling observations challenging these notions and suggesting that the currents between the cell interior and exterior have a quantum nature and are universal in the sense that they do not depend on the cell membrane at all. One of the pioneers in the field has been Gilbert Ling, who has devoted more than three decades to the problem, developed ingenious experiments, and written several books about the topic. The introduction of the book "Cells, Gels and the Engines of Life" by Gerald Pollack gives an excellent layman summary of the paradoxical experimental results. I have discussed these findings also from the TGD point of view (see this).
4. In the TGD framework the Pollack effect (PE) could induce the membrane potential, and PE and its reversal (RPE) could be important. In the model to be discussed this is the case, and the model differs dramatically from the Hodgkin-Huxley model in that the ionic currents do not cause the nerve pulse but are caused by it.

The model of nerve pulse based on Pollack effect and its reversal

The simplest model for the generation of the nerve pulse is based on PE and RPE. In the following I will talk about the neuronal interior (NI) and the neuronal exterior (NE).

1. A sol-gel phase transition is known to accompany the nerve pulse. This suggests that PE and RPE are involved. PE transforms the sol phase to the gel phase and generates a negatively charged exclusion zone (EZ).

The TGD based model for PE involves the transformation of protons of water molecules to dark protons at the MB of the system, which has a large size, so that the region of water becomes a negatively charged EZ and transforms to a gel phase generating a potential. Since the flux tubes of the gravitational MB have a much larger size than the system, the protons/ions are effectively lost from the system.

This corresponds to a polarization, but not in the usual sense. Rather, the ends of the dipole correspond to the EZ and the MB. The charge separation is not between NI and NE but between NI (NE) and its MB.

2. An open question is whether PE could generalize also to other positively charged, biologically important atoms, which would become dark ions assignable to the MB and leave behind electrons.
3. PE can take place for the water in NI. The transfer of charges to the MB could also occur for the axonal microtubules, but this transfer might be involved with the control of the cell membrane and the neuronal membrane; for instance, microtubules could control the generation of the nerve pulse.
4. The simplest model for how PE and RPE could be involved with nerve pulse generation is as follows. Before the nerve pulse the water in NI (near the membrane) forms a negatively charged EZ, since the dark protons are at the MB outside the system. The water in NE is in the gel phase and neutral. The negative charge of the EZ gives rise to the membrane potential, and ionic charges could give only small corrections to it.
5. The dark protons tend to transform to ordinary protons. A metabolic energy feed is needed to kick them back to the MB. The nerve pulse is generated by the RPE, by stopping the metabolic energy feed for a moment. This induces an RPE as a BSFR ("big" state function reduction). In RPE the dark protons are transformed to ordinary ones and return to the neuronal interior, and a gel→sol phase transition is induced. RPE liberates free energy, which in turn induces PE in NE, and a negatively charged EZ is generated there. The sign of the membrane potential changes. The system is a kind of flip-flop in which RPE induces PE.
6. The reconnection of U-shaped flux tubes at the two sides of the neuronal membrane to form flux tube pairs connecting NI and NE, associated with the ionic channels and pumps acting as Josephson junctions, would make possible an almost dissipation-free transfer of the energy liberated in RPE to the opposite side of the membrane. The transfer of the liberated energy as radiation from NI to NE and from NE to NI takes place along flux tube pairs associated with different membrane proteins, that is channels and pumps, which would therefore be channels for radiation rather than for ions. Ionic Ohmic currents could be caused by the reversal of the membrane potential rather than causing it.
7. Contrary to the original guess, the nerve pulse would involve 4 BSFRs. An RPE in NI reduces the membrane potential V_i to V = 0, liberating energy that generates a PE in NE and changes the sign of the membrane potential: V = 0 → -V_i. This PE is followed by an RPE taking V = -V_i to V = 0 and liberating energy that generates a PE in NI, so that V = 0 is transformed to V = V_i and the situation returns back to the original. The times of the BSFRs, and of the changes of the arrow of time, correspond to V = 0, V = -V_i, V = 0 and V = V_i.
8. What could be the role of the microtubules? The quantum critical dynamics of the axonal microtubules would make them ideal control tools for the dynamics at the level of the cell membrane, in particular controllers of nerve pulse generation and conduction. An attractive assumption is that the gravitational MBs of the microtubules carry dark charges. Also the MBs associated with the cell exterior and the inner and outer lipid layers could carry dark charges. Due to the large size of the gravitational flux tubes, the charges transferred to the MBs (at least the microtubular MB) are effectively outside the neuronal interior (NI) and exterior (NE), so that the charges of NI and NE are affected. This could bring the membrane potential below the threshold for the generation of the nerve pulse by the proposed mechanism. The MB would be the boss using microtubules as control tools, and water would do the hard work.
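The sequence of item 7 can be summarized as a simple flip-flop. The following toy bookkeeping only steps through the four BSFRs; it is purely illustrative and not a dynamical model.

```python
# Toy bookkeeping for the flip-flop of item 7: one nerve pulse would
# correspond to four BSFRs stepping the membrane potential through
# Vi -> 0 -> -Vi -> 0 -> Vi. Illustrative only, not a dynamical model.

Vi = 1.0   # membrane potential in units of its resting magnitude

sequence = [
    ("BSFR 1: RPE in NI", 0.0),
    ("BSFR 2: PE in NE", -Vi),
    ("BSFR 3: RPE in NE", 0.0),
    ("BSFR 4: PE in NI", Vi),
]

V = Vi
print(f"start             V = {V:+.1f}")
for event, V in sequence:
    print(f"{event:17s} V = {V:+.1f}")
```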
See the article Some New Aspects of the TGD Inspired Model of the Nerve Pulse or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

## Monday, October 16, 2023

### Some New Aspects of the TGD Inspired Model of the Nerve Pulse

During the year 2023 considerable progress in the understanding of the TGD inspired model of nerve pulses has taken place.
1. Nerve pulses relate closely to the communications from the cell membranes to the magnetic body (MB) of the system using dark, frequency modulated Josephson radiation, inducing at the MB a sequence of cyclotron resonances serving as control signals and eventually giving rise to nerve pulse patterns. This would generalize the "right brain sings - left brain talks" metaphor. Also the model of meV spikes appearing in preneural systems is discussed.
2. Quantum gravitation in the TGD sense can assign the needed huge values of heff to the gravitational magnetic bodies. Quantum gravitational flux tubes assignable to the Sun, the Earth, and perhaps also other planets and even the Moon could be highly relevant for the living cell and the brain.
3. The connection with the microtubular level is considered: the transfer of charged particles between microtubules and the very long gravitational flux tubes assignable to them makes it possible to induce membrane oscillations and even the nerve pulse.
4. Zero Energy Ontology (ZEO) and the Negentropy Maximization Principle (NMP) could allow computers to become effectively living intelligent systems able to reach goals by an analog of a trial and error process. This requires the failure of quantum statistical determinism. This is the case if the gravitational Compton time, defining a lower bound for the gravitational quantum coherence time, is longer than the clock period of the computer. The MB would play a key role also in the case of living computers, and dark Josephson radiation could serve as a communication tool. Superconducting computers have Josephson junctions as basic active elements and are in this respect more promising than transistor based computers.
5. Also the recent finding that the neuronal system is in a certain sense 11-dimensional is discussed in the TGD framework. The basic observation is that a 12-neuron system, with neurons assignable to the 12 vertices of an icosahedron and defining an 11-D simplex, could be involved. The icosahedron and the tetrahedron appear also in the TGD based model of bioharmony, which serves also as a model of the genetic code.

See the article Some New Aspects of the TGD Inspired Model of the Nerve Pulse or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

## Friday, October 13, 2023

### New insights about Langlands duality

Gary Ehlenberg sent a URL to a very interesting Quanta Magazine article, which discusses work related to the Langlands program.

Langlands duality relates number theory and geometry. On the number theory side one has representations of Galois groups. On the geometry side one has automorphic forms associated with the representations of Lie groups: for instance, in the case of the Lorentz group, automorphic forms in coset spaces of the hyperbolic 3-space H3.

The work could be highly interesting from the TGD perspective. In TGD, M8-H duality generalizes momentum-position duality so that it applies to particles represented as 3-surfaces instead of points. M8-H duality also relates physics as number theory and physics as geometry, much like Langlands duality. The problem is to understand M8-H duality as an analog of Langlands duality.

1. H = M4 × CP2 is the counterpart of position space, and a particle corresponds to a 3-surface in H. Physics as (differential) geometry applies on this side.

The orbit of a 3-surface is a 4-D space-time surface in H, and holography, forced by 4-D general coordinate invariance, implies that space-time surfaces are minimal surfaces irrespective of the action (assumed to be general coordinate invariant and determined by the induced geometry). They would obey a 4-D generalization of holomorphy, and this would imply universality.

These minimal surfaces are also solutions of a nonlinear geometrized version of massless field equations. Field-particle duality has a geometrized variant: a minimal surface represents in its interior massless field propagation and, as an orbit of a 3-D particle, the generalization of a light-like geodesic. Hence the connection with electromagnetism mentioned in the popular article; actually the metric and all gauge fields of the standard model are geometrized by the induction procedure for the geometry.

2. M8, or rather its complexification M8c (the complexification is only with respect to the mass squared as a coordinate, not with respect to the hyperbolic and other angles), corresponds to the momentum space, and here the orbit of a point-like particle in momentum space is replaced with a 4-surface in M8, or actually in its complexification M8c.

The 3-D initial data for a given extension of rationals could correspond to a union of hyperbolic 3-manifolds, a union of fundamental regions for a tessellation of H3 consistent with the extension: a kind of hyperbolic crystal. These spaces relate closely to automorphic functions and L-functions.

On the M8 side, polynomials with rational coefficients partially determine the 3-D data associated with the number theoretical holography.
The number theoretical dynamical principle states that the normal space of the space-time surface in the octonionic M8c is associative, and that the initial data correspond to 3-surfaces at the mass shells H3c ⊂ M4c ⊂ M8c determined by the roots of the polynomial.

3. M8-H duality maps the 4-surfaces in M8c to space-time surfaces in H. On the M8 side one has polynomials. On the geometric H side one naturally has generalizations of periodic functions, since Fourier analysis, or its generalization, is natural for the massless fields which the space-time surfaces geometrize. L-functions represent a typical example of generalized periodic functions. Are the space-time surfaces on the H side expressible in terms of modular functions in H3?

Here one must stop and take a breath. There are reasons to be very cautious! The proposed general exact solution for space-time surfaces as preferred extremals, realizing almost exact holography as analogs of Bohr orbits of 3-D surfaces representing particles, relies on a generalization of 2-D holomorphy to its 4-D analog. The 4-D generalizations of holomorphic functions (see for instance this) assignable to 4-surfaces in H do not correspond to modular forms in the 3-D hyperbolic manifolds assignable to the fundamental regions of tessellations of the hyperbolic 3-space H3 (analogs of lattice cells in E3). Fermionic holography reduces the description of fermion states to wave functions at the mass shells H3 and at their images in H under M8-H duality, which are also hyperbolic 3-spaces.
1. This brings the modular forms of H3 naturally into the picture. Single fermion states correspond to wave functions in H3 instead of E3 as in the standard framework, replacing the infinite-D representations of the Poincare group with those of SL(2,C). The modular forms defining the wave functions inside the fundamental region of a tessellation of H3 are analogs of the wave functions of a particle in a box satisfying periodic boundary conditions, which make the box effectively a torus. Now the torus is replaced with a hyperbolic 3-manifold. The periodicity conditions code invariance under a discrete subgroup Γ of SL(2,C) and mean that H3 = SL(2,C)/SU(2) is replaced with the double coset space Γ\SL(2,C)/SU(2).

The number theoretical vision makes this picture more precise and suggests ideas about the implications of the TGD counterpart of Langlands duality.

2. The number theoretical approach restricts complex numbers to an extension of rationals. The complex entries of the SL(2,C) and SU(2) matrices are restricted to an extension F of rationals, giving discrete subgroups SL(2,F) and SU(2,F), where F is the extension associated with the polynomial P defining the number theoretical holography in M8, which induces the holography in H by M8-H duality. The group Γ defining the periodic boundary conditions must consist of matrices in SL(2,F).
3. The modular forms in H3 as wave functions are labelled by parameters analogous to momenta in the case of E3: in the case of E3 they characterize infinite-D irreducible representations of SL(2,C) as the covering group of SO(1,3), with partial waves labelled by angular momentum quantum numbers and spin, and by the analog of angular momentum associated with the hyperbolic angle (known as rapidity in particle physics), i.e. the infinitesimal Lorentz boost in the direction of the spin axis.

The irreps are characterized by the values of a complex valued Casimir element of SL(2,C), quadratic in the 3 generators of SL(2,C), or equivalently by the two real Casimir elements of SO(1,3).
Physical intuition encourages the shy question whether the second Casimir operator could correspond to the complex mass squared value defining the mass shell in M8c; this value belongs to the extension of rationals, being a root of P.

The construction of the unitary irreps of SL(2,C) is discussed in the Wikipedia article. The representations are characterized by a half-integer j0 = n/2 and a purely imaginary number j1 = iν, with ν real.

The values of j0 and j1 must be restricted to the extension of rationals associated with the polynomial P defining the number theoretic holography.

4. The Galois group of the extension acts on these quantum numbers. The angular momentum quantum numbers are quantized already without number theory and are integers, but the action on the hyperbolic momentum is of special interest. The spectrum of the hyperbolic angular momentum must consist of a union of orbits of the Galois group, and one obtains Galois multiplets. The Galois group generates from an irrep with a given value of j1 a multiplet of irreps.

A good guess is that the Galois action is central for M8-H duality as a TGD analog of the Langlands correspondence. The Galois group would act on the parameter space of the modular forms in Γ\SL(2,F)/SU(2,F), F an extension of rationals, and give rise to multiplets formed by the irreps of SL(2,F).

To sum up, M8-H duality is a rather precisely defined notion.
1. On the M8 side one has polynomials and their roots, and on the H side one has automorphic functions in H3, with "periods" interpreted as quantum numbers. What first came to my mind was that the understanding of M8-H duality boils down to the question of how the 4-surfaces given by number theoretical holography, as associativity of the normal space, relate to those given by holography (that is, generalized holomorphy) in H.
2. However, it seems that the problem should be posed in the fermionic sector. Indeed, above I have interpreted the problem as a challenge to understand what constraints the Galois symmetry on the M8 side poses on the quantum numbers of the fermionic wave functions in the hyperbolic manifolds associated with H3. I do not know how closely this problem relates to the problem that Ben-Zvi, Sakellaridis and Venkatesh have been working with.
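As a concrete toy illustration of the number theoretic input, the roots of a polynomial with rational coefficients define the candidate (complex) mass squared values and hence the mass shells H3c. A minimal numerical sketch; the polynomial below is an arbitrary example, not one singled out by TGD.

```python
# Toy illustration: roots of a rational polynomial as candidate complex
# mass squared values defining mass shells H3_c. Galois-conjugate roots
# appear as a full multiplet. The polynomial is an arbitrary example.

import numpy as np

coeffs = [1, 0, -2, 0, -1]        # P(x) = x^4 - 2x^2 - 1
roots = np.roots(coeffs)

for r in roots:
    kind = "real" if abs(r.imag) < 1e-9 else "complex"
    print(f"m^2 candidate: {r:.4f}  ({kind})")
```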
See for instance the article Some New Ideas Related to Langlands Program viz TGD.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

## Thursday, October 12, 2023

### Do cosmic strings with large string tension exist?

There is some empirical support from gravitational lensing for cosmic strings with a rather large string tension. The cosmic string tension T and the string deficit angle Δθ for lensing are related by the formula Δθ = 8π × TG, if general relativity is assumed to be a good description. The value of TG deduced from the data is TG = 0.05; this is very large and corresponds to an angle deficit Δθ ≈ 1.

For the ordinary value of the Planck constant, TGD predicts an upper bound for TG in the range 10^(-7)-10^(-6). The flat velocity spectrum of distant stars around galaxies determines the value of TG: one has v² = 2TG from the Kepler law, so that the value of TG is determined by the measured value of the velocity v. The value of TG can also be deduced from the energy density of the cosmic string-like objects predicted by TGD and is consistent with this estimate.

If one takes the empirical evidence for a large value of TG seriously, one must ask whether TGD can explain the claimed finding.

Could a large value of heff solve the discrepancy? The string tension T, as the linear energy density of the cosmic string, is determined by the sum of the Kähler action and the volume term. The contribution of the Kähler action to T is proportional to 1/αK, with αK = gK²/(4πℏ). If the cosmic string represents dark matter in the TGD sense, one must make the replacement ℏ → ℏeff, so that the Kähler contribution to T is proportional to ℏeff/ℏ. If the two contributions are of the same order of magnitude, or if the Kähler contribution dominates, ℏeff/ℏ = n ≈ 10^5 would give the needed large value of TG. The physical interpretation would be that the cosmic string is an n-sheeted structure, with each sheet giving the same contribution, so that the value of T is scaled up by n ≈ 10^5. There are two options: the n-sheetedness is with respect to M4, so that one has an n-fold covering of M4, or with respect to CP2, in which case one has a quantum coherent structure consisting of n parallel flux tubes. The sketch below spells out the numbers.

It is interesting to consider in more detail the quantum model for particles in the gravitational field of a cosmic string.

1. The gravitational field of a straight cosmic string behaves like 1/ρ as a function of the radial distance ρ from the string, and the Kepler law predicts a constant velocity v² = 2TG for circular orbits irrespective of their radius. This explains the flat velocity spectrum of stars rotating around galaxies.
2. Nottale proposed that planetary orbits obey Bohr quantization for the value of the gravitational Planck constant ℏ_gr = GMm/β0, assignable to a pair of masses M and m and associated with the gravitational flux tube mediating the gravitational interaction between M and m.
3. If the mass M corresponds to a cosmic string idealized as a straight string of infinite length, the definition of ℏ_gr is problematic since M diverges. Therefore the application of Nottale's quantization to a distant star rotating around a cosmic string is problematic.

What is however clear is that ℏ_gr should be proportional to m by the Equivalence Principle, and one should have ℏ_gr = GM_eff m/β0 for the cosmic string. M_eff = T L_eff, where L_eff is the effective length of the cosmic string, is a reasonable parametrization.

4. The Kepler law does not tell anything about the value of the radius r of the circular orbit. If the value of ℏ_gr is fixed somehow, one can apply the Bohr quantization condition ∮ pdq = nh_gr of angular momentum to circular orbits to obtain vr = nGM_eff/β0, giving

r_n = n r_1 ,
r_1 = r_S,eff/[2(2TG)^(1/2)β0] .

A reasonable guess is that β0 and the rotation velocity v/c = (2TG)^(1/2) have the same order of magnitude. v/c = xβ0 < 1 would give β0 = (2TG)^(1/2)/x. The minimal value of the orbital radius would be r_1 = r_S,eff/[2xβ0²].

An interesting question relates to the size scale of the n-sheeted structure interpreted as a covering of CP2 by parallel cosmic strings or flux tubes. The gravitational Compton length Λ_gr = r_S,eff/(2β0) could give an estimate for the size scale of this structure, which as a flux tube bundle would naturally be 2-D. There would be about 10^5 flux tubes per gravitational Compton area of scale Λ_gr.
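A minimal numerical sketch of the numbers in this posting. The inputs are the values quoted above; the relations used are the deficit angle formula, the Kepler law v² = 2TG, and the assumed linear scaling of the Kähler contribution to T with n = heff/h.

```python
# Order-of-magnitude bookkeeping for the cosmic string discussion.
# Inputs are the values quoted in the text.

import math

TG_lensing = 0.05     # value claimed from gravitational lensing
TG_tgd = 1e-6         # upper end of the TGD prediction for ordinary hbar

print(f"deficit angle: {8 * math.pi * TG_lensing:.2f} rad")          # ~1.26
print(f"flat rotation velocity v/c = {math.sqrt(2 * TG_tgd):.1e}")   # ~1.4e-3
print(f"required n = heff/h ~ {TG_lensing / TG_tgd:.0e}")            # ~1e5
```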
See the article Magnetic Bubbles in TGD Universe: Part I or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

## Wednesday, October 11, 2023

### Evidence for the TGD view of quasars

I learned of an extremely interesting discovery providing additional support for the TGD view of quasars and galaxy formation (see this). Here is the abstract of the article published in Nature.

Quasars feature gas swirling towards a supermassive black hole inhabiting a galactic centre. The disk accretion produces enormous amounts of radiation from optical to ultraviolet (UV) wavelengths. Extreme UV (EUV) emission, stemming from the energetic innermost disk regions, has critical implications for the production of broad emission lines in quasars, the origin of the correlation between linewidth and luminosity (or the Baldwin effect) and cosmic reionization.

Spectroscopic and photometric analyses have claimed that brighter quasars have on average redder EUV spectral energy distributions (SEDs), which may, however, have been affected by a severe EUV detection incompleteness bias.

Here, after controlling for this bias, we reveal a luminosity-independent universal average SED down to a rest frame of ≈ 500 Å for redshift z ≈ 2 quasars over nearly two orders of magnitude in luminosity, contrary to the standard thin disk prediction and the Baldwin effect, which persists even after controlling for the bias.

Furthermore, we show that the intrinsic bias-free mean SED is redder in the EUV than previous mean quasar composite spectra, while the intrinsic bias-free median SED is even redder and is unexpectedly consistent with the simply truncated wind model prediction, suggesting prevalent winds in quasars and altered black hole growth. A microscopic atomic origin is probably responsible for both the universality and redness of the average SED.

What does TGD say?

1. In the standard accretion disk theory the inner luminosity is determined by the mass of the accretion disk falling into the black hole. What is however found is that the spectral energy distribution of the light from the quasar does not depend on the inner luminosity at all in the extreme UV (EUV) range! It can even decrease when the intrinsic luminosity increases! These paradoxical findings challenge the standard accretion disk theory.
2. The TGD based view of quasars (see for instance this, this, this, and this) suggests an explanation of the anomaly. The galactic matter would be formed from the dark energy and dark matter of a cosmic string like object: the string thickens to a monopole flux tube with a smaller string tension and emits dark particles, which transform to the ordinary matter forming the galaxy. The cosmic strings would be transversal to the galactic plane, and the gravitational field created by their dark energy predicts the flat velocity spectrum of stars around galaxies.
3. The flow of radiation from the thickened flux tube (rather than from the energy liberated as the matter of the accretion disk falls into the black hole) would give rise to the spectral energy distribution in the EUV, and the inner luminosity at longer wavelengths would be determined by the accretion disk emission.
Also the article suggests that a galactic wind explains the energy spectrum: the galactic wind would correspond to this EUV radiation from the monopole flux tube. This energy spectrum would be universal in the sense that it would reflect only the properties of the thickening cosmic string, and universality is indeed claimed.

The model of the quasar as a portion of a cosmic string thickened to a flux tube tangle, emitting dark energy and matter that transforms to ordinary matter, challenges the standard model of the quasar as a black hole. The outflowing matter would create an accretion disk as a kind of traffic jam, and at least part of the luminosity of the accretion disk would be due to heating caused by the flow of particles colliding with the accretion disk. Also now the gravitational field of the cosmic string, and of the flux tube tangle associated with it, is present, and a natural classical expectation is that the matter in the accretion disk tends to flow back to the quasar.

In atomic physics quantization prevents the fall of the electron into the atomic nucleus. Could the same happen now and prevent the fall of matter from the accretion disk back to the quasar?

1. One can argue that a realistic quantum model for the matter around the quasar is based on treating the flux tube tangle as a spherically symmetric mass distribution with the mass of the black hole assigned to the quasar. Indeed, the straight portions of cosmic strings give a large contribution to the gravitational force only at large distances, so that the contribution of the tangle dominates.
2. The mechanism preventing the fall of matter into the black hole would be identical with that in the case of atoms. Also in the accretion disk model, the angular momentum of the rotating matter in the accretion disk tends to prevent the fall into the black hole, and the angular momentum must be transferred away.
3. The orbital radii would be given by the Nottale model for planetary orbits with r_n = n²a_gr, where a_gr = 4πGM/β0² = 2πr_S/β0² is the gravitational Bohr radius. The ratio M/M_Sun of the mass M of the quasar black hole to the solar mass is estimated to be in the range [10^7, 3 × 10^9], predicting that the Schwarzschild radius r_S is in the range 3 × 10^7-10^10 km. The radius r_acc of the accretion disk should be larger than a_gr: a_gr < r_acc. Note that the size of the accretion disk is in some cases estimated to be a few light-days (1 light-day ≈ 2.6 × 10^10 km), whereas the visible size of the quasar is measured in light-years.
4. The condition a_gr < r_acc gives the condition 2π/β0² < r_acc/r_S, giving a lower bound for β0 in the range β0 ∈ [0.02, 0.2]. The values of β0 in this range are considerably larger than the value β0 ≈ 2^(-11) predicted by the Bohr model for the orbits of the inner planets. Note that for the Earth the estimate for β0 is β0 ≈ 1.
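A quick numerical check of item 3, using a_gr = 2πr_S/β0² as given above, r_S ≈ 3 km per solar mass, and an arbitrary representative value β0 = 0.1 from the quoted range:

```python
# Check of the gravitational Bohr radius estimate in item 3. The formula
# a_gr = 2*pi*r_S/beta0^2 is the one quoted in the text; beta0 = 0.1 is
# an arbitrary representative value from the quoted range.

import math

KM_PER_MSUN = 3.0          # Schwarzschild radius per solar mass, km
LIGHT_DAY_KM = 2.6e10

beta0 = 0.1
for M_sun in (1e7, 3e9):   # quasar black hole masses in solar masses
    r_S = KM_PER_MSUN * M_sun
    a_gr = 2 * math.pi * r_S / beta0**2
    print(f"M = {M_sun:.0e} Msun: a_gr = {a_gr:.1e} km "
          f"= {a_gr / LIGHT_DAY_KM:.1f} light-days")
```

For the heaviest black holes the Bohr radius exceeds a few light-days unless β0 is larger, which is the content of the bound in item 4.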
See the article Magnetic Bubbles in TGD Universe: Part I or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

## Monday, October 09, 2023

### About uniform tessellations of hyperbolic, Euclidean, spherical space

It has become clear that 3-D hyperbolic tessellations are probably very important in the TGD framework. The 3-D hyperbolic space H3 is realized as a mass shell or as a cosmic-time-constant hyperboloid, and an interesting conjecture is that particles with a fixed value of mass squared are associated with the vertices of a hyperbolic tessellation. This would give rise to a quantization of momenta consistent with the number theoretic discretization. Hyperbolic tessellations could also appear in cosmological scales. The so-called icosatetrahedral tessellation seems to provide a model of the genetic code and DNA, and suggests that the genetic code is not restricted to biology but is universal and realized in all scales.

Uniform tilings/tessellations/honeycombs can be identified by their vertex configuration, given as a list n1.n2.n3... of the numbers ni of vertices of the regular polygons meeting at a vertex. Uniform tilings can be regular, meaning that they are vertex-, edge- and face-transitive. Quasi-regular tessellations are vertex- and edge-transitive, and semiregular tessellations are only vertex-transitive. Non-regular tessellations give, for instance, the Archimedean solids obtained from the Platonic solids by operations like truncation. In this case, the vertices are obtained from each other by symmetries, but not the faces, which need not be identical anymore.

There exists an extremely general construction of hyperbolic, Euclidean, and spherical tessellations, which works at least in the 2- and 3-D cases, known as the Wythoff construction. In the 2-D case this construction is based on the so-called Schwarz triangles associated with a fundamental region of the tessellation, and in the 3-D case on so-called Goursat tetrahedra. A natural generalization is that in n dimensions one has n-simplices. One would have what topologists call a triangulation, which is very special in the sense that it utilizes the symmetries of the tessellation. These very special simplices are also consistent with the number theoretical constraints, since the angles between the (n-1)-faces correspond to angles defined by roots of unity.

In the 2-D case, the angles between the edges of the fundamental triangle are rational multiples of π, so that the cosines and sines of the angles are algebraic numbers, which is natural for a tessellation whose points, in natural coordinates (momenta), have components in an algebraic extension of rationals. In the 2-D case, the fundamental triangle is obtained by drawing lines from the center point of the 2-D unit cell, say a regular polygon, to its vertices. In the 3-D case, the same is done for the 3-D unit cells of the fundamental region. Note that a tessellation can have several different types of unit cells, and this is indeed true in the case of the icosatetrahedral tessellation.

#### 2-dimensional case

In the 2-D case, the angles between the edges of the triangle are given as (1/p, 1/r, 1/s)-multiples of π, where p, r, and s are the orders of the discrete rotation groups assignable to the vertices. The symmetry group is generated by the reflections s_i with respect to the edges of the triangle, in one-to-one correspondence with the opposite vertices. They satisfy the conditions s_i² = 1 as reflections; the reflections s_i and s_j commute for j ≠ i ± 1, and s_i s_{i+1} generates a rotation with respect to the third vertex of the triangle, with the order determined by one of the numbers p, r, s. The conditions can be summed up as s_i² = 1 and (s_i s_j)^(m_ij) = 1, with m_ij = 2 for j ≠ i ± 1 and m_ij > 2 for j = i ± 1.

The conditions can be expressed in a concise way by using Coxeter-Dynkin diagrams having 3 vertices connected by edges.
For m_ij = 2 there is no edge, and for m_ij > 2 there is an edge together with a number telling the order of the cyclic group in question.

All these 3 spaces are constant curvature spaces with positive, vanishing, or negative curvature, which is reflected in the angle sum of the geodesic Schwarz triangle (note that these spaces also occur in cosmology). In the spherical case, the angle sum is larger than π and one has 1/p + 1/r + 1/s > 1. In the Euclidean case, the sum of the angles of the Schwarz triangle is π, which gives the condition 1/p + 1/r + 1/s = 1. In the hyperbolic case, the angle sum is smaller than π and one has 1/p + 1/r + 1/s < 1. Note that in the hyperbolic plane the angles of an infinitely large Schwarz triangle can vanish (an ideal triangle).

For the 2-sphere, these conditions give only the Platonic solids as regular (vertex- and face-transitive) tessellations (no overlap between triangles). For the plane, the non-compactness implies that the conditions are not as restrictive as for the sphere. The most symmetric tessellations are the regular tessellations: they involve only one kind of polygon and are vertex-, edge-, and face-transitive. For the Euclidean plane, there are regular tessellations by triangles, squares, and hexagons. If one weakens the transitivity conditions to, say, vertex-transitivity, more tessellations are possible, involving different kinds of regular polygons.

The Wikipedia article about the uniform tilings of the hyperbolic plane gives a good overall view of the uniform tessellations of the hyperbolic plane. For the hyperbolic tessellations, the conditions are the least restrictive. Intuitively, this is due to the fact that the angle sum can be small, which allows small angles between edges and more degrees of freedom at the vertices. For spherical tessellations, the situation is just the opposite. Uniform tilings of the hyperbolic plane H2 are by definition vertex-transitive and have a constant distance between neighboring vertices. This condition is physically natural and would correspond to a mechanical equilibrium in which the vertices are connected by springs with the same string tension. Each symmetry (p, r, s) allows 7 uniform tilings characterized by a Wythoff symbol or a Coxeter diagram. These tilings, in general, contain several kinds of geodesic polygons. The families with r = 2 (right triangles) contain the hyperbolic regular tilings.
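The angle-sum criterion is easy to state as code. A minimal classifier (the function name is mine), using exact rational arithmetic:

```python
# Classifier for the geometry of a Schwarz triangle (p, r, s) based on
# the angle-sum criterion above: 1/p + 1/r + 1/s is > 1 for spherical,
# = 1 for Euclidean and < 1 for hyperbolic geometry.

from fractions import Fraction

def schwarz_geometry(p, r, s):
    total = Fraction(1, p) + Fraction(1, r) + Fraction(1, s)
    if total > 1:
        return "spherical"
    return "Euclidean" if total == 1 else "hyperbolic"

print((2, 3, 5), schwarz_geometry(2, 3, 5))   # spherical (icosahedral symmetry)
print((2, 4, 4), schwarz_geometry(2, 4, 4))   # Euclidean (square tiling)
print((2, 3, 7), schwarz_geometry(2, 3, 7))   # hyperbolic
```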
#### The 3-dimensional case

There is a Wikipedia article about uniform tessellations/honeycombs in the 3-D case, obtained by the Wythoff construction, which generalizes the 2-D case. The Schwarz triangle is replaced by the Goursat tetrahedron, and the reflections are now with respect to the tetrahedral faces opposite to the vertices of the tetrahedron, so that there are 4 reflections s_i satisfying s_i² = 1 and (s_i s_j)^(m_ij) = 1, with m_ij = 2 for j ≠ i ± 1. The cyclic subgroups act as rotations of the faces meeting at the edges, and the angles defining the cyclic groups are dihedral angles. There are 9 compact Coxeter groups, and they define uniform tessellations with a finite fundamental domain. What is interesting is that the cyclic subgroups involved do not have an order larger than 5.

The conditions are expressible in terms of a Coxeter-Dynkin diagram with 4 vertices. The 2-D conditions are satisfied for the Schwarz triangles defining the faces of the tetrahedron. Besides the angle parameters defining the triangular faces of the tetrahedron, there are angle parameters defining the angles between the faces. All these angles are rational multiples of π and define subgroups of the symmetries of the tessellation. What is so beautiful is that the construction generalizes to higher dimensions and is recursive/hierarchical.

The hyperbolic character of the geometry allows Schwarz triangles and Goursat tetrahedra which in the Euclidean case would not be possible due to the condition that the edges have the same length and the faces have the same area.

#### Could hyperbolic, Euclidean, and spherical tessellations be realized in TGD space-time?

An interesting question is whether the hyperbolic, Euclidean, and spherical tessellations could be realized in the TGD framework as induced 3-D geometry, or rather as a slicing of the space-time surface by a time parameter such that each slice locally represents a hyperbolic, Euclidean or spherical geometry allowing the tessellation.

Hyperbolic tessellations can be realized on the cosmic-time-constant hyperboloids, and Euclidean tessellations on the Minkowski-time-constant hyperplanes of M4, and possibly partially on 3-surfaces which have the hyperbolic 3-space as their M4 projection.

The question boils down to the construction of a model of Robertson-Walker cosmology for which the induced metric of an a = constant 3-surface is that of H3, E3, or S3, corresponding to cosmologies with subcritical, critical and overcritical mass densities. The metric of H3 is proportional to the scale factor a². The simplest ansatz uses a geodesic sphere S2 ⊂ CP2 with metric

ds² = -R²(dθ² + sin²(θ)dΦ²) .

The ansatz (sin(θ) = a/a0, Φ = f(r)) gives in Robertson-Walker coordinates the induced metric

ds² = [1 - R²(dθ/da)²] da² - a²[(1/(1+r²) + (R/a0)²(df/dr)²) dr² + r²dΩ²] .

This gives the flat metric of E3 if the condition

(df/dr)² = (a0/R)² r²/(1+r²)

holds. This condition can be satisfied for all values of r.

For the S3 metric one obtains the condition

(df/dr)² = (a0/R)² 2r²/(1-r⁴) .

r = 1 corresponds to a singularity. For r = 1, one has r_M = ar = a, which gives t = 2^(1/2)a. One can construct S3 by gluing together the hemispheres corresponding to the 2 roots for df/dr, so that it seems that one obtains the tessellations. The divergence of df/dr tells that the half-spheres become orthogonal to H3 at the gluing points.

For both the E3 and the S3 option, the component g_aa of the induced metric is equal to

g_aa = 1 - (R/a0)²/(1-(a/a0)²) .

g_aa diverges at a = a0, so that the cosmic time would run infinitely fast. g_aa changes sign at a = a0, so that for a > a0 the signature of the induced metric becomes Euclidean. Unless one allows Euclidean signature in long scales, one must assume a < a0. Note that the action is defined as the sum of the Kähler action and the volume term. If S2 corresponds to the homologically trivial geodesic sphere of CP2, the action reduces to the volume action for these surfaces. The densities of the Noether currents for the volume action vanish at a = a0, since they are proportional to the factor (g_aa)^(1/2)g^aa and thus approach zero like [1-(a/a0)²]^(1/2). This is true also for the contribution of the Kähler action present for the homologically non-trivial geodesic sphere of CP2. Very probably, this surface is not a minimal surface although the volume is finite. This is suggested by the fact that the volume element grows, relative to the hyperbolic volume element that would give the minimal volume, as the parameter a increases.
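The flatness condition can be checked mechanically. A small sympy sketch, solving for (df/dr)² from the requirement that the radial component of the induced 3-metric takes the flat form (X stands for (df/dr)²):

```python
# Sympy check of the E3 condition above: the radial component of the
# induced 3-metric (in units of a^2) is 1/(1+r^2) + (R/a0)^2 (df/dr)^2,
# and demanding the flat value 1 reproduces the quoted (df/dr)^2.

import sympy as sp

r, a0, R = sp.symbols('r a0 R', positive=True)
X = sp.symbols('X', positive=True)          # X = (df/dr)^2

g_rr = 1/(1 + r**2) + (R/a0)**2 * X
sol = sp.solve(sp.Eq(g_rr, 1), X)[0]
print(sp.simplify(sol))                     # a0**2*r**2/(R**2*(r**2 + 1))
```

Replacing the target value 1 by 1/(1-r²) gives the quoted S3 condition in the same way.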
See the chapter More about TGD and Cosmology.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

## Saturday, October 07, 2023

### The realization of the notions of assembly and tensegrity in the TGD Universe

In the TGD framework one ends up with an amazingly simple engineering principle, resembling so-called assembly theory and applying to atoms, nuclei, and hadrons (see this). Since the TGD Universe is fractal, this principle is expected to apply in all scales.
1. The considerations of the above article relate closely to the observation that a j-block, consisting of the parts of the electron shells of atoms or of the nucleon shells of nuclei with a fixed value of the total angular momentum j = l ± 1/2, with l up to 9 at least, corresponds to a Platonic solid for l ≤ 5 in the sense that the different angular momentum eigenstates correspond to the vertices of the Platonic solid. If one assumes the presence of a Hamiltonian cycle going through all V vertices of the Platonic solid, regarded as a tessellation of the sphere, one has F-2 free edges (F is the number of faces) besides the V edges of the cycle, and one can also add particles to the middle points of the free edges. In the proposed model of atomic nuclei, one would have neutrons at the vertices and protons at the middle points, or vice versa. Also the larger values of l appearing in highly deformed nuclei can be treated in the same way. If the unit of angular momentum increases to heff = nh, also these states can be assigned a Platonic solid.
2. The space-time surfaces assignable to all atoms, nuclei, and hadrons can be constructed by connecting the electrons, nucleons, or quarks at the vertices of the Platonic solid, or at the middle points of the free edges, with flux tubes serving as analogs of springs stabilizing the structure and having an interpretation as analogs of mesons. Tensegrity is the appropriate notion here.
3. In the case of hadrons, the predictions of the resulting mass formulas are satisfied within a few percent. This involves the predictions of the TGD based mass calculations for fermion masses based on p-adic thermodynamics. This leads to an interpretation of the non-perturbative aspects of the strong interaction in terms of a dark variant of the weak interactions, for which perturbation theory converges! The basic problem of QCD disappears in the TGD Universe. The same would apply to nuclear strong interactions, but the meson-like particles would have different p-adic length scales.

TGD relates color symmetries to the isometries of CP2 and electroweak symmetries to the holonomies of CP2, so that a very close relationship between these interactions must exist. One can say that a unification of strong and weak interactions takes place, analogous to that provided by Maxwell's electrodynamics for electric and magnetic fields. For a given p-adic length scale (several fractally scaled variants of hadron physics are predicted) one can regard mesons as the weak bosons, predicted by TGD to have an entire spectrum of exotics. For this there is already support (see this, this and this). Ordinary hadron physics would correspond to dark weak interactions for the p-adic length scale defined by the Mersenne prime M107, and weak interactions to hadron physics for M89!
4. In the case of nuclei, the MeV scale for the excitation energies is correctly predicted, and also a new 10 keV scale, supported by various anomalies of nuclear physics, is predicted. Besides this, also the Z⁰ force is predicted to be significant, and atom-like structures involving it, with a size scale of 10 nm, which is a fundamental scale in biology, are predicted.

The j-blocks, consisting of energy degenerate states with fixed angular momentum j and 2j+1 states, have as space-time correlates Platonic solids with a Hamiltonian cycle as a closed flux tube, a nuclear string connecting the vertices of the solid.

5. In atomic physics the same picture applies, and it led to the realization that in the standard model the repulsive classical interaction energy of the electrons grows like Z⁴, whereas the attractive interaction energy with the nucleus grows like Z²! The question is whether quantum mechanics can really guarantee the stability of many-electron atoms, or whether this is just an assumption. In the TGD framework, the flux tubes would stabilize the atoms with several electrons. This predicts new atomic physics related to the oscillations of the flux tubes, which in nuclear physics give a justification for the harmonic oscillator model of the nucleus.

See the article Neil Gersching's vision of self-replicating robots from TGD point of view or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD. For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

## Tuesday, October 03, 2023

### Vision about unification of strong and weak interactions

The considerations of the article About Platonization of Nuclear String Model and of Model of Atoms inspire a unified vision about strong and weak interactions.

1. At the level of H = M4 × CP2, the color group SU(3), acting as the isometries of CP2, would describe the perturbative aspects of the color interaction and give rise to color confinement. The non-perturbative aspects of strong interactions would correspond to the holonomy group of CP2 and to weak interactions mediated by weak bosons which are either dark or p-adically scaled variants of the ordinary weak bosons, massless below the scaled-up Compton length.

The large value of heff would make the perturbation theory for these weak interactions convergent (see this). Strong isospin can be identified as weak isospin. Both the p-adic and the heff hierarchies of length scales are required in the proposed vision.

2. At the level of M8 = M4 × E4, SU(3) corresponds to a subgroup of octonionic automorphisms, and U(2) could be identified as a subgroup of isometries leaving invariant the number theoretic inner product in E4. This inspired the proposal that strong isospin corresponds to U(2) and that hadron-parton duality basically corresponds to M8-H duality.

This picture explains various poorly understood aspects of strong interactions.

1. In the good old times, when strong interactions were not yet "understood" and it was also possible to think instead of merely computing, strange connections between strong and weak interactions were observed. The conserved vector current hypothesis (CVC) and the partially conserved axial current hypothesis (PCAC) were formulated, and successful quantitative predictions emerged.

Strong isospin is equal to weak isospin for nucleons, but heavier quarks did not fit the picture.
The (c,s) and (t,b) doublets were assigned quantum numbers such as strangeness and charm, which are not quantum numbers of weak interactions.

When perturbative QCD became the dominating science industry, low energy hadron physics was forgotten. Lattice QCD was thought to describe hadrons, but the successes were rather meager. Lattice QCD even has mathematical problems, such as the description of quarks and the strong CP problem, which led to the postulate of the existence of axions, which have not been found.

2. In TGD these connections can be understood elegantly.

1. The topological description of the family replication phenomenon implies that strangeness and charm are not fundamental quantum numbers, and the identification of weak and strong isospins makes sense.
2. Strong interactions in long length scales for hadrons become p-adically scaled dark weak interactions. The flux tubes correspond to possibly p-adically scaled mesons, or equivalently to weak bosons in the generalized sense predicted by the TGD based explanation of the family replication phenomenon. Tensegrity is the basic construction principle for hadrons and nuclei and even for atoms, for which the color octet excitations of leptons define the counterparts of mesons.

Also the fractality inspired ideas related to p-adically scaled up variants of strong and weak interactions organize into a beautiful picture.

1. p-Adic fractality inspired the idea that both strong and weak interaction physics appear as p-adically scaled variants. In particular, M89 hadron physics would be a p-adically scaled up version of the ordinary hadron physics assignable to M107 and would correspond to the same p-adic length scale as the weak bosons. Various forgotten anomalies support this proposal (see this and this).

But why both weak and strong interaction physics with the same p-adic length scale (or actually scales)? Both weak bosons and mesons would be described as string-like entities. How can one distinguish between them?

2. There is no need for both! Weak bosons and their predicted exotic counterparts, implied by the family replication phenomenon, are nothing but the mesons of M89 hadron physics. The TGD explanation of the family replication phenomenon indeed predicts an analog of the family replication phenomenon for weak bosons, basically similar to that for mesons. From the known spectrum of ordinary mesons one can predict the masses of the M89 mesons, or equivalently the masses of the ordinary and exotic weak bosons. There is already evidence for dark counterparts of M89 mesons with a scaled-up Compton length equal to that of the M107 mesons. Also M89 baryons are predicted.

3. Higgs would be the counterpart of the sigma meson. There is evidence of a pseudoscalar counterpart of the Higgs, identifiable as a counterpart of the M89 pion. Weak bosons would be counterparts of the ρ meson. Also axial vector weak mesons are predicted as counterparts of the ω.

The exotic weak mesons as counterparts of the kaon, the charmed mesons, etc., are predicted, but their p-adic length scale is shorter. Also for these there is some evidence (see this and this). In particular, there are indications for the existence of Higgs-like states decaying into e-μ pairs (see this). This particle might correspond to the kaon, which is a pseudoscalar rather than a scalar.

All masses can be predicted from the ordinary hadron physics by scaling, apart from the choice of the p-adic prime defining the mass scale and satisfying the p-adic length scale hypothesis.
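The scaling rule itself is simple to state: p-adic mass scales behave like 2^(k/2), so going from M107 to M89 multiplies masses by 2^((107-89)/2) = 512. A minimal sketch; the input masses are standard PDG values, while the identification of the scaled states with (exotic) weak bosons is the conjecture of the text, not established physics.

```python
# Naive p-adic scaling of meson masses from M107 (ordinary hadron
# physics) to M89: mass scales ~ 2^(k/2), giving the factor
# 2^((107-89)/2) = 512. Input masses are standard PDG values; the
# identification of the scaled states is the conjecture of the text.

SCALE = 2 ** ((107 - 89) // 2)   # = 2^9 = 512

m107_gev = {"pi0": 0.135, "rho": 0.775, "omega": 0.783}

for name, m in m107_gev.items():
    print(f"M89 counterpart of {name}: {m * SCALE:.0f} GeV")
```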
See the article About Platonization of Nuclear String Model and of Model of Atoms or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.
{"ft_lang_label":"__label__en","ft_lang_prob":0.8990558,"math_prob":0.92504585,"size":8667,"snap":"2023-40-2023-50","text_gpt3_token_len":1874,"char_repetition_ratio":0.122590326,"word_repetition_ratio":0.011502516,"special_character_ratio":0.18380062,"punctuation_ratio":0.061964404,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96435773,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-28T20:43:23Z\",\"WARC-Record-ID\":\"<urn:uuid:790f37ed-03cc-4f58-9a45-7e617b6a2e6a>\",\"Content-Length\":\"276485\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:60e3ba0e-0e17-401e-be3a-c5db049a619d>\",\"WARC-Concurrent-To\":\"<urn:uuid:16a6bda8-95d6-4b1b-a776-d1ae6e7444ee>\",\"WARC-IP-Address\":\"172.253.115.132\",\"WARC-Target-URI\":\"https://matpitka.blogspot.com/2023/10/\",\"WARC-Payload-Digest\":\"sha1:4WASYNXZLAPAP6DS5XSU7OF65US6QVKB\",\"WARC-Block-Digest\":\"sha1:GYOIEANP33RIITLQJRZ6A4PM5G5I7UQU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679099942.90_warc_CC-MAIN-20231128183116-20231128213116-00027.warc.gz\"}"}
https://www.mathway.com/examples/algebra/factoring-polynomials/determining-if-the-polynomial-is-a-perfect-square?id=706
[ "# Algebra Examples\n\nDetermine if the Expression is a Perfect Square\nStep 1\nA trinomial can be a perfect square if it satisfies the following:\nThe first term is a perfect square.\nThe third term is a perfect square.\nThe middle term is either or times the product of the square root of the first term and the square root of the third term.\nStep 2\nPull terms out from under the radical, assuming positive real numbers.\nStep 3\nFind , which is the square root of the third term . The square root of the third term is , so the third term is a perfect square.\nStep 3.1\nRewrite as .\nStep 3.2\nPull terms out from under the radical, assuming positive real numbers.\nStep 4\nThe first term is a perfect square. The third term is a perfect square. The middle term is times the product of the square root of the first term and the square root of the third term .\nThe polynomial is a perfect square." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92315215,"math_prob":0.99414235,"size":831,"snap":"2023-40-2023-50","text_gpt3_token_len":196,"char_repetition_ratio":0.21039903,"word_repetition_ratio":0.5283019,"special_character_ratio":0.22743683,"punctuation_ratio":0.10614525,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99623203,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T19:31:17Z\",\"WARC-Record-ID\":\"<urn:uuid:9f18fccf-5e60-4985-972a-337cb51627a0>\",\"Content-Length\":\"108698\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5bf5c85b-4e54-4970-9277-7dc4f0153728>\",\"WARC-Concurrent-To\":\"<urn:uuid:29910912-3140-49d5-a333-d26b7cb2e87b>\",\"WARC-IP-Address\":\"3.162.125.20\",\"WARC-Target-URI\":\"https://www.mathway.com/examples/algebra/factoring-polynomials/determining-if-the-polynomial-is-a-perfect-square?id=706\",\"WARC-Payload-Digest\":\"sha1:TUKI5U4EZNGJSKMTWXI5D7AV3Y4YTG4H\",\"WARC-Block-Digest\":\"sha1:MAEH6TQ6Y67MIV2FPUVYPUTK2JB6MPHO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679516047.98_warc_CC-MAIN-20231211174901-20231211204901-00211.warc.gz\"}"}
https://www.shalom-education.com/courses/gcsemaths/lessons/ratio-proportion-and-rates-of-change-2/topic/reverse-percentages/
[ "# Reverse Percentages\n\nReverse percentages allow us to determine the original value of a quantity after a percentage increase or decrease has taken place. This concept can be applied to situations such as calculating original prices before discounts or determining an initial population before an increase or decrease.\n\nThe reverse percentage formula can be applied for both increases and decreases. For a percentage increase, the formula is:\n\nOriginal Value = Final Value ÷ (1 + Percentage Increase as a Decimal)\n\nFor a percentage decrease, the formula is:\n\nOriginal Value = Final Value ÷ (1 – Percentage Decrease as a Decimal)\n\n## Solving Reverse Percentage Problems\n\nWhen solving reverse percentage problems, follow these steps:\n\nStep 1: Understand the problem and identify the original and final values. Carefully read the problem and determine which value is the original and which is the final value after the percentage change.\n\nStep 2: Convert the percentage increase or decrease into a decimal. Divide the percentage by 100 to express the change as a decimal. For example, a 20% increase would be represented as 0.20.\n\nStep 3: Calculate the original value using the reverse percentage formula. Apply the reverse percentage formula, taking into account whether the problem involves an increase or a decrease.\n\nLet’s look at an example:\n\nAfter a discount of 25%, a pair of shoes is now priced at £45. What was the original price of the shoes before the discount?\n\n1. In this problem, we want to find the original price of the shoes (the original value) before the discount. We are given that the final price of the shoes after the discount is £45 (the final value).\n\n2. We are given that the price decreased by 25% due to the discount. To convert the percentage into a decimal, we divide the percentage by 100:", null, "3. For a decrease, the reverse percentage formula is:\n\nOriginal value = Final value / (1 – Percentage decrease as a decimal)\n\nPlugging in the given values, we get:\n\nOriginal value", null, "", null, "", null, "So, the original price of the shoes before the discount was £60.\n\n## Examples\n\nExample 1:\n\nA jacket’s price increased by 15% and now costs £69. Calculate the original price.\n\nPercentage increase as a decimal:", null, "Original Value", null, "The original price of the jacket was £60.\n\nExample 2:\n\nA book’s price decreased by 10% and now costs £18. Calculate the original price.\n\nPercentage decrease as a decimal:", null, "Original Value", null, "The original price of the book was £20.\n\nExample 3:\n\nA store increased the price of a product by 15% during a sale, and then decreased the new price by 10% after the sale ended. The final price of the product is £102.60. What was the original price of the product?\n\nLet the original price be £x.\n\nStep 1: Apply the first percentage change (15% increase). New price after 15% increase:", null, "Step 2: Apply the second percentage change (10% decrease) to the new price. Final price after 10% decrease:", null, "Step 3: Set up an equation and solve for the original price (x).", null, "Divide both sides by 1.035:", null, "The original price of the product was £99.13" ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20114%2015'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2074%2027'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2046%2023'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2051%2012'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20134%2015'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20172%2025'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20114%2015'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20172%2023'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20173%2019'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20327%2019'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20123%2013'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20141%2022'%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9114554,"math_prob":0.9882996,"size":2393,"snap":"2023-14-2023-23","text_gpt3_token_len":513,"char_repetition_ratio":0.20678107,"word_repetition_ratio":0.075980395,"special_character_ratio":0.22941914,"punctuation_ratio":0.114967465,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99913794,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T08:54:12Z\",\"WARC-Record-ID\":\"<urn:uuid:1c3599e9-4c45-4086-8c27-d3ab34e0960a>\",\"Content-Length\":\"321137\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0a3bd9c7-76cb-4abb-9305-9417e1ca2143>\",\"WARC-Concurrent-To\":\"<urn:uuid:3db309b0-db75-4388-b1bc-9afa52a6aec3>\",\"WARC-IP-Address\":\"34.149.120.3\",\"WARC-Target-URI\":\"https://www.shalom-education.com/courses/gcsemaths/lessons/ratio-proportion-and-rates-of-change-2/topic/reverse-percentages/\",\"WARC-Payload-Digest\":\"sha1:Q5PBMVGW6TXUMS7PUPQSOQXQZQKLMKG7\",\"WARC-Block-Digest\":\"sha1:6VYUZ5VAFWXZVS5AONBR5QJHTVSORJNE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224653631.71_warc_CC-MAIN-20230607074914-20230607104914-00301.warc.gz\"}"}
https://docs.opencv.org/master/d4/d1b/tutorial_histogram_equalization.html
[ "", null, "OpenCV  4.5.4-dev Open Source Computer Vision\nHistogram Equalization\n\nPrev Tutorial: Affine Transformations\n\nNext Tutorial: Histogram Calculation\n\nOriginal author Ana Huamán\nCompatibility OpenCV >= 3.0\n\n## Goal\n\nIn this tutorial you will learn:\n\n• What an image histogram is and why it is useful\n• To equalize histograms of images by using the OpenCV function cv::equalizeHist\n\n## Theory\n\n### What is an Image Histogram?\n\n• It is a graphical representation of the intensity distribution of an image.\n• It quantifies the number of pixels for each intensity value considered.", null, "### What is Histogram Equalization?\n\n• It is a method that improves the contrast in an image, in order to stretch out the intensity range (see also the corresponding Wikipedia entry).\n• To make it clearer, from the image above, you can see that the pixels seem clustered around the middle of the available range of intensities. What Histogram Equalization does is to stretch out this range. Take a look at the figure below: The green circles indicate the underpopulated intensities. After applying the equalization, we get an histogram like the figure in the center. The resulting image is shown in the picture at right.", null, "### How does it work?\n\n• Equalization implies mapping one distribution (the given histogram) to another distribution (a wider and more uniform distribution of intensity values) so the intensity values are spread over the whole range.\n• To accomplish the equalization effect, the remapping should be the cumulative distribution function (cdf) (more details, refer to Learning OpenCV). For the histogram $$H(i)$$, its cumulative distribution $$H^{'}(i)$$ is:\n\n$H^{'}(i) = \\sum_{0 \\le j < i} H(j)$\n\nTo use this as a remapping function, we have to normalize $$H^{'}(i)$$ such that the maximum value is 255 ( or the maximum value for the intensity of the image ). From the example above, the cumulative function is:", null, "• Finally, we use a simple remapping procedure to obtain the intensity values of the equalized image:\n\n$equalized( x, y ) = H^{'}( src(x,y) )$\n\n## Code\n\n• What does this program do?\n• Convert the original image to grayscale\n• Equalize the Histogram by using the OpenCV function cv::equalizeHist\n• Display the source and equalized images in a window.\n\n## Explanation\n\n• Convert it to grayscale:\n\nAs it can be easily seen, the only arguments are the original image and the output (equalized) image.\n\n• Display both images (original and equalized):\n• Wait until user exists the program\n\n## Results\n\n1. To appreciate better the results of equalization, let's introduce an image with not much contrast, such as:", null, "which, by the way, has this histogram:", null, "notice that the pixels are clustered around the center of the histogram.\n\n2. After applying the equalization with our program, we get this result:", null, "this image has certainly more contrast. Check out its new histogram like this:", null, "Notice how the number of pixels is more distributed through the intensity range.\n\nNote\nAre you wondering how did we draw the Histogram figures shown above? Check out the following tutorial!" ]
[ null, "https://docs.opencv.org/master/opencv-logo-small.png", null, "https://docs.opencv.org/master/Histogram_Equalization_Theory_0.jpg", null, "https://docs.opencv.org/master/Histogram_Equalization_Theory_1.jpg", null, "https://docs.opencv.org/master/Histogram_Equalization_Theory_2.jpg", null, "https://docs.opencv.org/master/Histogram_Equalization_Original_Image.jpg", null, "https://docs.opencv.org/master/Histogram_Equalization_Original_Histogram.jpg", null, "https://docs.opencv.org/master/Histogram_Equalization_Equalized_Image.jpg", null, "https://docs.opencv.org/master/Histogram_Equalization_Equalized_Histogram.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8178441,"math_prob":0.9907553,"size":1382,"snap":"2021-43-2021-49","text_gpt3_token_len":329,"char_repetition_ratio":0.1161103,"word_repetition_ratio":0.0,"special_character_ratio":0.24457309,"punctuation_ratio":0.12692308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99686754,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-16T11:54:13Z\",\"WARC-Record-ID\":\"<urn:uuid:42f921eb-0098-442e-b718-098ee0f8f123>\",\"Content-Length\":\"30953\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec3ac7ed-2f77-4581-acf6-f5d099c513d5>\",\"WARC-Concurrent-To\":\"<urn:uuid:cd124d6a-611a-45b4-bbd6-38e1c214c838>\",\"WARC-IP-Address\":\"172.67.218.21\",\"WARC-Target-URI\":\"https://docs.opencv.org/master/d4/d1b/tutorial_histogram_equalization.html\",\"WARC-Payload-Digest\":\"sha1:S3HVG4RPYL7PGMT4MSAVUZCKZOXU7M7N\",\"WARC-Block-Digest\":\"sha1:CEHA7NASHKJXXCJ2MXWXOURFTOMB5ZPE\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323584567.81_warc_CC-MAIN-20211016105157-20211016135157-00466.warc.gz\"}"}
http://soft-matter.seas.harvard.edu/index.php/Dielectrophoretic_manipulation_of_drops_for_high-speed_microfluidic_sorting_devices
[ "# Dielectrophoretic manipulation of drops for high-speed microfluidic sorting devices\n\nOriginal entry: Scott Tsai, APPHY 226, Spring 2009\n\n\"Dielectrophoretic manipulation of drops for high-speed microfluidic sorting devices\"\n\nKeunho Ahn and Charles Kerbage, Tom P. Hunt, R.M. Westervelt, Darren R. Link, and D.A. Weitz\n\nApplied Physics Letters 88, 024104 (2006)\n\n## Soft Matter Keywords\n\nDroplets, microfluidics, PDMS, dielectrophoresis, droplet sorting, Stokes' Drag\n\n## Overview\n\nRefer to abstract of paper\n\n## Soft Matter Examples\n\nIt has already been shown that droplets in microfluidic devices can be used as micro-reactors. In one of the configurations of this system, the droplets are water-in-oil emulsions. For these droplets to work effectively as a means of directed evolution, accurate and fast screening is required. One example of this is the sorting of a typical library of $10^8 - 10^9$ genes, which requires a throughput of 1kHz so that they can be finished in a practical amount of time.\n\nIn this paper, the authors present a high-throughput microfluidic droplet sorting device that uses dielectrophoresis for actuation. Dielectrophoresis is the manipulation of dielectric particles using electric fields. The force acting on the particle is directly proportional to the gradient of the electric field, and is perpendicular in direction.\n\nThe authors describe their microfluidic device with a droplet generator, a Y-junction, and an electrode (Fig 1a). The water droplets are formed at the generator, where their size is a function of the velocity of the water stream and of the velocity of the transverse oil stream. After the droplets are formed, they flow down to the Y-junction, where without an applied electric field, they enter the shorter channel (waste stream) (Fig. 1b).\n\nThe droplets naturally flow into the shorter channel because it has a lower hydrodynamic resistance than the longer channel. So to pull the droplets into the longer channel, the electrode is charged, and an electric field is produced (Fig. 1c).\n\nThe transferse motion of the droplet here is described by a balance of forces. In the direction of the electrode, the dielectrophoretic force on the drop is $\\vec{F} = \\vec{m} \\cdot \\operatorname{grad} \\vec{E}$. Where $\\vec{m}$ is the dipole moment of the particle and $\\vec{E}$ is the electric field. For a spherical particle, the dipole moment is $\\vec{m} = 4 \\pi \\epsilon_{oil} Re[CM( \\omega )]r^3 \\vec{E}$. $Re[CM( \\omega) ]$ is the Claussius-Mossotti facto, and $\\epsilon_{oil}$ is the oil's dielectric permittivity.\n\nThis force is offset by the Stokes drag force $Fs = 6 \\pi \\eta_{oil} r \\vec{v}$.\n\nSo, the balance of forces gives:\n\n$m \\frac{d \\vec{v}} {dt} = 4 \\pi \\epsilon_{oil} Re[CM( \\omega )]r^3 \\vec{E} \\cdot \\vec{E} - 6 \\pi \\eta_{oil} r \\vec{v}$.\n\nSince the time for the drops to attain terminal velocity is approximately $t = v/a = \\frac{v}{[F/(\\delta \\rho 4 \\pi / 3 r^3)]} = 2 \\delta \\rho r^2 / 9 \\eta$, so for 1nN force acting on a 12 $\\mu m$ drop, it will accelerate to its terminal velocity in 5 $\\mu s$. 
This time scale is much shorter than others in this system, so it is possible to neglect the inertia term.\n\nSo, $4 \\pi \\epsilon_{oil} Re[CM( \\omega )]r^3 \\vec{E} \\cdot \\vec{E} - 6 \\pi \\eta_{oil} r \\vec{v} = 0$.\n\nSolving this equation, the authors obtain $\\vec{v} = \\epsilon_{oil} r^2 k V^2 / 3 \\eta_{oil}$ (The Claussius-Mossotti factor is 1 for water drops in oil and for frequencies less than several MHz).\n\nWith this model for velocity, the authors determined that for a 12 micron sized drop at 1kV, the force acting on the drop would be approximately 10nN, so the resuling maximum drop velocity is around 1 cm/s. This they also verified experimentally (Fig. 2)." ]
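A quick numerical sanity check of the Stokes balance quoted above; the oil viscosity is an assumed, illustrative value (not taken from the paper), and the drop is taken to have a 12 μm diameter (radius 6 μm):

```python
import math

# Stokes balance: terminal velocity v = F / (6*pi*eta*r).
eta_oil = 1e-2   # assumed oil viscosity, Pa*s (illustrative)
r = 6e-6         # radius of a 12 um diameter drop, m
F = 1e-8         # dielectrophoretic force, N (the 10 nN quoted above)

v = F / (6 * math.pi * eta_oil * r)
print(f"terminal velocity ~ {v * 100:.1f} cm/s")  # ~0.9 cm/s, order 1 cm/s
```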
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81998867,"math_prob":0.986531,"size":3879,"snap":"2022-27-2022-33","text_gpt3_token_len":1088,"char_repetition_ratio":0.1396129,"word_repetition_ratio":0.056818184,"special_character_ratio":0.26166537,"punctuation_ratio":0.09878214,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9972572,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-05T09:30:19Z\",\"WARC-Record-ID\":\"<urn:uuid:ae378b8a-d14f-4f2a-b97b-8c0061ad73ee>\",\"Content-Length\":\"20150\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:59d63228-4d25-46ed-b480-69703f9a0b7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:f4d1c1d1-231f-4121-a848-b013b17d0632>\",\"WARC-IP-Address\":\"54.165.123.1\",\"WARC-Target-URI\":\"http://soft-matter.seas.harvard.edu/index.php/Dielectrophoretic_manipulation_of_drops_for_high-speed_microfluidic_sorting_devices\",\"WARC-Payload-Digest\":\"sha1:TUM7FY6DULLNA4TWZKBQPQGCL42L5FAE\",\"WARC-Block-Digest\":\"sha1:6HCHTKGZBJLQ27JYH6R2FTGAKM6NN7MF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104542759.82_warc_CC-MAIN-20220705083545-20220705113545-00424.warc.gz\"}"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-8-polynomials-and-factoring-8-5-factoring-x-squared-bx-c-mixed-review-page-505/68
[ "## Algebra 1\n\n$x=\\frac{ad}{b}$\nWe start with the given equation: $\\frac{a}{b}=\\frac{x}{d}$ We cross multiply to remove fractions: $ad=bx$ We divide by $b$ on both sides of the equation: $x=\\frac{ad}{b}$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8311718,"math_prob":1.0000086,"size":460,"snap":"2021-21-2021-25","text_gpt3_token_len":122,"char_repetition_ratio":0.10745614,"word_repetition_ratio":0.0,"special_character_ratio":0.2521739,"punctuation_ratio":0.07526882,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.00001,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-16T00:17:11Z\",\"WARC-Record-ID\":\"<urn:uuid:dbaf891c-e372-49c6-8e67-bab1d2eb74a8>\",\"Content-Length\":\"79576\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0845bd5a-6a94-477c-bf74-7d89dbca2551>\",\"WARC-Concurrent-To\":\"<urn:uuid:cbc55cf1-7400-4020-90a8-39d197c7f76d>\",\"WARC-IP-Address\":\"54.210.142.197\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-8-polynomials-and-factoring-8-5-factoring-x-squared-bx-c-mixed-review-page-505/68\",\"WARC-Payload-Digest\":\"sha1:Z6V6OBLEKCO5NXLKSCFSVAS2YIRHL7EE\",\"WARC-Block-Digest\":\"sha1:62MAI7GPIYY4I5C2FPNOYOKI3MRHZI4S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991488.53_warc_CC-MAIN-20210515223209-20210516013209-00118.warc.gz\"}"}
https://searxiv.org/search?author=Toufik%20Mansour
[ "### Results for \"Toufik Mansour\"\n\ntotal 441took 0.11s\nGeneralizations of some identities involving the fibonacci numbersJan 15 2003In this paper we study the sum $$\\sum_{j_1+j_2+...+j_d=n}\\prod_{i=1}^d F_{k\\cdot j_i},$$ where $d\\geq2$ and $k\\geq1$.\n$q$-deformed conformable fractional Natural transformNov 06 2018In this paper, we develop a new deformation and generalization of the Natural integral transform based on the conformable fractional $q$-derivative. We obtain transformation of some deformed functions and apply the transform for solving linear differential ... More\nModelling x-ray tomography using integer compositionsAug 12 2015The x-ray process is modelled using integer compositions as a two dimensional analogue of the object being x-rayed, where the examining rays are modelled by diagonal lines with equation $x-y=n$ for non negative integers $n$. This process is essentially ... More\nSome recursive formulas for Selberg-type integralsDec 17 2009A set of recursive relations satisfied by Selberg-type integrals involving monomial symmetric polynomials are derived, generalizing previously known results. These formulas provide a well-defined algorithm for computing Selberg-Schur integrals whenever ... More\nA monotonicity property for generalized Fibonacci sequencesOct 25 2014Given k>1, let a_n be the sequence defined by the recurrence a_n=c_1a_{n-1}+c_2a_{n-2}+...+c_ka_{n-k} for n>=k, with initial values a_0=a_1=...=a_{k-2}=0 and a_{k-1}= 1. We show under a couple of assumptions concerning the constants c_i that the ratio ... More\nNew degenerated polynomials arising from non-classical Umbral CalculusNov 06 2018We introduce new generalizations of the Bernoulli, Euler, and Genocchi polynomials and numbers based on the Carlitz-Tsallis degenerate exponential function and concepts of the Umbral Calculus associated with it. Also, we present generalizations of some ... More\nThe 1/k-Eulerian polynomials and k-Stirling permutationsSep 23 2014In this paper, we establish a connection between the 1/k-Eulerian polynomials introduced by Savage and Viswanathan (Electron. J. Combin. 19(2012), P9) and k-Stirling permutations. We also introduce the dual set of Stirling permutations.\nWick's theorem for q-deformed boson operatorsMar 11 2007In this paper combinatorial aspects of normal ordering arbitrary words in the creation and annihilation operators of the q-deformed boson are discussed. In particular, it is shown how by introducing appropriate q-weights for the associated Feynman diagrams'' ... More\nWilf classification of triples of 4-letter patternsMay 16 2016We determine all 242 Wilf classes of triples of 4-letter patterns by showing that there are 32 non-singleton Wilf classes. There are 317 symmetry classes of triples of 4-letter patterns and after computer calculation of initial terms, the problem reduces ... More\nMotzkin numbers of higher rank: Generating function and explicit expressionApr 21 2007May 13 2007The generating function and an explicit expression is derived for the (colored) Motzkin numbers of higher rank introduced recently. Considering the special case of rank one yields the corresponding results for the conventional colored Motzkin numbers ... More\nA characterization of horizontal visibility graphs and combinatorics on wordsOct 09 2010An Horizontal Visibility Graph (for short, HVG) is defined in association with an ordered set of non-negative reals. 
HVGs realize a methodology in the analysis of time series, their degree distribution being a good discriminator between randomness and ... More\nDiffusion on an Ising chain with kinksJun 30 2008Jul 28 2009We count the number of histories between the two degenerate minimum energy configurations of the Ising model on a chain, as a function of the length n and the number d of kinks that appear above the critical temperature. This is equivalent to counting permutations ... More\nApostol-Euler polynomials arising from umbral calculusFeb 13 2013In this paper, by using the orthogonality type as defined in the umbral calculus, we derive an explicit formula for several well-known polynomials as a linear combination of the Apostol-Euler polynomials.\nRestricted Stirling permutationsJul 20 2016In this paper, we study the generating functions for the number of pattern restricted Stirling permutations with a given number of plateaus, descents and ascents. Properties of the generating functions, including symmetric properties and explicit formulas ... More\nRecurrence relations in counting the pattern 13-2 in flattened permutationsOct 15 2014We prove that the generating function for the number of flattened permutations having a given number of occurrences of the pattern 13-2 is rational, by using the recurrence relations and the kernel method.\nCounting occurrences of 3412 in an involutionJan 18 2004We study the generating function for the number of involutions on $n$ letters containing exactly $r\\geq0$ occurrences of 3412. It is shown that finding this function for a given $r$ amounts to a routine check of all involutions on $2r+1$ letters.\nOn moments of the integrated exponential Brownian motionSep 20 2015Jun 24 2016We present new exact expressions for a class of moments for the geometric Brownian motion, in terms of determinants, obtained using a recurrence relation and combinatorial arguments for the case of an Ito Wiener process. We then apply the obtained exact ... More\nA note on sum of k-th power of Horadam's sequenceFeb 02 2003Let $w_{n+2}=pw_{n+1}+qw_{n}$ for $n\\geq0$ with $w_0=a$ and $w_1=b$. In this paper we find an explicit expression, in terms of determinants, for $\\sum_{n\\geq0} w_n^kx^n$ for any $k\\geq1$. As a consequence, we derive all the previously known results for ... More\nSquaring the terms of an $\\ell^{th}$ order linear recurrenceMar 12 2003We find an explicit formula for the generating function for squaring the terms of an $\\ell^{th}$ order linear recurrence.\nRestricted even permutations and Chebyshev polynomialsFeb 02 2003We study generating functions for the number of even (odd) permutations on n letters avoiding 132 and an arbitrary permutation $\\tau$ on k letters, or containing $\\tau$ exactly once. In several interesting cases the generating function depends only on ... More\nOn the Complementary Equienergetic GraphsJul 30 2019The energy of a simple graph $G$, denoted by $\\mathcal{E}(G)$, is the sum of the absolute values of the eigenvalues of $G$. Two $n$-vertex graphs with the same energies are called equienergetic graphs. A graph $G$ with the property $G\\cong \\overline{G}$ ... More\nRecursions for Excedance number in some permutations groupsFeb 15 2007Jun 03 2008The excedance number for S_n is known to have an Eulerian distribution. Nevertheless, the classical proof uses descents rather than excedances. We present a direct recursive proof which seems to be folklore and extend it to the colored permutation groups ... 
More\nBilinear Forms on Skein Modules and Steps in Dyck PathsNov 03 2010Jan 13 2011We use Jones-Wenzl idempotents to construct bases for the relative Kauffman bracket skein module of a square with n points colored 1 and one point colored h. We consider a natural bilinear form on this skein module. We calculate the determinant of the ... More\nCounting occurrences of a pattern of type (1,2) or (2,1) in permutationsOct 03 2001Babson and Steingrímsson introduced generalized permutation patterns that allow the requirement that two adjacent letters in a pattern must be adjacent in the permutation. Claesson presented a complete solution for the number of permutations avoiding ... More\nEnumerating permutations avoiding a pair of Babson-Steingrimsson patternsJul 06 2001Mar 24 2010Babson and Steingrímsson introduced generalized permutation patterns that allow the requirement that two adjacent letters in a pattern must be adjacent in the permutation. Subsequently, Claesson presented a complete solution for the number of permutations ... More\nOn pattern-avoiding partitionsMar 29 2007A 'set partition' of the set $[n]=\\{1,\\ldots,n\\}$ is a collection of disjoint blocks $B_1,B_2,\\ldots,B_d$ whose union is $[n]$. We choose the ordering of the blocks so that they satisfy $\\min B_1<\\min B_2<\\cdots<\\min B_d$. We represent such a set partition ... More\nEnumerations of bargraphs with respect to corner statisticsAug 05 2018We study the enumeration of bargraphs with respect to some corner statistics. We find generating functions for the number of bargraphs that track the corner statistics of interest, the number of cells, and the number of columns. The bargraph representation ... More\nRefined Restricted Permutations Avoiding Subsets of Patterns of Length ThreeMar 30 2002Define $S_n^k(T)$ to be the set of permutations of $\\{1,2,...,n\\}$ with exactly $k$ fixed points which avoid all patterns in $T \\subseteq S_m$. We enumerate $S_n^k(T)$, $T \\subseteq S_3$, for all $|T| \\geq 2$ and $0 \\leq k \\leq n$.\nOn the number of combinations without certain separationsMay 09 2008In this paper we enumerate the number of ways of selecting $k$ objects from $n$ objects arrayed in a line such that no two selected ones are separated by $m-1,2m-1,...,pm-1$ objects and provide three different formulas when $m,p\\geq 1$ and $n\\geq pm(k-1)$. ... More\nAvoiding maximal parabolic subgroups of S_kJun 21 2000We find an explicit expression for the generating function of the number of permutations in S_n avoiding a subgroup of S_k generated by all but one simple transposition. The generating function turns out to be rational, and its denominator is a rook ... More\nOn the normal ordering of multi-mode boson operatorsJan 25 2007Mar 19 2007In this article combinatorial aspects of normal ordering annihilation and creation operators of a multi-mode boson system are discussed. The modes are assumed to be coupled since otherwise the problem of normal ordering is reduced to the corresponding ... More\nPermutations avoiding 312 and another pattern, Chebyshev polynomials and longest increasing subsequencesAug 16 2018We study the longest increasing subsequence problem for random permutations from $S_n(312,\\tau)$, the set of all permutations of length $n$ avoiding the pattern $312$ and another pattern $\\tau$, under the uniform probability distribution. We determine ... More\nEnumeration of $(k,2)$-noncrossing partitionsAug 08 2008A set partition is said to be $(k,d)$-noncrossing if it avoids the pattern $12... k12... d$. 
We find an explicit formula for the ordinary generating function of the number of $(k,d)$-noncrossing partitions of $\\{1,2,...,n\\}$ when $d=1,2$.\nWords restricted by patterns with at most 2 distinct lettersOct 04 2001We find generating functions for the number of words avoiding certain patterns or sets of patterns on at most 2 distinct letters and determine which of them are equally avoided. We also find the exact number of words avoiding certain patterns and provide ... More\nExcedance numbers for permutations in complex reflection groupsApr 23 2007Recently, Bagno, Garber and Mansour studied a kind of excedance number on the complex reflection groups and computed its multidistribution with the number of fixed points on the set of involutions in these groups. In this note, we consider the similar ... More\nChebyshev Polynomials and Statistics on a New Collection of Words in the Catalan FamilyJul 14 2014Recently, a new class of words, denoted by L_n, was shown to be in bijection with a subset of the Dyck paths of length 2n having cardinality given by the (n-1)-st Catalan number. Here, we consider statistics on L_n recording the number of occurrences ... More\nInvolutions Restricted by 3412, Continued Fractions, and Chebyshev PolynomialsJan 18 2004We study generating functions for the number of involutions, even involutions, and odd involutions in $S_n$ subject to two restrictions. One restriction is that the involution avoid 3412 or contain 3412 exactly once. The other restriction is that the ... More\nOn Linear Differential Equations Involving a Para-Grassmann VariableJul 15 2009As a first step towards a theory of differential equations involving para-Grassmann variables the linear equations with constant coefficients are discussed and solutions for equations of low order are given explicitly. A connection to n-generalized Fibonacci ... More\nFinite automata and pattern avoidance in wordsSep 17 2003We say that a word $w$ on a totally ordered alphabet avoids the word $v$ if there are no subsequences in $w$ order-equivalent to $v$. In this paper we suggest a new approach to the enumeration of words on at most $k$ letters avoiding a given pattern. ... More\nBell Polynomials and $k$-generalized Dyck PathsMay 09 2008A 'k-generalized Dyck path' of length $n$ is a lattice path from $(0,0)$ to $(n,0)$ in the plane integer lattice $\\mathbb{Z}\\times\\mathbb{Z}$ consisting of horizontal-steps $(k, 0)$ for a given integer $k\\geq 0$, up-steps $(1,1)$, and down-steps $(1,-1)$, ... More\nCounting rises, levels, and drops in compositionsOct 14 2003A composition of $n\\in\\mathbb{N}$ is an ordered collection of one or more positive integers whose sum is $n$. The number of summands is called the number of parts of the composition. A palindromic composition of $n$ is a composition of $n$ in which the summands ... More\nFive subsets of permutations enumerated as weak sorting permutationsFeb 16 2016We show that the number of members of S_n avoiding any one of five specific triples of 4-letter patterns is given by sequence A111279 in OEIS, which is known to count weak sorting permutations. By numerical evidence, there are no other (non-trivial) triples ... More\n$q$-Bernstein functions and applicationsFeb 09 2016We characterize the $q$-Bernstein functions in terms of the $q$-Laplace transform. 
Moreover, we present several results of $q$-completely monotonic, $q$-log completely monotonic and $q$-Bernstein functions.\nCounting paths in Bratteli diagrams for SU(2)_kJun 30 2008Mar 20 2009It is known that the Hilbert space dimensionality for quasiparticles in an SU(2)_k Chern-Simons-Witten theory is given by the number of directed paths in certain Bratteli diagrams. We present an explicit formula for these numbers for arbitrary k. This ... More\nOn avoidance of patterns of the form σ-τ by words over a finite alphabetMar 10 2014Vincular or dashed patterns resemble classical patterns except that some of the letters within an occurrence are required to be adjacent. We prove several infinite families of Wilf-equivalences for k-ary words involving vincular patterns containing a ... More\nGrid polygons from permutations and their enumeration by the kernel methodMar 09 2006A grid polygon is a polygon whose vertices are points of a grid. We define an injective map between permutations of length n and a subset of grid polygons on n vertices, which we call consecutive-minima polygons. By the kernel method, we enumerate sets ... More\nCounting occurrences of 132 in a permutationMay 09 2001Aug 02 2001We study the generating function for the number of permutations on n letters containing exactly $r\\geq0$ occurrences of 132. It is shown that finding this function for a given r amounts to a routine check of all permutations in $S_{2r}$.\nWords restricted by 3-letter generalized multipermutation patternsDec 27 2001We find exact formulas and/or generating functions for the number of words avoiding 3-letter generalized multipermutation patterns and find which of them are equally avoided.\nSome enumerative results related to ascent sequencesJul 16 2012An ascent sequence is one consisting of non-negative integers in which the size of each letter is restricted by the number of ascents preceding it in the sequence. Ascent sequences have recently been shown to be related to (2+2)-free posets and a variety ... More\nSeparable d-permutations and guillotine partitionsMar 24 2008We characterize separable multidimensional permutations in terms of forbidden patterns and enumerate them by means of generating function, recursive formula and explicit formula. We find a connection between multidimensional permutations and guillotine ... More\nEnumeration of small Wilf classes avoiding 1324 and two other 4-letter patternsMay 02 2017Nov 12 2017Recently, it has been determined that there are 242 Wilf classes of triples of 4-letter permutation patterns by showing that there are 32 non-singleton Wilf classes. Moreover, the generating function for each triple lying in a non-singleton Wilf class ... More\nGeneralized q-Calkin-Wilf trees and c-hyper m-expansions of integersMar 13 2015A hyperbinary expansion of a positive integer n is a partition of n into powers of 2 in which each part appears at most twice. In this paper, we consider a generalization of this concept and a certain statistic on the corresponding set of expansions of ... More\nIdentities involving Narayana polynomials and Catalan numbersMay 09 2008We first establish the result that the Narayana polynomials can be represented as the integrals of the Legendre polynomials. Then we represent the Catalan numbers in terms of the Narayana polynomials by three different identities. We give three different ... More\nDyck paths with coloured ascentsJan 25 2007We introduce a notion of Dyck paths with coloured ascents. 
For several ways of colouring, we establish bijections between sets of such paths and other combinatorial structures, such as non-crossing trees, dissections of a convex polygon, etc. In some ... More\nRestricted Motzkin permutations, Motzkin paths, continued fractions, and Chebyshev polynomialsOct 06 2006We say that a permutation $\\pi$ is a Motzkin permutation if it avoids 132 and there do not exist $a<b$ such that $\\pi_a<\\pi_b<\\pi_{b+1}$. We study the distribution of several statistics in Motzkin permutations, including the length of the longest increasing ... More\nCombinatorial Gray codes for classes of pattern avoiding permutationsApr 16 2007Jan 09 2008The past decade has seen a flurry of research into pattern avoiding permutations but little of it is concerned with their exhaustive generation. Many applications call for exhaustive generation of permutations subject to various constraints or imposing ... More\nEvaluation of spherical GJMS determinantsJul 23 2014An expression in the form of an easily computed integral is given for the determinant of the scalar GJMS operator on an odd-dimensional sphere. Manipulation yields a sum formula for the logdet in terms of the logdets of the ordinary conformal Laplacian ... More\n231-Avoiding Involutions and Fibonacci NumbersSep 19 2002We use combinatorial and generating function techniques to enumerate various sets of involutions which avoid 231 or contain 231 exactly once. Interestingly, many of these enumerations can be given in terms of $k$-generalized Fibonacci numbers.\nPermutations Which Avoid 1243 and 2143, Continued Fractions, and Chebyshev PolynomialsAug 06 2002Several authors have examined connections between permutations which avoid 132, continued fractions, and Chebyshev polynomials of the second kind. In this paper we prove analogues of some of these results for permutations which avoid 1243 and 2143. Using ... More\nA Digital Binomial Theorem for Sheffer SequencesOct 29 2015We extend the digital binomial theorem to Sheffer polynomial sequences by demonstrating that their corresponding Sierpiński matrices satisfy a multiplication property that is equivalent to the convolution identity for Sheffer sequences.\n132-avoiding Two-stack Sortable Permutations, Fibonacci Numbers, and Pell NumbersMay 19 2002In 1990 West conjectured that there are $2(3n)!/((n+1)!(2n+1)!)$ two-stack sortable permutations on $n$ letters. This conjecture was proved analytically by Zeilberger in 1992. Later, Dulucq, Gire, and Guibert gave a combinatorial proof of this conjecture. ... More\nRestricted Permutations, Fibonacci Numbers, and k-generalized Fibonacci NumbersMar 21 2002A permutation $\\pi \\in S_n$ is said to avoid a permutation $\\sigma \\in S_k$ whenever $\\pi$ contains no subsequence with all of the same pairwise comparisons as $\\sigma$. For any set $R$ of permutations, we write $S_n(R)$ to denote the set of permutations ... More\nA $q$-Digital Binomial TheoremJun 26 2015We present a multivariable generalization of the digital binomial theorem from which a q-analog is derived as a special case.\nRestricted Dumont permutations, Dyck paths, and noncrossing partitionsOct 06 2006We complete the enumeration of Dumont permutations of the second kind avoiding a pattern of length 4 which is itself a Dumont permutation of the second kind. We also consider some combinatorial statistics on Dumont permutations avoiding certain patterns ... 
More\nCounting triangulations of some classes of subdivided convex polygonsApr 11 2016We compute the number of triangulations of a convex $k$-gon each of whose sides is subdivided by $r-1$ points. We find explicit formulas and generating functions, and we determine the asymptotic behaviour of these numbers as $k$ and/or $r$ tend to infinity. ... More\nFinite automata, probabilistic method, and occurrence enumeration of a pattern in words and permutationsMay 14 2019The main theme of this paper is the enumeration of the occurrence of a pattern in words and permutations. We mainly focus on asymptotic properties of the sequence $f_r^v(k,n),$ the number of $n$-array $k$-ary words that contain a given pattern $v$ exactly ... More\nA comment on Ryser's conjecture for intersecting hypergraphsSep 20 2007Let $\\tau(\\mathcal{H})$ be the cover number and $\\nu(\\mathcal{H})$ be the matching number of a hypergraph $\\mathcal{H}$. Ryser conjectured that every $r$-partite hypergraph $\\mathcal{H}$ satisfies the inequality $\\tau(\\mathcal{H}) \\leq (r-1) \\nu (\\mathcal{H})$. ... More\nStaircase patterns in words: subsequences, subwords, and separation numberAug 02 2019We revisit staircases for words and prove several exact as well as asymptotic results for longest left-most staircase subsequences and subwords and staircase separation number, the latter being defined as the number of consecutive maximal staircase subwords ... More\nOn ballistic deposition process on a stripMar 29 2019Jun 19 2019We revisit the model of the ballistic deposition studied in [bdeposition] and prove several combinatorial properties of the random tree structure formed by the underlying stochastic process. Our results include limit theorems for the number of roots ... More\nNoncrossing normal ordering for functions of boson operatorsJul 11 2006Apr 03 2007Normally ordered forms of functions of boson operators are important in many contexts in particular concerning Quantum Field Theory and Quantum Optics. Beginning with the seminal work of Katriel [Lett. Nuovo Cimento, 10(13):565-567, 1974], in the last ... More\nPartial transpose of permutation matricesSep 21 2007Mar 22 2008The partial transpose of a block matrix M is the matrix obtained by transposing the blocks of M independently. We approach the notion of partial transpose from a combinatorial point of view. In this perspective, we solve some basic enumeration problems ... More\nOn ballistic deposition process on a stripMar 29 2019We revisit the model of the ballistic deposition studied in [bdeposition] and prove several combinatorial properties of the random tree structure formed by the underlying stochastic process. Our results include limit theorems for the number of roots ... More\nOn Multiple Pattern Avoiding Set PartitionsJan 28 2013Jan 29 2013We study classes of set partitions determined by the avoidance of multiple patterns, applying a natural notion of partition containment that has been introduced by Sagan. We say that two sets S and T of patterns are equivalent if for each n, the number ... More\nPartially ordered patterns and compositionsOct 01 2006A partially ordered (generalized) pattern (POP) is a generalized pattern some of whose letters are incomparable, an extension of generalized permutation patterns introduced by Babson and Steingrimsson. POPs were introduced in the symmetric group by Kitaev ... 
More\nOn the group of alternating colored permutationsJan 22 2014The group of alternating colored permutations is the natural analogue of the classical alternating group, inside the wreath product $\\mathbb{Z}_r \\wr S_n$. We present a 'Coxeter-like' presentation for this group and compute the length function with respect ... More\nA generalization of boson normal orderingAug 09 2006Dec 12 2006In this paper we define generalizations of boson normal ordering. These are based on the number of contractions whose vertices are next to each other in the linear representation of the boson operator function. Our main motivation is to shed further light ... More\nRestricted ascent sequences and Catalan numbersMar 27 2014Ascent sequences are those consisting of non-negative integers in which the size of each letter is restricted by the number of ascents preceding it and have been shown to be equinumerous with the (2+2)-free posets of the same size. Furthermore, connections ... More\nCongruence successions in compositionsJul 28 2013A 'composition' is a sequence of positive integers, called 'parts', having a fixed sum. By an '$m$-congruence succession', we will mean a pair of adjacent parts $x$ and $y$ within a composition such that $x\\equiv y\\ (\\text{mod } m)$. Here, ... More\nExcedance number for involutions in complex reflection groupsDec 07 2006We define the excedance number on the complex reflection groups and compute its multidistribution with the number of fixed points on the set of involutions in these groups. We use some recurrence formulas and generating functions manipulations to obtain ... More\nPassing through a stack $k$ timesApr 13 2017Jul 02 2018We consider the number of passes a permutation needs to take through a stack if we only pop the appropriate output values and start over with the remaining entries in their original order. We define a permutation $\\pi$ to be $k$-pass sortable if $\\pi$ ... More\nCounting descents, rises, and levels, with prescribed first element, in wordsDec 31 2006May 30 2007Recently, Kitaev and Remmel [Classifying descents according to parity, Annals of Combinatorics, to appear 2007] refined the well-known permutation statistic 'descent' by fixing parity of one of the descent's numbers. Results in that paper were extended ... More\nOn the degeneracy of $SU(3)_k$ topological phasesSep 01 2010The ground state degeneracy of an $SU(N)_k$ topological phase with $n$ quasiparticle excitations is a relevant quantity for quantum computation, condensed matter physics, and knot theory. It is an open question to find a closed formula for this degeneracy ... More\nHeight of records in partitions of a setAug 02 2019We study the restricted growth function associated with set partitions, and obtain exact formulas for the number of strong records with height one, the total of record heights over the set of partitions, and the number of partitions with a given maximal height ... More\nIndependent sets in certain classes of (almost) regular graphsOct 23 2003We enumerate the independent sets of several classes of regular and almost regular graphs and compute the corresponding generating functions. We also note the relations between these graphs and other combinatorial objects and, in some cases, construct ... 
More\nNormal ordering problem and the extensions of the Stirling grammarAug 01 2013The purpose of this paper is to investigate the connection between context-free grammars and the normal ordering problem, and then to explore various extensions of the Stirling grammar. We present grammatical characterizations of several well known combinatorial ... More\nSome combinatorial arrays related to the Lotka-Volterra systemApr 02 2014The purpose of this paper is to investigate the connection between the Lotka-Volterra system and combinatorics. We study several context-free grammars associated with the Lotka-Volterra system. Some combinatorial arrays, involving the Stirling numbers ... More\nCounting descent pairs with prescribed colors in the colored permutation groupsSep 17 2007We define new statistics, (c, d)-descents, on the colored permutation groups Z_r \\wr S_n and compute the distribution of these statistics on the elements in these groups. We use some combinatorial approaches, recurrences, and generating functions manipulations ... More\nPassing through a stack $k$ times with reversalsAug 13 2018We consider a stack sorting algorithm where only the appropriate output values are popped from the stack and then any remaining entries in the stack are run through the stack in reverse order. We identify the basis for the $2$-reverse pass sortable permutations ... More\nA notion of graph likelihood and an infinite monkey theoremApr 12 2013We play with a graph-theoretic analogue of the folklore infinite monkey theorem. We define a notion of graph likelihood as the probability that a given graph is constructed by a monkey in a number of time steps equal to the number of vertices. We present ... More\nNonlinear differential equation for Korobov numbersApr 15 2016In this paper, we present nonlinear differential equations for the generating functions for the Korobov numbers and for the Frobenius-Euler numbers. As an application, we find an explicit expression for the nth derivative of 1/log(1 + t).\nThe descent statistic on signed simsun permutationsMay 09 2016May 17 2016In this paper we study the generating polynomials obtained by enumerating signed simsun permutations by the number of descents. Properties of the polynomials, including the recurrence relations and generating functions are studied.\nRecurrence relations for patterns of type $(2,1)$ in flattened permutationsJun 14 2013We consider the problem of counting the occurrences of patterns of the form $xy-z$ within flattened permutations of a given length. Using symmetric functions, we find recurrence relations satisfied by the distributions on $\\mathcal{S}_n$ for the patterns ... More\nCounting subwords in flattened permutationsJul 13 2013In this paper, we consider the number of occurrences of descents, ascents, 123-subwords, 321-subwords, peaks and valleys in flattened permutations, which were recently introduced by Callan in his study of finite set partitions. For descents and ascents, ... More\nCombinatorics of Dumont differential system on the Jacobi elliptic functionsMar 02 2014In this paper, we relate Jacobi elliptic functions to several combinatorial structures, including the longest alternating subsequences, alternating runs and descents. The Dumont differential system on the Jacobi elliptic functions is defined by $D(x)=yz,~D(y)=xz,~D(z)=xy$. ... 
More\nSmooth words and Chebyshev polynomialsSep 03 2008A word $\\sigma=\\sigma_1...\\sigma_n$ over the alphabet $[k]=\\{1,2,...,k\\}$ is said to be 'smooth' if there are no two adjacent letters with difference greater than 1. A word $\\sigma$ is said to be 'smooth cyclic' if it is a smooth word and in addition ... More\nOn the X-rays of permutationsJun 16 2005The X-ray of a permutation is defined as the sequence of antidiagonal sums in the associated permutation matrix. X-rays of permutations are interesting in the context of Discrete Tomography since many types of integral matrices can be written as linear ... More\nNew equivalences for pattern avoiding involutionsAug 10 2007Jan 22 2008We complete the Wilf classification of signed patterns of length 5 for both signed permutations and signed involutions. New general equivalences of patterns are given which prove Jaggard's conjectures concerning involutions in the symmetric group avoiding ... More\nMatchings Avoiding Partial PatternsApr 17 2005We show that matchings avoiding certain partial patterns are counted by the 3-Catalan numbers. We give a characterization of 12312-avoiding matchings in terms of restrictions on the corresponding oscillating tableaux. We also find a bijection between ... More\nOrion Routing Protocol for Delay-Tolerant NetworksMay 10 2012In this paper, we address the problem of efficient routing in delay-tolerant networks. We propose a new routing protocol dubbed ORION. In ORION, only a single copy of a data packet is kept in the network and transmitted, contact by contact, towards ... More\nFamily of Subharmonic Functions and Separately Subharmonic FunctionsMay 31 2016Jul 31 2016We prove that a separately subharmonic function is subharmonic outside a closed set whose projections are closed nowhere dense with no bounded components. It generalizes a result due to U. Cegrell and A. Sadullaev. Then, given such a set, we construct ... More" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85543853,"math_prob":0.9840496,"size":20228,"snap":"2019-35-2019-39","text_gpt3_token_len":4736,"char_repetition_ratio":0.15456884,"word_repetition_ratio":0.0591133,"special_character_ratio":0.226666,"punctuation_ratio":0.10775259,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9927344,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-26T01:49:38Z\",\"WARC-Record-ID\":\"<urn:uuid:16697007-459b-43aa-84b0-3075fb0a8f23>\",\"Content-Length\":\"307657\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0411d47e-0c85-437a-a987-86e50ce661e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:cc01a9a6-f936-4aa7-860c-76e01dade85c>\",\"WARC-IP-Address\":\"54.230.193.123\",\"WARC-Target-URI\":\"https://searxiv.org/search?author=Toufik%20Mansour\",\"WARC-Payload-Digest\":\"sha1:6SRXYTGWGAVZFLTDUX2ZDTN7RC5VP5JO\",\"WARC-Block-Digest\":\"sha1:HUYQPOVK3BP3EXXRTKIWM5KVRPWPEJ5O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027330913.72_warc_CC-MAIN-20190826000512-20190826022512-00352.warc.gz\"}"}
https://wiki.bi0s.in/basics/python/basics/
[ "# Basics¶\n\n## Input-Output Methods¶\n\nFor taking input in python 2, we use raw_input().\n\n$python >>>raw_input() this is taken as input 'this is taken as input' In python3, there is no need for we use input(). $ python3\n>>>input()\nthis is taken as input\n'this is taken as input'\n\nBy default input is taken as string.\n\n>>>n=input()\n3\n>>>print(type(n))\n<class 'str'>\n\nTo convert the datatype, we need to typecast it separately.\n\nTypecast: Converting the datatype of a variable from one to another.\n\n>>>n=int(input())\n3\n>>>print(type(n))\n<class 'int'>\n\nIn a similar way we can change the data type from int to float and many more.\n\n• In all the above cases, if python 2 is used then, in place of input() use raw_input().\n\n• For print statememt in python3, the synatx is print(\"Secure\"). While in python 2, there is no need of parenthesis () but it won't show any error even if we use parenthesis (). It is print \"Secure\" or print (\"Secure\" ) .\n\n## Operations¶\n\n>>> a=5\n>>> b=6\n>>>a + b\n11\n>>>a - b\n-1\n>>>a * b\n30\n>>>a / b\n0.8333333333333334\n>>>a // b\n0\n>>>a % b\n5\n>>>a ** b\n15625\n\n• // shows the quotient.\n• % show the remainder.\n• ** is a to the power b or a raised to b.\n\nIn python 2 when '/ ' operation is used, the integer is printed while in case python 3 it will show value after decimal place also. So to obtain the float value in python 2 you need to specify at least one, either denominator or numerator as float.\n\n$python >>> 3/2 1 >>>3.0/2 1.5 ## Comparison operators:¶ • a == b Equal to condition. • a < b Less than condition. • a > b Greater than condition. • a <= b Less than or equal to condition. • a >= b Greater than or equal to condition. • a != b Not equal to condition. ## Assignment Operator '='¶ a=b the value of b is assigned to a. ## Binary Operations¶ To understand this, use python built in function bin(n). It will show the binary form of number n. >>> bin(3) '0b11' >>> bin(2) '0b10' #### a&b bitwise AND of a,b:¶ >>> 3&2 2 Each bit is taken taken and AND operation is performed. so the result we get is 0b010, which is 2.", null, "#### a|b bitwise OR of a,b:¶ >>> 3|2 3 Each bit is taken and OR operation is performed. so the result we get is 0b11, which is 3.", null, "#### a^b bitwise XOR of a,b:¶ >>> 3^2 1 Each bit is taken and XOR operation is performed. In XOR, if both bits are same then the result is 0, else 1. So 1^1 is 0 while 1^0 is 1. Hence, the result we get for 3^2 (0b11^0b10) is 0b001, which is 1.", null, "### a<<b Left Shift¶ a<<b it will shift the bits of a in binary format to left, this shift is done b times: >>> 3<<2 12 >>>bin(12) 0b1100 Binary representation of 3 is 0b11, which is shifted twice to left. So the result is 0b1100 that is 12.", null, "### a>>b Right Shift¶ a>>b it will shift the bits of a in binary format to right, this shift is done b times: >>> 3>>1 1 Binary representation of 3 is 0b11, which is shifted once to right. So the result is 0b1 that is 1.", null, "Python can also understand logical operations when written in english : - and - or - not - in ## Conditional Statements¶ ### if statements¶ If condition statements are to be used when you have a set of statements which is to be executed when a particular condition is satisfied. For example if a person's age is above 18, he is eligible to vote. If not, he is not eligible to vote. Example 1 $ python3\n>>>age=int(input())\n32\n>>>if(age>=18):\n... print(“you are eligible to vote”)\n...else:\n... 
print(“you are not eligible to vote”)\n\nPress enter button twice for output.\n\nOutput\n\nyou are eligible to vote\n>>>\n\nExample 2\n\n$python >>>age=int(raw_input()) 32 >>>if age>=18: ... print “you are eligible to vote” ...else: ... print “you are not eligible to vote” Press enter button twice for output. Output you are eligible to vote >>> Example 3 If we have more than one condition to check we can use elif statements. $ python3\n>>>age=int(input())\n32\n>>>if(age<=12):\n... print(“you are a kid”)\n...elif(age>12 and age <= 19):\n... print (“you are a teenager”)\n...elif(age >19 and age < 30 ):\n...else:\nprint(“you are a senior citizen”)\n\nPress enter button twice for output.\n\nOutput\n\nyou are a senior citizen\n>>>\n\nExample 4\n\n$python >>>age=int(raw_input()) 32 >>>if age<=12 : ... print “you are a kid” ...elif age>12 and age <= 19 : ... print “you are a teenager” ...elif age >19 and age < 30 : ... print “you are an adult” ...else: print “you are a senior citizen” Press enter button twice for output. Output you are a senior citizen >>> ## Loops ### For loop It is used when you need to execute a set of statements n times. Syntax : sh for variable in range(start,end,incrementation): statements If not mentioned then starting is taken as 0 and incrementation as 1 to the specified end point. This code is valid for both python 3 and python 2. Example 1 $ python\n>>>for i in range(3):\n... print(“welcome to the world of security”)\n\n\nPress enter button twice for output. Output\n\nwelcome to the world of security\nwelcome to the world of security\nwelcome to the world of security\n>>>\n\nExample 2\n\nThis code is valid for both python 3 and python 2.\n\n$python3 >>>i = \"teambi0s\" >>>for i in c: ... print(i) Press enter button twice for output. Output t e a m b i 0 s >>> ### While loop¶ It is used to execute a set of statements until the condition is satisfied and hence the loop will end as the condition becomes false. Syntax : while(condition): statements Example 1 This code is valid for both python 3 and python 2. $ python3\n>>>flag=0\n>>>while(flag!=3):\n... print(flag)\n... flag=flag+1\n\nPress enter button twice for output.\n\nOutput\n\n0\n1\n2\n>>>\n\n\n### Nested Loops:¶\n\nLoop inside a loop is termed as nested.\n\nThis code is valid for both python 3 and python 2.\n\n\\$ python3\n>>>for i in range (3):\n... for j in range (2):\n... print(“this is nested”)\n\nPress enter button twice for output.\n\nOutput\n\nthis is nested\nthis is nested\nthis is nested\nthis is nested\nthis is nested\nthis is nested\n>>>\n\n\nIt will print 6 times." ]
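The snippets above are interactive-interpreter sessions; as a quick consolidation, the same typecasting and bitwise operations can be collected into one runnable Python 3 file. This script is our addition, not part of the original wiki page:

```python
# demo.py -- consolidates the typecasting and bitwise examples above (Python 3)

n = int("3")              # typecast: str -> int, like int(input())
print(type(n))            # <class 'int'>

a, b = 3, 2
print(bin(a), bin(b))     # 0b11 0b10
print(a & b)              # 2  (bitwise AND: 0b11 & 0b10 = 0b10)
print(a | b)              # 3  (bitwise OR:  0b11 | 0b10 = 0b11)
print(a ^ b)              # 1  (bitwise XOR: 0b11 ^ 0b10 = 0b01)
print(a << 2)             # 12 (left shift:  0b11 -> 0b1100)
print(a >> 1)             # 1  (right shift: 0b11 -> 0b1)
```

Running `python3 demo.py` prints the same values as the interpreter sessions above.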
[ null, "https://wiki.bi0s.in/basics/python/img/AND.png", null, "https://wiki.bi0s.in/basics/python/img/OR.png", null, "https://wiki.bi0s.in/basics/python/img/XOR.png", null, "https://wiki.bi0s.in/basics/python/img/LeftShift.png", null, "https://wiki.bi0s.in/basics/python/img/RightShift.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7701564,"math_prob":0.95657694,"size":4773,"snap":"2021-43-2021-49","text_gpt3_token_len":1406,"char_repetition_ratio":0.14091004,"word_repetition_ratio":0.14827202,"special_character_ratio":0.3400377,"punctuation_ratio":0.15525554,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9753615,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-22T02:56:30Z\",\"WARC-Record-ID\":\"<urn:uuid:dabf7a8e-b5c4-4df7-958d-6e10b02ecfc0>\",\"Content-Length\":\"87295\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bb54939d-00a3-44ac-85cc-2a868f95de1f>\",\"WARC-Concurrent-To\":\"<urn:uuid:58e5d155-8097-45a6-bb03-fdcadd50fc99>\",\"WARC-IP-Address\":\"104.21.14.171\",\"WARC-Target-URI\":\"https://wiki.bi0s.in/basics/python/basics/\",\"WARC-Payload-Digest\":\"sha1:OMAPIXMAFK2AI7ILSYCBY3POYN2KS5WX\",\"WARC-Block-Digest\":\"sha1:MXBKRIHGNOMFMEUC2ERQY2PWDG2OJ743\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585450.39_warc_CC-MAIN-20211022021705-20211022051705-00669.warc.gz\"}"}
http://bartleylawoffice.com/interesting/which-laws-can-be-combined-to-form-the-ideal-gas-law.html
[ "# Which laws can be combined to form the ideal gas law?\n\n## What laws are combined to make the ideal gas law?\n\nThe gas laws consist of three primary laws: Charles’ Law, Boyle’s Law and Avogadro’s Law (all of which will later combine into the General Gas Equation and Ideal Gas Law).\n\n## Which law can be derived from the ideal gas law?\n\nIdeal Gas Laws\n\nBoyles Law – states that for a given mass of gas held at a constant temperature the gas pressure is inversely proportional to the gas volume. Charles Law – states that for a given fixed mass of gas held at a constant pressure the gas volume is directly proportional to the gas temperature.\n\n## When would you use the ideal gas law instead of the combined gas law?\n\nWhenever it gives you conditions for one gas, and asks for conditions of another gas, you’re most likely going to use this Law. The Ideal Gas Law is a bit more advanced and deals with the kinetic molecular theory (conditions of an ideal gas). It may explicitly say “An ideal gas” or it may give you moles.\n\n## What 3 things does the combined gas law show relationships between?\n\nThe combined gas law shows the relationships among temperature, volume, and pressure.\n\n## What are the 5 gas laws?\n\nThe Gas Laws: Pressure Volume Temperature Relationships\n\n• Boyle’s Law: The Pressure-Volume Law.\n• Charles’ Law: The Temperature-Volume Law.\n• Gay-Lussac’s Law: The Pressure Temperature Law.\n• The Combined Gas Law.\n\n## Who discovered the ideal gas law?\n\nBenoît Paul Émile Clapeyron\n\n## What units are used in PV NRT?\n\nIn SI units, p is measured in pascals, V is measured in cubic metres, n is measured in moles, and T in kelvins (the Kelvin scale is a shifted Celsius scale, where 0.00 K = −273.15 °C, the lowest possible temperature). R has the value 8.314 J/(K.\n\nYou might be interested:  What is law of conservation of energy\n\n## Why is it called ideal gas law?\n\nAn ideal gas is a gas that conforms, in physical behaviour, to a particular, idealized relation between pressure, volume, and temperature called the ideal gas law. … A gas does not obey the equation when conditions are such that the gas, or any of the component gases in a mixture, is near its condensation point.\n\n## What does the ideal gas law describe?\n\nthe law that the product of the pressure and the volume of one gram molecule of an ideal gas is equal to the product of the absolute temperature of the gas and the universal gas constant.\n\n## How does the combined gas law work?\n\nThe combined gas law combines the three gas laws: Boyle’s Law, Charles’ Law, and Gay-Lussac’s Law. It states that the ratio of the product of pressure and volume and the absolute temperature of a gas is equal to a constant. … The constant k is a true constant if the number of moles of the gas doesn’t change.6 мая 2019 г.\n\n## Why is the combined gas law important?\n\nThe combined gas law allows you to derive any of the relationships needed by combining all of the changeable peices in the ideal gas law: namely pressure, temperature and volume.\n\n## What is r in PV NRT?\n\nThe units of the universal gas constant R is derived from equation PV=nRT . It stands for Regnault.25 мая 2014 г.\n\n## What is a real life example of combined gas law?\n\nAs an example, if you were to increase the pressure of a gas while keeping the volume constant, the temperature would increase. A pressure cooker takes advantage of gaseous behavior. It is a sealed container that prepares foods faster by cooking them at higher pressures." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91136354,"math_prob":0.9534075,"size":3288,"snap":"2023-40-2023-50","text_gpt3_token_len":742,"char_repetition_ratio":0.17326431,"word_repetition_ratio":0.0237691,"special_character_ratio":0.22110705,"punctuation_ratio":0.10814815,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9694198,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-30T07:41:31Z\",\"WARC-Record-ID\":\"<urn:uuid:be64f5c1-4bab-4c7b-bb95-a982cbc919cd>\",\"Content-Length\":\"66542\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7dcb632c-35ec-4d59-a02d-9e8cd7b61265>\",\"WARC-Concurrent-To\":\"<urn:uuid:0cf04364-b3e7-4e2f-b654-7a059ac50643>\",\"WARC-IP-Address\":\"172.67.157.116\",\"WARC-Target-URI\":\"http://bartleylawoffice.com/interesting/which-laws-can-be-combined-to-form-the-ideal-gas-law.html\",\"WARC-Payload-Digest\":\"sha1:6N3ULYJ5OVXU7YFMAJ7CEWTNPUEHC6KW\",\"WARC-Block-Digest\":\"sha1:TMLB2WCFC2DSRXQRYYJLGAHVQ5Y2OSM4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100172.28_warc_CC-MAIN-20231130062948-20231130092948-00850.warc.gz\"}"}
https://stats.stackexchange.com/questions/481079/calculating-effect-size-from-wilcoxon-w-value-how-to-get-the-z-value
[ "# Calculating Effect Size from Wilcoxon W value (how to get the z value)\n\nIm carrying out a meta-analysis where I have had to carry out my own wilcoxon test on the data. I am now calculating the independant effect sizes for each study to manually input to my excel spreadsheet.\n\nWhen I have carried out my Wilcoxon test this is my input:\n\nrear<-read.csv(file.choose())\nnames(rear)\nshapiro.test(rear$PI) wilcox.test(rear$PI~rear\\$ï..RearingCond)\n\n\nand my output just provides W = 82, p-value = 0.26\n\nhow do i now calculate my effect size and variances for this?\n\nThe simplest solution since you have the raw data is to carry out a Student's $$t$$ test and use the mean difference and its standard error from that as your effect size." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84909,"math_prob":0.87541,"size":480,"snap":"2020-34-2020-40","text_gpt3_token_len":125,"char_repetition_ratio":0.09453782,"word_repetition_ratio":0.0,"special_character_ratio":0.23333333,"punctuation_ratio":0.116504855,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98283446,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T06:07:36Z\",\"WARC-Record-ID\":\"<urn:uuid:f2945ea1-c915-4bdd-8bb1-95107c05b0a6>\",\"Content-Length\":\"145239\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:89a49174-6c4d-4eb0-a1c9-ccc3d994f26a>\",\"WARC-Concurrent-To\":\"<urn:uuid:79d1627e-e52a-4d47-9e41-657bc6747416>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/481079/calculating-effect-size-from-wilcoxon-w-value-how-to-get-the-z-value\",\"WARC-Payload-Digest\":\"sha1:XDXOGJQA5BLC64H3FPIVNR2NYRLBMQOA\",\"WARC-Block-Digest\":\"sha1:36IGMG3ITF6CLY3ALKPJBFIEZF76ZOCS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439740679.96_warc_CC-MAIN-20200815035250-20200815065250-00123.warc.gz\"}"}
https://open.library.ubc.ca/cIRcle/collections/ubctheses/831/items/1.0060930
[ "# Open Collections\n\n## UBC Theses and Dissertations", null, "## UBC Theses and Dissertations\n\n### Studies in LEED crystallography Hengrasmee, Sunantha 1980\n\nMedia\n831-UBC_1980_A1 H36.pdf [ 8.88MB ]\nJSON: 831-1.0060930.json\nJSON-LD: 831-1.0060930-ld.json\nRDF/XML (Pretty): 831-1.0060930-rdf.xml\nRDF/JSON: 831-1.0060930-rdf.json\nTurtle: 831-1.0060930-turtle.txt\nN-Triples: 831-1.0060930-rdf-ntriples.txt\nOriginal Record: 831-1.0060930-source.json\nFull Text\n831-1.0060930-fulltext.txt\nCitation\n831-1.0060930.ris\n\n#### Full Text\n\n`STUDIES IN LEED CRYSTALLOGRAPHY by SUNANTHA HENGRASMEE B.Sc.(Hons), The Un i v e r s i t y of Otago, 1971 M.Sc. , The Un i v e r s i t y of Otago, 1972 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY i n THE FACULTY OF GRADUATE STUDIES (Department of Chemistry) We accept t h i s thesis as conforming to the required standard THE UNIVERSITY OF BRITISH COLUMBIA J u l y , 1980 SUNANTHA HENGRASMEE, 1980 In presenting this thesis in partial fulfilment of the requirements f o r an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of t h i s t he s i s for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or p u b l i c a t i o n of this thesis for financial gain shall not be allowed without my written permission. Department of ^trnii-^-^y The University of British Columbia 2075 Wesbrook P l a c e Vancouver, Canada V6T 1W5 Date Abstract This thesis i s involved with the use of low-energy electron d i f f r a c t i o n (LEED) for determining the geometrical structures of well-characterized surfaces of s i n g l e c r y s t a l s . S p e c i f i c applications are to surfaces of rhodium, both clean and when containing adsorbed species. A preliminary problem concerned discrepancies reported previously i n the d e t a i l s of the geometrical structures for the clean (100) and (111) surfaces when using rhodium po t e n t i a l s from either a band structure c a l c u l a t i o n or from the l i n e a r superposition of charge density procedure for a metal c l u s t e r . A correction has now been made i n the c a l c u l a t i o n of phase s h i f t s for the band structure p o t e n t i a l , and r e i n v e s t i g a t i o n s of the (100), (110) and (111) surface of rhodium with t h i s p o t e n t i a l resolve the discrepancies. These r e s u l t s now support the suggestion, as shown previously i n t h i s laboratory for C u ( l l l ) , that the superposition p o t e n t i a l provides a good approximation to a band struc-ture p o t e n t i a l for the purpose of LEED crystallography. In the s t r u c t u r a l determinations made here, the degree of correspondence between i n t e n s i t y versus energy curves for d i f f e r e n t beams from experiment and from m u l t i p l e - s c a t t e r i n g c a l c u l a t i o n s were assessed with the r e l i a b i l i t y -index r ^ proposed by Zanazzi and Jona. A new aspect considered involved the use of t h i s index for determining the non-structural parameters required i n the m u l t i p l e - s c a t t e r i n g c a l c u l a t i o n s . 
Included in the latter for Rh(111) are variations of the imaginary part of the constant potential (V_0i) between the muffin-tin spheres and the surface Debye temperature (θ_D,surf). Structural conclusions from r_r are compared with visual analyses wherever possible, and this work generally supports the use of the Zanazzi-Jona index in LEED crystallography.

The experimental part of this study involved the (100) and (110) surfaces of rhodium. A series of diffraction patterns were observed for the chemisorption of O2 and H2S. Intensity versus energy curves were measured for the available diffracted beams for the surface structures designated Rh(100)-(3x1)-O, Rh(100)-p(2x2)-S and Rh(110)-c(2x2)-S. The latter two systems were analyzed by multiple-scattering calculations (using the renormalized forward scattering and layer-doubling methods) and surface structures determined. In each case S atoms adsorb on the centre sites; on Rh(100) S bonds to four neighbouring Rh atoms at a distance of 2.30 Å (very close to the Pauling single-bond value 2.29 Å), and on Rh(110) each S atom is 2.12 Å from the Rh atom directly below in the second layer and 2.45 Å from the four neighbouring Rh atoms in the top metallic layer.

An investigation was also made for the use in LEED crystallography of the quasidynamical method recently proposed by Van Hove and Tong. This scheme includes interlayer multiple-scattering properly, but neglects multiple-scattering within individual layers, and has the potential for considerable savings in computing time and core storage. This method was investigated for the clean and sulphur-adsorbed (100) and (110) surfaces, and results compared with the more-complete multiple-scattering methods. The quasidynamical method appears to have some promise for making initial selections of the most significant trial structures prior to the more-detailed testing with full multiple-scattering calculations.
Table of Contents

Abstract; Table of Contents; List of Tables; List of Figures; Acknowledgement.

Chapter 1: Introduction — 1.1 Modern Surface Science; 1.2 Introduction to Low Energy Electron Diffraction; 1.3 Surface Crystallography; 1.4 Auger Electron Spectroscopy; 1.5 Aims of Thesis.

Chapter 2: Calculation of LEED Intensities — 2.1 Characteristics of I(E) curves; 2.2 Physical Parameters required in LEED Theory; 2.3 T-Matrix Method; 2.4 Bloch Wave Method; 2.5 Perturbation Methods: (a) Layer Doubling Method, (b) Renormalized Forward Scattering Method; 2.6 Further Multiple Scattering Methods; 2.7 General Aspects of Computations: (a) Structural Parameters and Use of Symmetry, (b) Program Flow; 2.8 Evaluation of Results: (a) Introduction, (b) Zanazzi and Jona's Proposals, (c) Further Developments.

Chapter 3: Preliminary Work — 3.1 General Experimental Procedures: (a) LEED Apparatus, (b) Crystal Preparation, (c) Detection of Surface Impurities, (d) Intensity Measurements; 3.2 Structural Determinations of Low Index Surfaces of Rhodium: (a) Previous LEED Intensity Calculations for Rhodium Surfaces, (b) Further Studies; 3.3 Studies with the Reliability Index of Zanazzi and Jona: (a) Introduction, (b) Relations between Reliability Index and the Imaginary Potential, (c) Reliability Index and the Variation of Surface Debye Temperature; 3.4 Studies of Adsorption of some Gaseous Molecules on Rhodium Surfaces: (a) Bibliography of Overlayer Structures on Rhodium Surfaces, (b) Adsorption of O2 on Rh(100).

Chapter 4: LEED Analysis of Rh(100)-p(2x2)-S Surface Structure — 4.1 Introduction; 4.2 Adsorption of H2S on Rh(100); 4.3 Computational Scheme; 4.4 Results; 4.5 Discussion.

Chapter 5: LEED Analysis of the Rh(110)-c(2x2)-S Surface Structure — 5.1 Introduction; 5.2 Experimental; 5.3 Calculations; 5.4 Results; 5.5 Discussion.

Chapter 6: Studies with the Quasidynamical Method — 6.1 Introduction; 6.2 Calculations; 6.3 Results and Discussion: (a) Rh(110) and Rh(110)-c(2x2)-S, (b) Rh(100) and Rh(100)-p(2x2)-S; 6.4 Concluding Remarks.

References. Appendices.

List of Tables

2.1 Numbers of symmetrically-inequivalent beams actually used in calculation of various surface structures. The models for the overlayer structures are designated as in figures 1.8 and 2.8.
3.1 Observed and calculated Auger transition energies for rhodium.
3.2 Structural determination of low index surfaces of rhodium (Watson et al.).
3.3 Structural determination of low index surfaces of rhodium (this work).
3.4 Conditions for best agreement between experimental I(E) curves at normal incidence for Rh(111) and curves calculated with the potential [V_Rh^MJW], according to the reliability indices r_r and r_m for different values of α.
3.5 Surface structures reported for adsorption of small gaseous molecules on low index surfaces of rhodium.
4.1 Conditions for minima of r_r for different models of Rh(100)-p(2x2)-S.
4.2 Effective radii of chemisorbed sulphur atoms on various metal surfaces.
4.3 Comparisons of M-X bond distances for chalcogen atoms adsorbed on (100) surfaces of fcc metals with Pauling's single-bond lengths.
6.1 Comparisons of conditions for minimum r_r for various surface structures obtained from evaluating experimental I(E) curves with corresponding curves from multiple-scattering calculations and from quasidynamical calculations.
6.2 A demonstration of the correspondence between peak positions in I(E) curves calculated with the quasidynamical method for the four models of Rh(110)-c(2x2)-S at the specified S-Rh interlayer spacing and those given by experiment and by the corresponding full multiple-scattering calculations. In the entries for each beam, the denominator specifies the number of significant peaks in the relevant I(E) curve from experiment or from the full multiple-scattering calculations, and the numerator gives the number of those peaks that are matched to within 7 eV by the quasidynamical calculations.
6.3 A demonstration of the correspondence between peak positions in I(E) curves calculated with the quasidynamical method for the four models of Rh(100)-p(2x2)-S at the specified S-Rh interlayer spacing and those given by experiment and by the corresponding full multiple-scattering calculations. In the entries for each beam, the denominator specifies the number of significant peaks in the relevant I(E) curve from experiment or from the full multiple-scattering calculations, and the numerator gives the number of those peaks that are matched to within 7 eV by the quasidynamical calculations.

List of Figures

1.1 Schematic diagram of the mean free path length L (Å) of electrons in a metallic solid as a function of energy (eV).
1.2 Schematic energy distribution N(E) of back-scattered electrons for a primary beam of energy E_0.
1.3 (a) Schematic diagram of the LEED experiment. (b) The principle of the formation of a diffraction pattern in a LEED experiment.
1.4 Conventions for the incident angle of an electron beam on a surface; θ is a polar angle relative to a surface normal and φ an azimuthal angle relative to a major crystallographic axis in the surface plane.
1.5 I(E) curves for the specular beam from Ni(100) at θ=3°. The bars indicate energies where primary Bragg conditions are satisfied (after Andersson).
1.6 A schematic comparison of overlayer and substrate regions, both of which are diperiodic in the x and y directions.
1.7 Schematic diffraction patterns of clean and overlayer structures.
1.8 Four possible structural models for Rh(110)-c(2x2)-S which are consistent with the observed diffraction pattern. The adsorbed sulphur atoms are represented by the filled circles.
1.9 The production of an L2VV Auger electron in aluminum. X-ray energy levels are indicated relative to the Fermi level.
1.10 Auger spectrum of a heavily contaminated Rh(110) surface, E_0 = 1.5 keV, I_0 = 10 microamps.
2.1 Muffin-tin potential (a) in cross-section as contours, (b) along xx′. V_0 is the constant intersphere potential.
2.2 Illustration of the relationship between energies measured with respect to the vacuum level and those measured with respect to the lowest level of the conduction band.
2.3 Muffin-tin model of an adsorbate-covered surface (after Marcus et al.).
2.4 Schematic representation of a set of plane waves incident from the left and multiply scattered by a plane of ion-cores.
2.5 Schematic diagram of transmission and reflection matrices at the α subplane. The broken lines are the central lines between the subplanes.
2.6 Stacking of planes to form a crystal slab and illustrate the layer-doubling method. Planes A and B are first stacked to form the two-layer slab C; the process is continued to form a four-layer slab. (After Tong.)
2.7 (a) Illustration of the renormalized forward scattering method. Vertical lines represent layers. Each triplet of arrows represents the complete set of plane waves that travel from layer to layer. (b) Propagation steps of the inward-travelling waves. (c) Propagation steps of the outward-travelling waves. (After Van Hove and Tong [81].)
2.8 Schematic diagram of three simple models for Rh(100)-p(2x2)-S. In reciprocal space, sets of symmetrically equivalent beams are indicated by a common symbol.
2.9 Flowchart showing principal steps in a multiple-scattering LEED calculation, using the RFS or layer doubling programs.
2.10 Plots for Cu(111) of (r_r)_i for 9 individual beams versus Δd% with V_0r = -9.5 eV. The dashed line shows the reduced reliability index (r_r) for the total 16 beams. (After Watson et al.)
2.11 Contour plot for Cu(111) of r_r versus Δd% and V_0r. (After Watson et al.)
3.1 (a) Schematic of the Varian FC12 UHV chamber. (b) Diagrammatic representation of the pumping system: IP = ion pump; TSP = titanium sublimation pump; SP = sorption pump.
3.2 (a) Schematic diagram of the electron optics used for LEED experiments. (b) Diagram showing sample mounted on a tantalum supporting ring. (c) Electron bombardment sample heater. Hatched lines represent stainless steel parts while the stipple pattern indicates the ceramic insulator.
3.3 Auger spectra of clean Rh(110) surface as a function of crystal temperature, indicating carbon concentrated around the surface region at 200°C and diffused back into the bulk at 300°C.
3.4 Schematic diagram of LEED optics used as a retarding field analyzer for Auger electron spectroscopy: MCA = multichannel analyzer.
3.5 Schematic diagram of the apparatus used to analyse the photographic negatives of LEED patterns.
3.6 Energy dependence of rhodium phase shifts (ℓ=0-7) for the potential [V_Rh^MJW].
3.7 (a) Schematic diagrams of the (100), (110) and (111) surfaces of rhodium. The dotted circles represent rhodium atoms in the second layer. (b) The corresponding LEED patterns indicating the beam notation as used in text.
3.8 The experimental I(E) curve for the (01) beam at normal incidence from the Rh(111) surface compared with five corresponding curves calculated with the potential [V_Rh^MJW] and Δd% = -2.5% for the parameter α varying from 1.17 to 2.34.
3.9 Contour plot of r_r versus θ_D,surf and V_0r for normal incidence data from Rh(111) where the calculations use the potential [V_Rh^MJW] with α=1.76 and θ_D,bulk=480 K.
3.10 Contour plot of r_r versus θ_D,surf and Δd% for normal incidence data from Rh(111) where the calculations use the potential [V_Rh^MJW] with α=1.76 and θ_D,bulk=480 K.
3.11 The experimental I(E) curve for the (01) beam at normal incidence from the Rh(111) surface compared with five corresponding curves calculated with the potential [V_Rh^MJW], Δd% = -2.5%, and α = 1.76 for the parameter θ_D,surf varying from 200 to 600 K.
3.12 Photographs of some p(2x2) and (3x1) LEED patterns observed at normal incidence from the adsorption of oxygen on a Rh(100) surface. (a) Rh(100)-p(2x2)-O at 70 eV; (b) Rh(100)-(3x1)-O, single domain at 174 eV; (c) Rh(100)-(3x1)-O, 2 equally populated domains at 100 eV; (d) Rh(100)-(3x1)-O, 2 equally populated domains at 152 eV.
4.1 Photographs of LEED patterns observed at normal incidence from adsorption of S on Rh(100) surface. (a) Rh(100)-c(2x2)-S at 80 eV; (b) Rh(100)-p(2x2)-S at 72 eV; (c) Rh(100)-p(2x2)-S at 114 eV; (d) Rh(100)-p(2x2)-S at 168 eV.
4.2 Auger spectra of Rh(100) surfaces with 1.5 keV and 10 microamp beam at different stages during the preparation of Rh(100)-p(2x2)-S.
4.3 Beam notation for the LEED pattern of Rh(100)-p(2x2)-S structure.
4.4 Comparison for the (1/2 1/2) and (01) beams of I(E) curves from two different experiments measured at normal incidence.
4.5 Comparison of experimental I(E) curves for various integral- and fractional-order diffracted beams from Rh(100)-p(2x2)-S with the calculated curves for S adsorbed on the 4F, 2F and 1F sites at the topmost Rh-S interlayer spacing indicated for each curve.
4.6 Comparison of experimental I(E) curves for the (0 1/2) and (1/2 1/2) beams from the Rh(100)-p(2x2)-S surface with those calculated for S adsorbed on the 4F site for a range of topmost Rh-S interlayer spacings.
4.7 Contour plots of r_r for Rh(100)-p(2x2)-S versus V_0r and Rh-S interlayer spacing for (a) 4F model, (b) 2F model, and (c) 1F model. Error bars indicate standard errors as defined in chapter 2.
5.1 Auger spectra for a Rh(110) surface when cleaned and when containing a c(2x2) overlayer of sulphur.
5.2 Photographs of LEED patterns observed at normal incidence from adsorption of S on Rh(110) surface. (a) Rh(110) at 144 eV; (b) Rh(110)-c(2x2)-S at 78 eV; (c) Rh(110)-c(2x2)-S at 102 eV; (d) Rh(110)-c(2x2)-S at 150 eV.
5.3 Beam notation for the LEED pattern from the Rh(110)-c(2x2)-S surface structure.
5.4 Experimental I(E) curves for two sets of beams which are expected to be equivalent for the Rh(110)-c(2x2)-S structure.
5.5 Comparison of some experimental I(E) curves from Rh(110)-c(2x2)-S with those calculated for the four structural models over a range of topmost interlayer spacings: (a) (01) beam, (b) (10) beam, and (c) (3/2 1/2) beam.
5.6 Comparison of experimental I(E) curves for some integral- and fractional-order beams from Rh(110)-c(2x2)-S with those calculated for the 4F model with sulphur either 0.75 or 0.85 Å above the topmost rhodium layer.
5.7 Contour plots of r_r for Rh(110)-c(2x2)-S versus V_0r and Rh-S interlayer spacing for four different structural models.
5.8 Schematic specification of interatomic distances in the vicinity of an overlayer sulphur atom in the surface structure Rh(110)-c(2x2)-S. Distances in Angstroms.
5.9 Interatomic distances for the specification of hard-sphere radii in the neighbourhood of an oxygen atom in the Fe(100)-(1x1)-O structure. Distances in Angstroms. (After Legg et al. [153].)
6.1 Comparison of experimental I(E) curves for normal incidence on Rh(110) with those calculated with the quasidynamical method and the full multiple-scattering method when the topmost interlayer spacing equals the bulk value (0%) and when it is contracted by 10%.
6.2 Contour plots of r_r for Rh(110)-c(2x2)-S versus V_0r and the Rh-S interlayer spacing for four different structural models calculated with the quasidynamical method.
6.3 Comparison of I(E) curves measured for the (01) and (3/2 3/2) diffracted beams for normal incidence on Rh(110)-c(2x2)-S with those calculated by the quasidynamical method and by the full multiple-scattering method for the four structural models described in text.
6.4 Comparisons of some experimental I(E) curves for fractional-order beams for normal incidence on Rh(110)-c(2x2)-S and Rh(100)-p(2x2)-S with those calculated for the centre adsorption sites with the quasidynamical method and with the full multiple-scattering method. The topmost Rh-S interlayer spacings in the quasidynamical calculations are 1.15 Å and 1.3 Å for Rh(110)-c(2x2)-S and Rh(100)-p(2x2)-S respectively; the corresponding values for the multiple-scattering calculations are 0.75 Å and 1.3 Å.
6.5 Comparisons of some experimental I(E) curves for normal incidence on Rh(100) with those calculated with the quasidynamical method and with the full multiple-scattering method.
6.6 Contour plots of r_r for Rh(100)-p(2x2)-S versus V_0r and the Rh-S interlayer spacing for the 4F and 2F structural models calculated by the quasidynamical method: (a) comparisons with all integral- and fractional-order beams; (b) comparisons with fractional-order beams only.
6.7 Comparisons of I(E) curves measured for the (01) and (1/2 3/2) diffracted beams for normal incidence on Rh(100)-p(2x2)-S with those calculated by the quasidynamical method and by the full multiple-scattering method for three possible structural models.

Acknowledgement

It has been a rewarding experience to work under Professors K.A.R. Mitchell and D.C. Frost during the course of this work. Their guidance and encouragement have provided invaluable support, and for this I give them my sincere thanks. I am very grateful to Dr. C.W. Tucker (General Electric Corporation) for providing a Rh(100) crystal, to Dr. E. Zanazzi and Professor F. Jona (State University of New York, Stony Brook) for providing their reliability-index programs, and to Dr. M.A. Van Hove (University of California at Berkeley) and Dr. S.Y. Tong (University of Wisconsin) for copies of their multiple-scattering and quasidynamical computer programs. I would like to acknowledge the contributions of every member of the surface science group: in the past, Dr. R.W. Streater, and at present, T.W. Moore and Dr. S.J. White, for experimental assistance, stimulating discussion and in particular for their comments during the preparation of this thesis. Among these, I owe special thanks to Dr. F.R. Shepherd and Dr. P.R. Watson, who assisted and collaborated in this work throughout the time they were here. I am indebted to many members of the mechanical and electrical workshops who have contributed so much in maintaining the working conditions of the instruments. I am very grateful to Bill Ng for support and useful suggestions and especially for his professional job in typing this thesis. Finally, but foremost, a deep sense of gratitude and love is directed toward my husband, Dhiti Hengrasmee, who has been concerned with my progress and spiritually supported me throughout the course of my study. To him, I dedicate this thesis.

CHAPTER 1: Introduction

1.1 Modern Surface Science

Studies of the properties of solid surfaces have assumed great interest over the past decade, in part because such surfaces have dominant roles in various technological processes (e.g. friction and wear, electronic devices and heterogeneous catalysis) [1,2]. Traditional research emphasized the properties of real surfaces, usually of polycrystalline materials, which could not be well-characterized at the atomic level. However, modern surface science has introduced the "clean surface" approach, where carefully characterized surfaces are studied with the objective of developing principles which can lead to better understandings of the atomistic aspects of surface processes, including those of technological interest [4,5]. In the clean surface approach, single crystals are used and the properties of surfaces corresponding to well-defined crystallographic planes are studied under conditions such that the surface is not contaminated by unwanted impurities. This requires experiments to be carried out under ultra-high vacuum (<10⁻⁹ torr). The necessity for this provision follows from the kinetic theory, which predicts that, for an ambient pressure of 10⁻⁶ torr, a surface can be covered by an adsorbed monolayer in 1 second, assuming that all colliding molecules stick to the surface. With the availability of ultra-high vacuum facilities, many experimental techniques have been developed recently for the characterization of solid surfaces with regard to chemical composition, geometrical and electronic structure as well as chemical bonding, vibrational structure and energy exchange with impinging molecules. Among the techniques available,
The de o Broglie hypothesis r e l a t e s electron energy (E i n eV) to wavelength (X i n A) according to * =JM3 ; (1.1, electrons i n the low-energy range therefore have wavelengths which are comparable with i n t e r l a y e r spacings i n the s o l i d . Low-energy electrons are p a r t i c u l a r l y \"surface s e n s i t i v e \" because they experience strong i n -e l a s t i c scatterings i n s o l i d s . A h e l p f u l parameter f o r discussing i n e l a s t i c s c a t t e r i n g i s the electron mean free path length (L) which can be expressed i n terms o f I = I Q exp [ J / L ] , (1.2) where the incident i n t e n s i t y I at a p a r t i c u l a r energy i s attenuated to I on passage through a distance £. The general form of the dependence of the mean free path length on electron energy i s shown i n f i g u r e 1.1. Electrons -5-1,000-1 ioo-4 °< Figure 1.1: 100,000 Schematic diagram of the mean free path length L (A) of electrons i n a m e t a l l i c s o l i d as a function of energy (eV). Ul Z true secondary elastic peak E N E R G Y Figure 1,2: Schematic energy d i s t r i b u t i o n N(E) of back-scattered electrons for a primary beam of energy E Q . -6-i n the low-energy range are associated with values of L of just a few o Angstroms, and therefore they are i d e a l l y suited f o r i n v e s t i g a t i o n of the top few layers of a s o l i d . Further information on electron mean free path lengths has been reviewed by Brundle , Ibach and Powell . A monoenergetic beam of low-energy electrons incident upon a s o l i d surface t y p i c a l l y gives an energy d i s t r i b u t i o n for the back-scattered electrons of the type shown in figure 1.2. The narrow \" e l a s t i c peak\" on the ri g h t hand side involves the electrons which are studied i n the conventional LEED experiment. This peak includes the genuinely e l a s t i c a l l y - s c a t t e r e d electrons, as well as those electrons which have undergone phonon s c a t t e r i n g with small energy changes ( ^ 0.1 eV ). This l a t t e r group of electrons can be r e f e r r e d to as q u a s i e l a s t i c electrons. T y p i c a l l y only 1-5% of the incident electrons contribute to the \" e l a s t i c peak\". Most electrons experience strong i n e l a s t i c s c a t t e r i n g , associated e s p e c i a l l y with s i n g l e - e l e c t r o n and plasmon excitations [21,22], and those excitations contribute to the comparatively short mean free path length indicated i n figure 1.1. The emission of Auger -12 electrons, which t y p i c a l l y corresponds to a current of ~10 A on a back-_7 ground of -10 A, appears as small peaks superimposed on a slowly-varying background i n the intermediate range of figu r e 1.2. Peaks due to Auger electrons can be distinguished from loss peaks due to plasmon or s i n g l e - e l e c t r o n excitations because the former occur at energies which are independent of the - primary electron energy. The large peak at low energy i n figure 1.2 involves the so-called \"true secondary\" electrons which are associated with a serie s of i n e l a s t i c scatterings i n a cascade-type process . -7-The p r i n c i p l e of the LEED experiment i s i l l u s t r a t e d i n fig u r e 1.3a. The incident electrons are scattered by the surface region and the e l a s t i c a l l y back-scattered electrons are separated from others by energy s e l e c t i n g g r i d s . 
The elastically scattered waves interfere constructively to give diffracted beams along certain directions, and each beam shows as a bright spot when these electrons are accelerated onto a fluorescent screen. The distribution of these spots is referred to as the LEED pattern. Because of strong inelastic scattering, the elastically-scattered electrons do not normally experience a regular periodicity normal to the crystal surface, and consequently the region probed by the LEED electrons is diperiodic (i.e., it can be characterized by two unit translation vectors a₁ and a₂). The corresponding diffraction pattern (figure 1.3b) involves the associated translational vectors in reciprocal space, namely a₁* and a₂*, defined by a₁* = 2π(a₂ × ẑ)/(a₁·a₂ × ẑ) and a₂* = 2π(ẑ × a₁)/(a₁·a₂ × ẑ) (1.3), where ẑ is the unit vector perpendicular to a₁ and a₂. Pendry has given a detailed analysis showing how a LEED pattern is a direct consequence of the surface translational symmetry. Assuming the incident electrons can be described by a plane wave ψ₀ = B exp[i k₀⁺·r] (1.4), where B is an appropriate normalization constant, r is a general position vector and k₀⁺ is the incident wave vector which relates to electron energy through E = (ħ²/2m)|k₀⁺|² (1.5),

[Figure 1.3: (a) Schematic diagram of the LEED experiment. (b) The principle of the formation of a diffraction pattern in a LEED experiment.]

then wave vectors k⁻ for the diffracted electrons are determined by conservation of energy, E(k⁻) = E(k₀⁺) (1.6), and by the conservation of momentum parallel to the surface, k⁻‖ = k₀⁺‖ + g(hk) (1.7), where g(hk) = h a₁* + k a₂* (1.8), h and k being integers. As illustrated in figure 1.3b, the direction of each diffracted beam (wave vector k⁻_g(hk)) is determined by E, k₀⁺ and g. For given values of k₀⁺ and E, each spot in a diffraction pattern is associated with a particular g, and hence may be identified with the indices (hk). For a given energy, only a limited number of beams can reach the screen; if |g| is sufficiently large, k⁻_g becomes complex and corresponds to an evanescent (or surface) wave which cannot escape from the solid. The (00) beam is made up of electrons which have interacted with the surface without momentum transfer parallel to the surface (k⁻‖ = k₀⁺‖), and it is frequently called the "specular beam". The direction of the specular beam remains constant as E changes, as long as the electrons move in a field-free space outside the crystal and the direction of the incident beam is fixed. With increasing energy, more diffracted beams are observed, the non-specular beams move towards the (00) beam, the symmetry of the LEED pattern remains unchanged, but the beam intensities vary continuously. In practice, incident electron beams in LEED are coherent only over restricted distances (~10² Å), and this limits the range over which surface order can be recognized in the diffraction experiment.
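As a numerical illustration of eqs. (1.3)-(1.8), one can build the reciprocal net and count the beams that can propagate at a given energy. This sketch is our addition, not from the thesis; the square-mesh constant is only an example value, and the conversion E = 3.81 k² (E in eV, k in Å⁻¹) is the standard ħ²/2m factor:

```python
# Sketch: reciprocal net vectors (eq. 1.3) and propagating beams (eqs. 1.5-1.8)
# for a square surface mesh at normal incidence (incident k_parallel = 0).
import math

a = 2.69                                    # example mesh constant in Angstroms
b1 = 2 * math.pi / a                        # for a square net, a1* and a2* are orthogonal
E = 100.0                                   # electron energy in eV
k2 = E / 3.81                               # |k|^2 in A^-2, from E = 3.81 k^2

beams = []
for h in range(-5, 6):
    for k in range(-5, 6):
        g2 = (h * b1) ** 2 + (k * b1) ** 2  # |g(hk)|^2
        if g2 <= k2:                        # k_z real: the beam emerges from the surface
            beams.append((h, k))            # otherwise it is evanescent
print(len(beams), "beams can leave the surface at", E, "eV")
```

Raising E enlarges the circle |g| ≤ |k|, which is why more spots appear and move toward the (00) beam as the energy is increased.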
Some disorder is inevitably present at surfaces, and this can affect spot patterns by broadening the diffracted beams, by introducing streaks, rings and spot splittings, and by increasing the background intensity. Frequently LEED patterns are affected by domain structure, in which two or more equivalent orientations of the structure are possible on the surface. In the presence of domain structure, provided that the dimensions of the domains are greater than the coherence width associated with the incident electron beam, observed LEED patterns represent direct superpositions of the patterns from the individual domains. This can be particularly important for adsorption systems, and examples are given later. For a given surface, the intensities of the diffracted beams vary with the electron energy E, the direction of incidence (specified by angles θ, φ; see figure 1.4) and the temperature. Most often intensity data are presented as a function of energy (i.e., as I(E) curves for each diffracted beam) with all other parameters being held constant. A typical example of I(E) curves is given in figure 1.5. Davisson and Germer, at the time of the first LEED experiment, realized that beam intensities contain information on surface bond distances, but nearly 50 years elapsed before detailed surface geometries could be extracted from measured intensities. The basic method utilized at the present time involves the trial-and-error approach, in which I(E) curves are calculated for different possible surface geometries and a search is made for that geometry which allows the best match with the experimental I(E) curves for the various diffracted beams. The main content of this thesis is involved with the application of this approach to LEED crystallography.

[Figure 1.4: Conventions for the incident angle of an electron beam on a surface; θ is a polar angle relative to a surface normal and φ an azimuthal angle relative to a major crystallographic axis in the surface plane. Figure 1.5: I(E) curve for the specular beam from Ni(100) at θ=3°. The bars indicate energies where primary Bragg conditions are satisfied (after Andersson). Figure 1.6: A schematic comparison of overlayer and substrate regions, both of which are diperiodic in the x and y directions.]
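In practice the trial-and-error search reduces to scoring calculated against measured I(E) curves. The thesis uses the Zanazzi-Jona reliability index for this (defined in chapter 2); the fragment below is only a simplified stand-in — a plain normalized mean-square deviation of our own devising, with synthetic curves — meant to show the shape of such a search loop, not the actual index:

```python
# Simplified structure search: score each trial geometry by comparing its
# calculated I(E) curve with experiment (NOT the Zanazzi-Jona index itself).
import numpy as np

def r_simple(I_exp: np.ndarray, I_calc: np.ndarray) -> float:
    """Mean-square deviation after scaling both curves to unit mean intensity."""
    a = I_exp / I_exp.mean()
    b = I_calc / I_calc.mean()
    return float(np.mean((a - b) ** 2))

energies = np.linspace(40, 200, 81)
I_exp = 1 + np.cos(energies / 10.0) ** 2             # stand-in "measured" curve

trial_spacings = [1.10, 1.20, 1.30]                  # trial interlayer spacings (A)
curves = {d: 1 + np.cos(energies / 10.0 + (d - 1.2)) ** 2 for d in trial_spacings}
best = min(trial_spacings, key=lambda d: r_simple(I_exp, curves[d]))
print("best trial spacing:", best)                   # -> 1.2, the unshifted curve
```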
The surface region may involve an overlayer whose d i p e r i o d i c t r a n s l a t i o n a l symmetry i s d i f f e r e n t from that of a substrate plane. The appropriate p e r i o d i c t r a n s l a t i o n a l symmetry for LEED i s that f o r the o v e r a l l surface region, and i s described by the unit t r a n s l a t i o n a l vectors a- and a_. These vectors may r e s u l t from the combination of the d i p e r i o d i c symmetries of the substrate and the overlayer. The vectors and a define a unit mesh which i s analogous to the unit c e l l of t r i p e r i o d i c crystallography. The vector t = ma, + na_ (1.9) translates from one point i n a surface region to another with an i d e n t i c a l environment, and a two-dimensional net can be generated from a l l i n t e g r a l values o f m and n; t h i s i s the d i p e r i o d i c analogue of the t r i p e r i o d i c l a t t i c e used i n X-ray crystallography. Five types of d i p e r i o d i c nets are possible and they are analogous to the 14 Bravais l a t t i c e s i n t r i p e r i o d i c c r y s t a l l o -graphy. There are 17 possible space groups i n d i p e r i o d i c crystallography, and they are d e t a i l e d i n the International Tables f o r X-ray Crystallography . -14-Adsorption on clean surfaces t y p i c a l l y gives increased surface perio-d i c i t i e s and therefore extra LEED spots, as shown i n figure 1.7. Such extra spots are frequently c a l l e d \" f r a c t i o n a l order\" spots when the same notation i s used for corresponding beams from the adsorption structure as for the clean surface structure. Generally i t i s convenient to use a nota-t i o n f o r surface structures and d i f f r a c t e d beams which i s based on the substrate. For example i n Wood's nomenclature , a surface i s designated where (a^a^) and (Jb^.b^) are the unit d i p e r i o d i c vectors of the surface region and substrate r e s p e c t i v e l y , and G i s the angle of r o t a t i o n between the surface and substrate unit meshes (for more complex surfaces, where such an angle of r o t a t i o n i s not applicable, a matrix notation has been introduced and discussed further by Estrup and McRae ) . With Wood's notation, the symbols p or c a r e frequently added to indicate whether the surface mesh i s p r i m i t i v e (one atom per unit mesh) or centred (with an extra atom at the centre of the unit mesh), r e s p e c t i v e l y . For the examples of S adsorbed on (100) and (110) surfaces of rhodium (figure 1.7) the structures obtained are designated as Rh(100)-p(2x2)S and Rh(110)-c(2x2)S r e s p e c t i v e l y ; the l a t t e r could a l t e r n a t i v e l y be designated as Rh(110)-(/3x/3/2)54-S although the f i r s t i s always used for s i m p l i c i t y . A d i f f r a c t i o n pattern usually allows a s p e c i f i c a t i o n of the surface p e r i o d i c i t y , but never of the actual surface structure. The l a t t e r requires analysis of beam i n t e n s i t i e s . For S adsorbed on the (110) surface of rhodium, there are four p a r t i c u l a r l y important locations for the S atoms. These are -15-rea l space reciprocal space ft % Rh (100) oTi ffi ir2 oo; 10 20 9 ft 9 I2 Rh(100)- p(2 x 2)S ft-®\"} o 0 0 o 0 OCDOO OCXBOO COBOO CXX)QO 012 Rh(110) 112 00 20 Rh(1 10)-c(2x2)S On 00 O O 2 2 o Ti Figure 1.7: Schematic d i f f r a c t i o n patterns of clean and overlayer structures. 
-16-On-top(IF) model Centre ( 4 F ) model Short-bridge (2 SB) Long-bridge ( 2 LB) model model Four possible structural models for Rh(llO)-c(2*2)S which are consistent with the observed d i f f r a c t i o n pattern. Th adsorbed sulphur atoms are represented by the f i l l e d c i r c -17-shown i n fi g u r e 1.8, and a l l are consistent with the c(2x2) d i f f r a c t i o n pattern. The adsorption s i t e s are designated as centre or four-fold(4F) s i t e s , on-top or one-fold (IF) s i t e s , short-bridge (2SB) s i t e s or long-bridge (2LB) s i t e s . To determine the actual adsorption s i t e i t i s necessary to c a l c u l a t e the 1(E) curves of the d i f f r a c t e d beams f o r the various models and compare them with the experimental 1(E) curves to assess which model gives the best agreement. 1.4 Auger Electron Spectroscopy The Auger process i s depicted i n f i g u r e 1.9. It i s i n i t i a t e d by the i o n i s a t i o n of a core electron either by electron impact or by photon i n t e r -action. An electron from a higher energy l e v e l then drops down to f i l l the inner vacancy, and t h i s process releases energy either by photon production (e.g. X-ray fluorescence) or by e j e c t i o n of an Auger electron whose k i n e t i c energy depends d i r e c t l y on the energy le v e l s involved i n the process [23,3l]. Generally Auger emission i s the more probable process i f the i n i t i a l i o n i s a -t i o n involves an electron whose binding energy i s less than -2keV. The key point for surface analysis i s that the k i n e t i c energies of Auger electrons are c h a r a c t e r i s t i c of the p a r t i c u l a r element from which the electrons o r i g i n a t e ; chemical s h i f t e f f e c t s are observed, but these e f f e c t s are small compared with the differences between d i f f e r e n t elements [32,33]. Q u a l i t a t i v e analysis i n p r a c t i c e involves comparing the energies of observed Auger peaks with the l i s t e d values [34-36], Most elements, with the exception of hydrogen and helium, can be detected uniquely even i f several are present i n a surface region. A t y p i c a l example of an Auger spectrum from- t h i s work i s shown i n f i g u r e 1.10; t h i s i s for a Rh(110) surface contaminated with sulphur, -18-Figure 1.9: The production of an L 2 VV Auger electron i n aluminum. X-ray energy levels are indicated r e l a t i v e to the Fermi l e v e l . - 1 9 -T r l r r 100 200 300 4 0 0 ENERGY/eV Figure 1.10: Auger spectrum of a heavily contaminated Rh(llO) surface, E Q = 1.5 keV, I = 10 microamps. -20-carbon and phosphorus. The spectrum i s presented in the d e r i v a t i v e form ( dN(E)/dE ) to enhance the weak Auger features. Using standard LEED optics as a retarding f i e l d analyzer [16,17], amounts of around 1-5% of a monolayer can be detected for most elements; higher s e n s i t i v i t i e s are possible with a c y l i n d r i c a l mirror analyzer . The f l u x of Auger electrons produced depends e s p e c i a l l y on the i o n i z a t i o n cross-section of i n d i v i d u a l elements, and t h i s generally varies with energy. In t h i s t h e s i s , AES i s used only for q u a l i t i a t i v e chemical a n a l y s i s , although there are continuing attempts to develop t h i s technique for quanti-t a t i v e analysis [38,39]. With s u i t a b l e c a l i b r a t i o n s , t h i s technique can give important information on surface k i n e t i c s . 
1.5 Aims of Thesis

The overall objective of this thesis is to contribute to an increase in knowledge associated with LEED crystallography, both by determining some unknown surface structures and by assessing possible new or modified procedures.

The catalytic aspects of rhodium have been well known for a long time, but the crystallography of its surfaces has not been thoroughly investigated. In earlier work, Watson et al. [43,44] reported discrepancies in the geometrical structures of the (100) and (111) surfaces of rhodium, associated with the use of atomic potentials from two different sources which were expected to give essentially equivalent results. These discrepancies are resolved in this thesis.

An important recent emphasis in LEED crystallography involves the development of suitable reliability factors for making routine comparisons between experimental and calculated I(E) curves. The most complete R-factor appears to be that introduced by Zanazzi and Jona. This reliability factor is studied here, both in actual surface structure determinations and by assessing its value for fixing some non-geometrical parameters required in the multiple scattering calculations of LEED intensities.

In the experimental parts of this thesis, the adsorptions of oxygen and sulphur on the (100) and (110) surfaces of rhodium have been studied and diffracted beam intensities measured for various structures. Complete LEED crystallographic analyses with full multiple-scattering calculations have been made for the surface structures designated Rh(110)-c(2x2)S and Rh(100)-p(2x2)S. These structures have proved useful for gaining some insights into surface chemical bonding.

A problem with the present schemes for calculating LEED intensities concerns the large computational times and computer core storage required. A simpler scheme, called the quasidynamical method, has recently been proposed by Tong and Van Hove; it is faster and requires much less core storage than the complete methods. Initial studies indicate that it could be useful for systems of weak scatterers [46,47], and further investigations are reported here, particularly for structures involving sulphur adsorbed on surfaces of rhodium.

CHAPTER 2

Calculation of LEED Intensities

2.1 Characteristics of I(E) Curves

A typical I(E) curve has already been illustrated in figure 1.5; this is specifically for the specular beam diffracted from a Ni(100) surface. Such a curve shows considerable structure; that is, the intensity exhibits a number of maxima and minima as the energy is varied. Also, as noted in section 1.2, the elastic reflectivity corresponds to only a few percent of the total incident electrons. Early attempts to explain I(E) curves in LEED based on the kinematical theory (which is applicable when scattering cross-sections are very low, e.g. X-ray diffraction) were unsuccessful.
For a surface whose structure corresponds to that of the bulk, the kinematical theory predicts peaks in I(E) curves for the triperiodic diffraction condition

    \mathbf{k}' = \mathbf{k}_o + \mathbf{g}(hk\ell),    (2.1a)

where g(hkℓ) is a vector of the reciprocal lattice. For the (hk) beam in LEED, equation (2.1a) becomes equivalent to

    \mathbf{k}_g^- = \mathbf{k}_o^+ + \mathbf{g}(00\ell).    (2.1b)

Peaks in I(E) curves which satisfy (2.1b) are termed "primary Bragg peaks" and may be designated by the index ℓ. Further relevant observations from I(E) curves of the type in figure 1.5 are as follows:

1) Peaks in experimental I(E) curves which are close to satisfying the primary Bragg condition (equation 2.1b) are generally found at lower energies than expected. This suggests an inner potential correction is necessary, as a consequence of the reduced potential experienced by an electron inside the crystal.

2) Often more peaks are observed in experimental I(E) curves than expected from equation (2.1). This suggests multiple scattering is significant; this is consistent with the cross-sections for scattering of low-energy electrons being of the order of unit mesh areas (and hence several orders of magnitude greater than those for the scattering of X-rays).

3) Peaks in I(E) curves generally show increasing widths with increasing energy. Peak widths are related to uncertainties in energy, and hence to finite life-times via the uncertainty principle; the average life-time can be interpreted as the time for the electron to traverse the mean free path length (L) introduced in section 1.2.

4) The diffracted beam intensities decrease with increasing temperature, often in an exponential manner [53,54].

Such observations suggest that the LEED process is a dynamical process in which the non-geometrical parameters play an important role in its description. The fixing of these parameters, together with the multiple scattering of electrons through ordered surface regions, represent complications for an analysis of the diffraction process.

2.2 Physical Parameters required in LEED Theory

It has already been indicated that the incident electrons in LEED experience strong elastic and inelastic scatterings; clearly the crystal potential must be chosen carefully to accommodate these two important features in LEED intensity calculations. The "muffin-tin" potential provides a convenient model for this purpose. In this approximation (figure 2.1), the potential is taken as spherically symmetric in the vicinity of atoms and constant elsewhere.

Figure 2.1: Muffin-tin potential (a) in cross-section as contours, (b) along xx' (V_o is the constant intersphere potential).

Figure 2.2: Illustration of the relationship between energies measured with respect to the vacuum level and those measured with respect to the lowest level of the conduction band.

The real part of the constant potential (V_or) is often equated to the empirical inner potential noted above; |V_or| is roughly equal to the sum of the Fermi energy and the work function, as illustrated in figure 2.2. V_or is negative, and it can be regarded as giving the position of the muffin-tin zero below the vacuum level; it is associated with the potential well that confines the conduction electrons to solids. Typical values of V_or range from -10 to -20 eV. The effect of this potential well is to speed up the incident electrons inside the crystal. Although V_or is strictly dependent on energy, because of exchange and correlation effects, this dependence is often sufficiently weak that it can be ignored for the purpose of calculating I(E) curves. To a good approximation, changes in V_or give a rigid shift in calculated I(E) curves; this enables values of V_or used in calculation to be refined by translating the calculated I(E) curves along the energy scale until optimal matching with the corresponding experimental I(E) curves is obtained.
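The rigid-shift refinement of V_or described above lends itself to a simple numerical treatment. The following sketch is illustrative only (the thesis itself refines V_or by matching calculated and experimental curves visually and, later, through reliability indices): it translates a calculated I(E) curve along the energy axis and keeps the shift that best reproduces a synthetic "experimental" curve in a least-squares sense:

    import numpy as np

    def refine_inner_potential(e_exp, i_exp, e_cal, i_cal, shifts):
        """Return the energy shift (eV) that best aligns the calculated
        curve with experiment; only relative intensities are compared."""
        best_shift, best_sq = None, np.inf
        for dv in shifts:
            i_s = np.interp(e_exp, e_cal + dv, i_cal, left=np.nan, right=np.nan)
            ok = ~np.isnan(i_s)
            scale = i_exp[ok].sum() / i_s[ok].sum()   # arbitrary intensity scale
            sq = np.mean((i_exp[ok] - scale * i_s[ok]) ** 2)
            if sq < best_sq:
                best_shift, best_sq = dv, sq
        return best_shift

    # synthetic example: the "experiment" is the calculation shifted by -3 eV
    e = np.arange(40.0, 200.0, 2.0)
    calc = np.exp(-((e - 120.0) / 15.0) ** 2) + 0.5 * np.exp(-((e - 70.0) / 10.0) ** 2)
    expt = np.interp(e, e - 3.0, calc)
    print(refine_inner_potential(e, expt, e, calc, np.arange(-10.0, 10.5, 0.5)))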
Inelastic scattering is conveniently incorporated into calculation schemes by giving an imaginary contribution to the intersphere potential, that is, by expressing the constant part of the potential as

    V_o = V_{or} + iV_{oi}.    (2.2)

For an electron wave function with time dependence

    \Psi(\mathbf{r},t) = \psi(\mathbf{r})\, e^{-iEt},    (2.3)

the intensity decays with time as e^{2V_{oi}t}, provided V_oi is negative. Pendry established the relation

    \Delta E_w = 2|V_{oi}|,    (2.4)

where ΔE_w is the peak width at half maximum height in an I(E) curve and the analysis uses atomic units (ħ = m_e = e = 1). Equation (2.4) is helpful for estimating values of V_oi from experimental intensities; typically V_oi is around -5 eV with a fairly weak energy dependence. Demuth et al. proposed the use of the functional form

    V_{oi} = -\alpha E^{1/3}.    (2.5)

In practice, especially for an overlayer, the crystal potential close to the topmost atoms can be different from that of the substrate region; a schematic representation of the crystal potential is indicated in figure 2.3. Ideally, the potential used in LEED calculations is constructed from self-consistent band structure calculations. However, suitable potentials of this type are not always available, and a plausible alternative involves constructing potentials from the superposition of atomic charge densities in finite clusters [22,61]. In either case, the exchange potential experienced by an electron of wave function ψ(r) is most often represented by Slater's local density approximation

    V_{ex}(\mathbf{r})\,\psi(\mathbf{r}) = -6\left(\frac{3\rho(\mathbf{r})}{8\pi}\right)^{1/3}\psi(\mathbf{r}),    (2.6)

where ρ(r) is the local charge density.

The scattering of an electron plane wave by a spherically symmetric ion-core potential yields a spherical wave, and the total wavefield at large |r| has the form [63,64]

    \psi_s(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}} + f(\theta)\,\frac{e^{ikr}}{r}.    (2.7)

Figure 2.3: Muffin-tin model of an adsorbate-covered surface, showing the real and imaginary parts of the potential through the adsorbed layer, the transition region and the substrate layer spacings, relative to the vacuum level (after Marcus et al.).
The scattering amplitude f(θ) is commonly expanded as

    f(\theta) = \frac{1}{k}\sum_{\ell}(2\ell+1)\, e^{i\delta_\ell}\sin\delta_\ell\, P_\ell(\cos\theta),    (2.8)

where δ_ℓ is the phase shift which characterizes scattering by ion-cores for angular momentum ℓ, and P_ℓ is a Legendre polynomial. For a particular atomic potential, phase shifts are found by solving the Schrödinger equation inside the muffin-tin sphere and joining the asymptotic form of the solution smoothly at the boundary of the sphere to those solutions obtained by solving the Schrödinger equation for the outside region. In practice for LEED it is found that f(θ) converges fairly rapidly, so that only a limited number of ℓ values are needed. Typically, in LEED calculations for energies up to and around 200 eV, the maximum value of ℓ (i.e. ℓ_max) needed in expressions such as (2.8) is about 7.

The effect of the thermal motion of ion-cores is generally treated by adding an isotropic Debye-Waller-type contribution into the atomic scattering factor. Jepsen et al. showed that the atomic scattering factor for such a vibrating lattice can be related to that (f(θ)) of the rigid lattice, but with some modifications to the phase shifts. Specifically, for the pth atom,

    f(\theta,T) = f(\theta)\exp[-M_p(\mathbf{k}'-\mathbf{k})^2],    (2.9)

where a wave characterized by k is scattered into k',

    M_p = \tfrac{1}{2}\langle u_p^2\rangle_T,

and u_p is the vibrational amplitude in the direction of the momentum transfer (k'-k). In the high temperature limit (T > Θ_D), ⟨u_p²⟩_T is related to the Debye temperature (Θ_D) by

    \langle u_p^2\rangle_T = \frac{3\hbar^2 T}{M_p k_B \Theta_D^2},    (2.10)

where M_p is the atomic mass and k_B is the Boltzmann constant.
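Equations (2.8)-(2.10) translate directly into a few lines of code. In the sketch below the phase shifts are made-up numbers (the rhodium phase shifts used later in the thesis are not reproduced here), and the mass and Debye temperature entered at the end are only representative values for a rhodium-like atom:

    import numpy as np
    from scipy.special import eval_legendre

    def scattering_amplitude(theta, k, deltas):
        """Partial-wave sum of equation (2.8); deltas[l] is the phase
        shift for angular momentum l, k is in inverse Angstroms."""
        c = np.cos(theta)
        f = 0.0 + 0.0j
        for l, d in enumerate(deltas):
            f += (2 * l + 1) * np.exp(1j * d) * np.sin(d) * eval_legendre(l, c)
        return f / k

    def debye_waller(k_in, k_out, mass_amu, temp, theta_d):
        """High-temperature factor of equations (2.9)-(2.10):
        exp[-M_p (k'-k)^2] with M_p = <u^2>/2."""
        HBAR2_OVER_KB = 48.508   # hbar^2/k_B in amu * Angstrom^2 * Kelvin
        u2 = 3.0 * HBAR2_OVER_KB * temp / (mass_amu * theta_d ** 2)
        dk2 = np.sum((np.asarray(k_out) - np.asarray(k_in)) ** 2)
        return np.exp(-0.5 * u2 * dk2)

    deltas = [0.9, 0.5, 0.2, 0.05]      # illustrative phase shifts (radians)
    print(abs(scattering_amplitude(np.pi, k=2.0, deltas=deltas)))  # back-scattering
    print(debye_waller([0, 0, 2.0], [0, 0, -2.0], mass_amu=102.9,
                       temp=300.0, theta_d=480.0))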
Computational procedures for LEED intensities developed rather slowly, in part because of the complexity associated with the multiple scattering. However, during the 1970's a number of schemes have been derived, and helpful reviews have been given by Duke, Tong and Stoner et al. The earliest calculations neglected inelastic scattering; Duke and Tucker were among the first to emphasize the necessity for including inelastic scattering in computational schemes. The first substantial agreement between calculated and experimental I(E) curves was produced in 1972 in the work of Jepsen et al. on the (100) surfaces of aluminium, silver and copper. These calculations assumed:

i) Surface geometries that correspond to undistorted truncations of the bulk structures.
ii) Electron-ion core interactions can be represented by potentials from band structure calculations.
iii) Absorption effects can be incorporated with an imaginary potential from uniform electron-gas theory.
iv) Lattice vibrations can be treated by a Debye-Waller type factor as indicated above.
v) The inner potential correction (V_or) can be chosen empirically by aligning theoretical and experimental I(E) curves.

This work of Jepsen et al. established that the dominant aspects of the elastic LEED process were essentially understood, even though numerical agreement was not obtained for absolute intensities. The latter appears to relate especially to incomplete order for the surfaces, but in any event this discrepancy did not inhibit the development of LEED crystallography, since it was found that the positions of structure in I(E) curves could be calculated to within experimental error.

Calculations of LEED intensities generally involve treating the scattering of a plane wave by a surface region of perfect diperiodic symmetry. The total wave field outside of the crystal has the form

    \Psi(\mathbf{r}) = \phi(\mathbf{r}) + \sum_{\mathbf{g}} c_{\mathbf{g}}\, e^{i\mathbf{k}_g^- \cdot \mathbf{r}},    (2.11)

where φ(r) is the incident plane wave. The objective is to calculate beam reflectivities

    R_g = \frac{|k_{gz}^-|}{|k_{oz}^+|}\, |c_g|^2,    (2.12)

which relate to the measured intensities. Brief descriptions of some of the important procedures now available for calculating beam reflectivities are given in the following sections.

2.3 T-Matrix Method

The T-matrix method was formulated by Beeby and has since been detailed further by Tong. This method starts by writing the wave function for an electron inside the solid as

    \psi(\mathbf{r}) = \phi(\mathbf{r}) + \int G(\mathbf{r}-\mathbf{r}')\, V(\mathbf{r}')\, \psi(\mathbf{r}')\, d\mathbf{r}',    (2.13)

where the Green's function G(r-r') describes the propagation of an electron from r' to r. This equation can be solved by defining a total scattering matrix (T) for the solid,

    V(\mathbf{r}')\,\psi(\mathbf{r}') = \int T(\mathbf{r}',\mathbf{r})\,\phi(\mathbf{r})\, d\mathbf{r}.    (2.14)

With the muffin-tin approximation for the potential, substitution of (2.14) into (2.13) yields

    T(\mathbf{r}_2,\mathbf{r}_1) = \sum_{\mathbf{R}} t_{\mathbf{R}}(\mathbf{r}_2-\mathbf{R},\, \mathbf{r}_1-\mathbf{R})
      + \sum_{\mathbf{R}}\sum_{\mathbf{R}'\neq\mathbf{R}} \iint t_{\mathbf{R}'}(\mathbf{r}_2-\mathbf{R}',\, \mathbf{r}_3-\mathbf{R}')\, G(\mathbf{r}_3-\mathbf{r}_4)\, t_{\mathbf{R}}(\mathbf{r}_4-\mathbf{R},\, \mathbf{r}_1-\mathbf{R})\, d\mathbf{r}_3\, d\mathbf{r}_4 + \cdots,    (2.15)

where

    t_{\mathbf{R}}(\mathbf{r}_2-\mathbf{R},\, \mathbf{r}_1-\mathbf{R}) = V_{\mathbf{R}}(\mathbf{r}_2-\mathbf{R})\,\delta(\mathbf{r}_2-\mathbf{r}_1)
      + \int V_{\mathbf{R}}(\mathbf{r}_2-\mathbf{R})\, G(\mathbf{r}_2-\mathbf{r})\, t_{\mathbf{R}}(\mathbf{r}-\mathbf{R},\, \mathbf{r}_1-\mathbf{R})\, d\mathbf{r}    (2.16)

is the t-matrix for the single ion core at R. In (2.15), the first term covers all single ion core scattering, the second term represents all double scattering events, etc. Equation (2.15) therefore sums all possible inter-atomic and intra-atomic scattering events involved with the electron going from r₁ to r₂ inside the solid.
«a o The non-zero elements of t h i s matrix r e l a t e to the phase s h i f t 6 by t M ( k ) = 4- [ q2\\\\'1 3 • (2-2°) a v o • 2m 2ik o Also needed i n (2.18) and (2.19) are the intraplanar s t r u c t u r a l propagators G SP and the interplanar propagators Ga^. These are complex matrices which are dependent on the i n e l a s t i c s c a t t e r i n g and the geometries associated with the ion core s i t e s . -34-Successful c a l c u l a t i o n s have been made with t h i s method f or clean metal surfaces. In p r i n c i p l e i t i s exact and can work for any type of surface structure; i n p r a c t i c e , however, the solving of the set of equations '(2.18) to give the matrix T i s very time consuming and requires a large amount of computer core stroage i f an appreciable number of subplanes have to be included. This method i s only p r a c t i c a l i n the presence of i n e l a s t i c s c a t t e r i n g , a feature that Beeby neglected i n the i n i t i a l formulation. The extension to include thermal motion of the ion cores was made by Tong and Rhodin i n 1971 for the (100) surface of aluminum , 2.4 Bloch Wave Method This method was introduced by McRae [67,73] and developed by Pendry , Kambe [75,76] and Jepsen et a l . [57,77]. A d e t a i l e d account has been p u b l i -shed i n Pendry s book . In t h i s approach, the muffin-tin approximation i s again used and an i n f i n i t e c r y s t a l i s b u i l t up of p a r a l l e l layers. For the region of constant p o t e n t i a l between successive layers, each Bloch wave can be expanded i n terms of plane waves. The s c a t t e r i n g s i t u a t i o n at a si n g l e layer i s depicted i n figu r e 2.4, where a set of incident plane waves Y.(r) = Z b + e x p ( i k + - r ) (2.21) i ~ _ g ~g ~ is d i r e c t e d onto the c r y s t a l , and scattered waves Y (r) = Y M*f b + e x p f i k V r ) (2.22) \\$ ~ ~ i g g g ~ J ~ gg ~ ~ ~ propagate both i n the outward d i r e c t i o n (k f) and i n the inward d i r e c t i o n ( k * t ) . The matrices involved i n t h i s formulation are expressed i n terms of -35-£ b ; e x p ( i k + - r ) g 8 ~ ft / E E M^b;.xp(l|4:r) Figure 2A\\ Schematic representation of a set of plane wave incident from the l e f t and multiply scattered by a plane of ion cores p + 1 p -+ at h layer Figure 2.5: Schematic diagram of transmission and reflection'matrices at the a t h subplane. The broken lines are the central lines between the subplanes. -36-th e l i n e a r momentum (K-space) representation; t h i s contrasts with the angular g g momentum (L-space) representation i n the T-matrix method. M*t i s an element ++ A move into the c r y s t a l . The notation M , c o v e r s - a l l four combinations of of the layer d i f f r a c t i o n matrix M where both incident and d i f f r a c t e d beams ++ g'g d i r e c t i o n s . It i s clear f o r the s i t u a t i o n i n figu r e 2.4 that a l l the d i f f r a c t e d beams become coupled together. The c o e f f i c i e n t s f o r plane waves between layers a and a+1 can be expressed i n a compact matrix notation b + i = »a+l = T b «a wa + R+ b , «a a+1 (2.23a) b\" ara = T \" \" b \" i aa a+1 + R\"+b + aa «a (2.23b) where, for example, the components of the column vector b* are the various values of b + between layers a and a+1. 
For a crystal composed of identical layers, which are separated by a constant displacement c, the transmission and reflection matrices can be expressed as

    T^{++}_{gg'} = P^+_g\,(I_{gg'} + M^{++}_{gg'})\,P^+_{g'},    (2.24a)
    T^{--}_{gg'} = P^-_g\,(I_{gg'} + M^{--}_{gg'})\,P^-_{g'},    (2.24b)
    R^{+-}_{gg'} = P^+_g\, M^{+-}_{gg'}\, P^-_{g'},    (2.24c)
    R^{-+}_{gg'} = P^-_g\, M^{-+}_{gg'}\, P^+_{g'},    (2.24d)

where P⁺ represents inward propagation with wave vector k⁺ through one half of an interlayer distance, while P⁻ represents the corresponding outward propagation with wave vector k⁻:

    P^{\pm}_g = e^{i\mathbf{k}^{\pm}_g\cdot\mathbf{c}/2}.    (2.25)

The I_{gg'} in equation (2.24) are elements of a unit matrix. Schematic representations of the reflection and transmission matrices are shown in figure 2.5. Corresponding coefficients between successive layers must satisfy the Bloch conditions

    \mathbf{b}^+_{\alpha+1} = e^{i\mathbf{k}\cdot\mathbf{c}}\,\mathbf{b}^+_{\alpha},    (2.26a)
    \mathbf{b}^-_{\alpha} = e^{-i\mathbf{k}\cdot\mathbf{c}}\,\mathbf{b}^-_{\alpha+1}.    (2.26b)

Substituting (2.24) into (2.26) yields the eigenvalue equation

    L\begin{pmatrix}\mathbf{b}^+_{\alpha+1}\\ \mathbf{b}^-_{\alpha+1}\end{pmatrix} = \lambda\begin{pmatrix}\mathbf{b}^+_{\alpha+1}\\ \mathbf{b}^-_{\alpha+1}\end{pmatrix},    (2.27)

where

    L = \begin{pmatrix} T^{++} - R^{+-}(T^{--})^{-1}R^{-+} & R^{+-}(T^{--})^{-1} \\ -(T^{--})^{-1}R^{-+} & (T^{--})^{-1} \end{pmatrix}    (2.28)

and

    \lambda = e^{i\mathbf{k}\cdot\mathbf{c}}.    (2.29)

Pendry has discussed the evaluation of the layer diffraction matrices M^{±±} in terms of the scattering properties of the individual ion-cores. For a layer which involves a single atom per unit mesh, the elements satisfy

    M^{\pm\pm}_{gg'} \propto \frac{1}{k^{\pm}_{gz}}\sum_{LL'} Y_L(\mathbf{k}^{\pm}_g)\,[1-X]^{-1}_{LL'}\,Y_{L'}(\mathbf{k}^{\pm}_{g'})\, e^{i\delta_{\ell'}}\sin\delta_{\ell'},    (2.30)

where X describes multiple scattering within the layer (a constant prefactor is again omitted). Given M^{±±} for a particular system, the transmission and reflection matrices in equations (2.24) can be set up, and hence (2.27) can be solved by standard methods to give eigenvectors, which fix the Bloch waves, and the corresponding eigenvalues, which fix the possible wave vectors along with the requirement of conservation of momentum parallel to the surface. Only half of the 2n_g possible solutions (where n_g is the number of vectors g included in the calculation) are physically acceptable (i.e. correspond to waves which either propagate or decay exponentially in the z-direction). To complete the calculation of diffracted beam reflectivities it is necessary to match each wave function, and its first derivative with respect to z, at both sides of the solid-vacuum interface. Corresponding wave matching procedures are involved in extending this scheme to situations where one or more top layers are different from the rest (e.g. for an adsorbed layer). This basic approach involves less computer core storage than the T-matrix method, but the solution of equation (2.27) becomes time consuming when n_g is large.
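The eigenvalue step of equations (2.27)-(2.29) can be mimicked with standard linear algebra. The toy Python fragment below builds the block matrix of equation (2.28) from arbitrary damped transmission and reflection blocks, which are pure stand-ins rather than physical layer matrices, and diagonalizes it; with damping present, solutions with |λ| < 1 decay in the inward direction and belong to the physically acceptable half:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4                                     # beams in this toy model
    damp = 0.85                               # stands in for inelastic attenuation
    Tpp = damp * np.eye(n) + 0.05 * rng.standard_normal((n, n))   # T++
    Tmm = damp * np.eye(n) + 0.05 * rng.standard_normal((n, n))   # T--
    Rpm = 0.10 * rng.standard_normal((n, n))                      # R+-
    Rmp = 0.10 * rng.standard_normal((n, n))                      # R-+

    Tmm_inv = np.linalg.inv(Tmm)
    L = np.block([[Tpp - Rpm @ Tmm_inv @ Rmp, Rpm @ Tmm_inv],
                  [-Tmm_inv @ Rmp,            Tmm_inv]])
    lam, vec = np.linalg.eig(L)               # 2n Bloch-wave solutions
    print(sorted(round(abs(x), 3) for x in lam))
    print(sum(abs(x) < 1.0 for x in lam), "solutions decay inward")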
2.5 Perturbation Methods

The T-matrix and the Bloch wave methods are exact in the sense that they include all multiple scattering events in the crystal. These methods have proved valuable for calculating LEED intensities of clean surfaces, although they require long computational times and large core storage. Such considerations limit the use of these exact multiple scattering methods for the more complex surface structures of interest in LEED crystallography, and therefore encourage the development of approximate schemes based on perturbation expansions. Part of the motivation for this comes from the realization that, with inelastic scattering, the comparatively short mean free path length must limit the order of multiple scattering that can be important. This suggests that it should be possible to reduce computational times by formulating in terms of perturbation theory. Tong et al. made the T-matrix calculation to third order, and showed that it can work well for weak scattering metals like aluminum. However, the approach of utilizing perturbation theory within the T-matrix method seems less helpful for stronger scatterers; basically this approach becomes too clumsy and unwieldy above third order.

Pendry has developed convenient iterative schemes which are based on the Bloch wave method and have the significant property that the contribution from each additional order has the same basic form as those from the previous orders (this is unlike the situation for the third order calculation noted above for Al(100)). These new methods are the layer doubling and renormalized forward scattering methods; the multiple scattering calculations described in this thesis utilized these methods extensively.

2.5 (a) Layer Doubling Method

This method [24,78] requires that inelastic scattering is sufficiently strong that a semi-infinite crystal can be approximated by a slab of finite thickness. Two layers are considered first, then four layers, and at each level of iteration the number of layers is doubled. This method starts with a calculation of the reflection and transmission matrices as in equations (2.24), and then generates the corresponding matrices for a stack of two layers:

    T^{++}_C = T^{++}_B\,(I - R^{+-}_A R^{-+}_B)^{-1}\,T^{++}_A,    (2.31a)
    R^{-+}_C = R^{-+}_A + T^{--}_A\,R^{-+}_B\,(I - R^{+-}_A R^{-+}_B)^{-1}\,T^{++}_A,    (2.31b)
    R^{+-}_C = R^{+-}_B + T^{++}_B\,R^{+-}_A\,(I - R^{-+}_B R^{+-}_A)^{-1}\,T^{--}_B,    (2.31c)
    T^{--}_C = T^{--}_A\,(I - R^{-+}_B R^{+-}_A)^{-1}\,T^{--}_B,    (2.31d)

where the individual subplanes are denoted by A, B and the resulting composite layer is denoted by C. The doubling process is shown schematically in figure 2.6; the same set of equations (2.31) is used to extend the crystal stack to 2, 4, 8, 16, ... layers. This process is continued until the reflection amplitudes have converged; typically this requires 8 or 16 atomic layers. Once convergent reflectivities have been obtained for the substrate, surface layers can be systematically added, still using equations (2.31). A convenient feature of this method is that a surface layer can be shifted either laterally or vertically without having to recompute the bulk reflectivities.

Figure 2.6: Stacking of planes to form a crystal slab, to illustrate the layer-doubling method. Planes A and B are first stacked to form the two-layer slab C; the process is continued to form a four-layer slab. (After Tong.)

This method is considerably faster than the full Bloch wave method and yet it can provide good numerical accuracy. Each iteration involves inversions of two matrices of dimension n_g (the number of beams included in the calculation).
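The doubling equations (2.31) are compact enough to transcribe directly. The sketch below is schematic, with toy diagonal matrices standing in for the actual beam-indexed layer matrices (it is not the Van Hove-Tong routines used in this work); it stacks a slab with itself repeatedly and watches the reflection matrix converge, mirroring the 2, 4, 8, 16, ... layer sequence described above:

    import numpy as np

    def stack(a, b):
        """Combine two slabs, each given as (T++, R-+, R+-, T--), into
        one composite slab following equations (2.31); slab `a` is on
        the vacuum side of slab `b`."""
        Tpp_a, Rmp_a, Rpm_a, Tmm_a = a
        Tpp_b, Rmp_b, Rpm_b, Tmm_b = b
        I = np.eye(Tpp_a.shape[0])
        M1 = np.linalg.inv(I - Rpm_a @ Rmp_b)   # multiple bounces, inward pass
        M2 = np.linalg.inv(I - Rmp_b @ Rpm_a)   # multiple bounces, outward pass
        return (Tpp_b @ M1 @ Tpp_a,
                Rmp_a + Tmm_a @ Rmp_b @ M1 @ Tpp_a,
                Rpm_b + Tpp_b @ Rpm_a @ M2 @ Tmm_b,
                Tmm_a @ M2 @ Tmm_b)

    n = 3
    layer = (0.8 * np.eye(n, dtype=complex), 0.1j * np.eye(n),
             0.1j * np.eye(n), 0.8 * np.eye(n, dtype=complex))  # damped toy layer
    slab = layer
    for it in range(6):                       # 2, 4, 8, 16, 32, 64 layers
        slab = stack(slab, slab)
        print(it, np.linalg.norm(slab[1]))    # R-+ of the growing stack converges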
A limitation is that this method is not suitable for very small interlayer spacings (c < 0.5 Å), for which n_g is required to be excessively large.

2.5 (b) Renormalized Forward Scattering Method

The renormalized forward scattering (RFS) method was introduced by Pendry and discussed further by Tong; its characteristic features are that the intralayer scatterings are calculated exactly, while the interlayer scatterings are iterated for the various possible paths in the crystal. The principle of this method is illustrated schematically in figure 2.7. The crystal is again represented by a finite number of layers; the actual number used (n) is such that the total elastically scattered amplitude that would reach the (n+1)th layer is less than a predetermined fraction (e.g. 0.003) of its incident amplitude. Clearly, the stronger the inelastic scattering, the smaller is the number of layers that are needed. Following Tong, A^i_α(g) designates the amplitude at the local origin between the αth and (α+1)th layers for the electron wave characterized by g propagating into the crystal; the index i is the order of iteration, which identifies the number of times the electron has propagated into the crystal along this particular path. For the first iteration we have

    A^1_{\alpha}(\mathbf{g}) = \sum_{\mathbf{g}'} T^{++}_{\alpha}(\mathbf{g}\mathbf{g}')\, A^1_{\alpha-1}(\mathbf{g}'),    (2.32)

but no waves propagating in the inward direction are included after the nth layer. Waves propagating in the outward direction are represented by B^i_α(g) in an analogous notation. Except at the deepest layer, the outward-travelling waves consist of two components (figure 2.7c): the reflected portions of the inward-travelling waves, and the transmitted portions of the outward-travelling waves. In general, the amplitudes of the outward-directed waves satisfy

    B^i_{\alpha}(\mathbf{g}) = \sum_{\mathbf{g}'}\left[ R^{-+}_{\alpha+1}(\mathbf{g}\mathbf{g}')\, A^i_{\alpha}(\mathbf{g}') + T^{--}_{\alpha+1}(\mathbf{g}\mathbf{g}')\, B^i_{\alpha+1}(\mathbf{g}') \right]    (2.33)

(α = n-1, n-2, ..., 0), where n is the deepest subplane reached in the appropriate iteration. The corresponding expression for the inward-directed waves is

    A^i_{\alpha}(\mathbf{g}) = \sum_{\mathbf{g}'}\left[ R^{+-}_{\alpha}(\mathbf{g}\mathbf{g}')\, B^{i-1}_{\alpha}(\mathbf{g}') + T^{++}_{\alpha}(\mathbf{g}\mathbf{g}')\, A^i_{\alpha-1}(\mathbf{g}') \right]    (2.34)

(α = 1, 2, 3, ..., n). Equations (2.33) and (2.34) are solved iteratively in the RFS method until the reflectivity has converged.

Figure 2.7: (a) Illustration of the renormalized forward scattering method. Vertical lines represent layers; each triplet of arrows represents the complete set of plane waves that travel from layer to layer. (b) Propagation steps of the inward-travelling waves. (c) Propagation steps of the outward-travelling waves. (After Van Hove and Tong [81].)

This approach is computationally convenient since no eigenvalue equations or matrix inversions are involved. The computation times scale as n_g², where n_g is the number of beams included; this is more favorable than the layer doubling method, for which the computation time scales as n_g³. The RFS method has proved to be an excellent method for calculating LEED intensities for many systems, provided the electron damping is sufficient. Otherwise its only limitation is a failure to converge when any two layers are closer than about 1 Å. In the latter event the layer doubling method may be applicable.
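The structure of an RFS pass, equations (2.32)-(2.34), is likewise easy to sketch. In the fragment below all layers are taken as identical and the matrices are toy diagonal ones standing in for real damped layer matrices (assumptions of this illustration only); each iteration carries the wavefield inward and then sweeps the outward amplitudes back to the surface:

    import numpy as np

    def rfs(Tpp, Rmp, Rpm, Tmm, incident, n_layers=12, n_iter=6):
        """Schematic RFS iteration: A[a] holds inward amplitudes below
        layer a, B[a] the outward ones; B[n_layers] stays zero because
        nothing returns from beyond the deepest layer."""
        n = len(incident)
        A = [np.zeros(n, complex) for _ in range(n_layers + 1)]
        B = [np.zeros(n, complex) for _ in range(n_layers + 1)]
        for _ in range(n_iter):
            A[0] = incident.copy()
            for a in range(1, n_layers + 1):          # inward sweep (2.34, 2.32)
                A[a] = Tpp @ A[a - 1] + Rpm @ B[a]    # B is from the previous pass
            for a in range(n_layers - 1, -1, -1):     # outward sweep (2.33)
                B[a] = Rmp @ A[a] + Tmm @ B[a + 1]
        return B[0]                                   # amplitudes leaving the surface

    n = 3
    Tpp = Tmm = 0.85 * np.eye(n, dtype=complex)       # damping mimics absorption
    Rmp = Rpm = 0.1j * np.eye(n, dtype=complex)
    inc = np.zeros(n, complex); inc[0] = 1.0
    print(np.abs(rfs(Tpp, Rmp, Rpm, Tmm, inc)) ** 2)  # toy beam reflectivities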
2.6 Further Multiple Scattering Methods

The RFS and layer doubling methods have proved to be reliable and convenient for LEED crystallographic analyses of many clean and simple overlayer surface structures. A limitation of all procedures which utilize the K-space representation (including the full Bloch-wave method) is that the number of plane waves required in the calculations increases rapidly with decreasing interlayer separations. Once matrices of dimension of the order of 10² are involved, the K-space methods become increasingly unwieldy and numerically unreliable; effective limits are set by interlayer spacings of around 0.5 Å for both the layer doubling and the Bloch wave methods (~1 Å sets the lower limit for the RFS method). For models where close interlayer spacings are required, there are two possible approaches: (i) to stay with the K-space representation but treat the two layers as a composite layer (with consequent increases in matrix dimensions and in requirements for computing time and storage), or (ii) to work with the L-space representation (as in the T-matrix method). The dimensions of matrices involved in L-space calculations are independent of the number of beams required for the calculations, hence this approach starts to have advantages over the K-space representation when n_g is large. To make the T-matrix method more efficient, Zimmer and Holland introduced a reverse-scattering iterative procedure which essentially represents an equivalent of the RFS method in the L-space representation. This approach again fully accounts for forward scattering events, but approximates the back-scattering. The reverse scattering procedure of Zimmer and Holland requires matrices of dimensions (ℓ_max+1)². Typically ℓ_max+1 ≈ 8 for electron energies less than 200 eV; thus this iterative method appears advantageous over the RFS method if the number of beams required exceeds about 64. However, this L-space iteration approach requires the evaluation and storage of n(n-1) square matrices G^{αβ} for an n-layer crystal, and moreover these matrices have to be recalculated for every change made to the surface layer. This represents a less satisfactory feature of the method.

Recently, Van Hove and Tong described a combined-space method which utilizes both the L-space and K-space representations to achieve the optimal advantages of each. Specifically, the calculation is made in the L-space representation for those layers which are closely spaced, while the K-space representation is used for the rest of the calculation, where the interlayer spacings are larger. Discussions of approaches, and the associated computer programs, for the various methods now available for surface crystallography with LEED have been given in a recent book by Van Hove and Tong [81].
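The matrix-dimension argument above can be made concrete with a few numbers. The comparison below is only an order-of-magnitude illustration based on the scalings quoted in this chapter (n_g² for RFS, n_g³ for layer doubling, matrices of fixed dimension (ℓ_max+1)² in L-space); all prefactors are ignored:

    # Order-of-magnitude cost comparison of the scalings quoted above.
    l_max = 7                                 # typical for energies up to ~200 eV
    l_dim = (l_max + 1) ** 2                  # 64: L-space matrix dimension
    print("n_g    n_g^2      n_g^3   (l_max+1)^6")
    for n_g in (16, 64, 175):                 # 175 beams: Rh(110)-c(2x2)-S, Table 2.1
        print(f"{n_g:4d} {n_g**2:8d} {n_g**3:10d} {l_dim**3:12d}")

With ℓ_max+1 = 8, the L-space matrix dimension is 64, which is consistent with the break-even point of about 64 beams quoted above.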
2.7 General Aspects of Computations

2.7 (a) Structural Parameters and Use of Symmetry

The basic approach to surface crystallography with LEED involves postulating a series of trial structures and searching for that particular model which gives the best agreement between calculated and experimental I(E) curves. The models postulated must be consistent with the symmetries indicated by the observed LEED pattern. The substrate structure is generally known from X-ray crystallography, but atoms in the upper layers of a clean surface need not occupy exactly the positions they would in the infinite crystal. Many clean metals have surfaces whose translational symmetries are found by LEED to be identical with those of the corresponding substrate structure (the surface is said to be unreconstructed if the normal registries are maintained, although there may be changes in the vertical spacing); by contrast, LEED patterns show directly that many are reconstructed. In general, for both clean surfaces and adsorption systems, the topmost interlayer spacing must be varied in the LEED intensity calculations, and lateral variations should also be considered. For models where domain structures are possible, appropriate beam intensities need to be averaged in the calculations in order to accommodate the expectation that the incident beam in the experiment samples all the domain types. For example, for sulphur adsorbed on the bridge sites of the Rh(100) surface, as in figure 2.8, an averaging of the intensities of the (10) and (01) beams is necessary for the calculations to become consistent with the four-fold symmetry observed in the experimental LEED pattern.

Figure 2.8: Schematic diagram (real and reciprocal space) of three simple models for Rh(100)-p(2x2)S; the 4F and 1F models have 2 mirror planes + 1 C₄ axis, while the 2F model has 2 mirror planes only. In reciprocal space, sets of symmetrically equivalent beams are indicated by a common symbol.

The computational effort can be reduced when the direction of incidence coincides with a symmetry axis or a symmetry plane; this depends on the inevitable equivalences in the diffracted beams that result from the symmetry elements in the model. The simplifications in the multiple-scattering calculations represent a standard application of group theory. Utilizing symmetry reduces the dimensions of the matrices required within the K-space representation; specifically, only one g vector is needed for each set of symmetry-related beams. For the particular examples of the model types shown in figure 2.8 for Rh(100)-p(2x2)S, it is readily seen that, with normal incidence, the 4F and 1F models preserve two mirror planes of symmetry perpendicular to each other as well as a C₄ rotation axis, whereas the 2F model contains only the two mirror planes. A consequence of the C₄ axis is the equivalence of the following 8 fractional order beams:

    (1 ½) ≡ (1 -½) ≡ (-1 ½) ≡ (-1 -½) ≡ (½ 1) ≡ (½ -1) ≡ (-½ 1) ≡ (-½ -1)

for both the 4F and 1F models. The situation for the 2F model is that these fractional order beams separate into two sets of 4 equivalent beams:

    (1 ½) ≡ (1 -½) ≡ (-1 ½) ≡ (-1 -½) ≠ (½ 1) ≡ (½ -1) ≡ (-½ 1) ≡ (-½ -1).

Similarly, the 4F and 1F models have the equivalences

    (0 1) ≡ (0 -1) ≡ (1 0) ≡ (-1 0),

whereas the corresponding situation for the 2F model involves

    (0 1) ≡ (0 -1) ≠ (1 0) ≡ (-1 0).

The calculations for the 2F model therefore require more beams, and correspondingly larger matrices, than the 4F and 1F models.
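These beam groupings follow mechanically from the symmetry operations. The small Python function below is an illustration (the actual programs handle this through the symmetry code numbers mentioned in the next subsection); it generates the orbit of a beam under two perpendicular mirror planes, with the C₄ rotation optionally added, reproducing the 8-beam and 4-beam sets contrasted above:

    from fractions import Fraction

    def beam_orbit(beam, fourfold=True):
        """Beams equivalent to `beam` under mirrors (h,k) -> (+-h,+-k),
        plus the C4 rotation when present (which also generates (+-k,+-h))."""
        h, k = beam
        orbit = {(sh * h, sk * k) for sh in (1, -1) for sk in (1, -1)}
        if fourfold:
            orbit |= {(sh * k, sk * h) for sh in (1, -1) for sk in (1, -1)}
        return orbit

    half = Fraction(1, 2)
    print(len(beam_orbit((1, half), fourfold=True)))    # 8 beams: 4F/1F models
    print(len(beam_orbit((1, half), fourfold=False)))   # 4 beams: 2F model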
Table 2.1: Numbers of symmetrically-inequivalent beams actually used in calculations for the various surface structures. The models for the overlayer structures are designated as in figures 1.7 and 2.8.

    Surface structure   Surface model     Type of symmetry                          Inequivalent   Equivalent total
                                                                                    beams used     number of beams
    Rh(100)             unreconstructed   2 perpendicular mirror planes + C4 axis   ...            53
    Rh(110)             unreconstructed   2 perpendicular mirror planes             23             71
    Rh(111)             unreconstructed   3 mirror planes at 60° to each other      10             37
                                          + rotation axis along z
    Rh(100)-p(2x2)-S    4F, 1F            same as Rh(100)                           35             221
                        2F                2 perpendicular mirror planes             52             177
    Rh(110)-c(2x2)-S    4F, 1F            same as Rh(110)                           49             175
                        2SB, 2LB          same as Rh(110)                           49             175

Calculations reported here with the RFS and layer doubling methods utilize symmetry as in the discussion and computer programs given by Van Hove and Tong. In these routines symmetry is accommodated by listing the g vectors in the input data together with appropriate code numbers to identify the symmetry type of each beam. The code number enables the program to use the appropriate symmetrized wave functions and to set up the simplified diffraction matrices. Listed in Table 2.1 are the numbers of symmetrically inequivalent beams needed for calculations on the various surfaces studied in this thesis.

2.7 (b) Program Flow

The flow-chart in figure 2.9 summarises the sequence of events that occur in a multiple-scattering calculation. The programs start by reading in all the relevant structural and non-structural parameters, as well as a list of diffracted beams with their symmetry code numbers. At each energy, the dimensions of the matrices are set equal to the number of propagating beams (i.e. those beams with real k_gz) plus the first few evanescent beams. The layer diffraction matrices M^{±±} are calculated; different subroutines are available depending on whether the layer corresponds to a simple Bravais net or to a composite layer-type. The stacking of layers is performed by either the layer doubling or the RFS methods. Each method can include overlayers with structures which differ from the appropriate layer of the substrate; a special case of this involves a variation of the topmost layer spacing, for example for clean metal surfaces. Generally the calculations are made for the energy range 40-200 eV, in increments of 2 eV up to 80 eV and in increments of 4 eV above 80 eV; the reflected intensities in the high energy range are then interpolated to give values at 2 eV intervals.

Figure 2.9: Flowchart showing the principal steps in a multiple-scattering LEED calculation using the RFS or layer doubling programs: read in (i) geometry, (ii) V_or and V_oi, (iii) beams and symmetry, (iv) temperature data, (v) phase shifts; then, at each energy, find the beams needed, compute temperature-dependent phase shifts, calculate the layer diffraction matrices, stack the layers by RFS or by layer doubling, and calculate the beam intensities, varying the surface geometry before incrementing the energy.
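The interpolation step just mentioned amounts to putting the coarse high-energy grid onto the common 2 eV mesh. A linear version is shown below; since the actual interpolation scheme used by the programs is not specified in the text, linear interpolation and the stand-in curve are assumptions of this sketch:

    import numpy as np

    e_coarse = np.arange(80.0, 200.1, 4.0)                 # 4 eV grid above 80 eV
    i_coarse = np.exp(-((e_coarse - 140.0) / 20.0) ** 2)   # stand-in I(E) values

    e_fine = np.arange(80.0, 200.1, 2.0)                   # common 2 eV grid
    i_fine = np.interp(e_fine, e_coarse, i_coarse)
    print(i_fine[:5])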
The calculated intensities are stored on magnetic tape and can be plotted for visual comparison with the experimental I(E) curves; alternatively, the calculated intensities can be compared with experimental values by means of a reliability index, as discussed in the next sections.

2.8 Evaluation of Results

2.8 (a) Introduction

In LEED crystallography it is necessary to find the structural model which gives the best correspondence between the calculated and experimental I(E) curves. This opens the need to be able to evaluate the similarity, or otherwise, between two sets of curves on varying structural, and some non-structural, parameters. Such a search has most often been done by visual comparisons (e.g. by matching up the positions and relative intensities of peaks, dips and other structural features), but this approach suffers the disadvantage of being unwieldy when the numbers of beams and variation parameters are large. As a consequence, there has been considerable encouragement for the development of numerical indices for guiding these comparisons. Among the simplest possibilities is

    \Delta E = \frac{1}{N}\sum_{i=1}^{N} \left| E_i^{cal} - E_i^{obs} \right|,    (2.35)

which only compares peak positions. In (2.35), the E_i represent the energies at which the ith peak occurs in the calculated and observed curves, and N is the total number of peaks compared [83,84]. Clearly, the better the correspondence in peak positions between the experimental and calculated I(E) curves, the lower the value of ΔE. In practice this criterion seems incomplete because it ignores the actual intensity values, it gives an equal weighting to each peak, and it is ambiguous when a peak present in one curve is either absent or appears as an incompletely developed feature (e.g. a shoulder) in the other curve. Van Hove et al. proposed an extension involving five simple indices, where each gives a different emphasis in the comparison. However, the most complete index so far is that proposed by Zanazzi and Jona. This index attempts to compare numerically all the features included in a visual comparison.
2.8 (b) Zanazzi and Jona's Proposals

The reliability index proposed by Zanazzi and Jona compares curve shapes via their derivatives. For the ith beam the reliability index is

    r_i = \int_{E_{1i}}^{E_{2i}} w(E)\,\left| c_i I'_{i,cal} - I'_{i,obs} \right| dE \Big/ \int_{E_{1i}}^{E_{2i}} I_{i,obs}\, dE,    (2.36)

where intensities are compared between the energies E_{1i} and E_{2i}, and the primes indicate first derivatives of the calculated and observed I(E) curves. The weight function

    w(E) = \left| c_i I''_{i,cal} - I''_{i,obs} \right| \Big/ \left( |I'_{i,obs}| + |I'_{i,obs}|_{max} \right)    (2.37)

emphasizes the extrema of the experimental curve and other portions with high curvature; the double primes in (2.37) indicate second derivatives. The scaling constant

    c_i = \int_{E_{1i}}^{E_{2i}} I_{i,obs}\, dE \Big/ \int_{E_{1i}}^{E_{2i}} I_{i,cal}\, dE    (2.38)

allows for an arbitrary scale of intensity in the experimental curves; comparisons of relative intensities are sufficient for LEED crystallographic studies at the present time.

One total reliability index given by Zanazzi and Jona for a set of diffracted beams is

    \bar{r}_r = \sum_i (r_r)_i\,\Delta E_i \Big/ \sum_i \Delta E_i,    (2.39)

where ΔE_i = E_{2i} - E_{1i} and (r_r)_i is the reduced single beam index

    (r_r)_i = r_i/\rho,    (2.40)

where ρ was equated to 0.027, a mean value of r_i found by matching random pairs of curves. In (2.39), an average is taken over the single beam indices, where they are weighted according to the energy range over which the comparison between experiment and calculation is made. A variation of (2.39), also proposed by Zanazzi and Jona, is the overall index R of equation (2.41), in which r̄_r is scaled by a factor depending on the number of different beams, n, treated in the comparison. The advantage of (2.41) over (2.39) is that it mitigates against a low value of the overall reliability index resulting from a comparison involving just a small number of beams; it is generally believed that a reliable LEED crystallographic analysis requires comparisons involving I(E) curves for 10 different diffracted beams. R in (2.41) was set up with the objective of being consistent with the following possibilities for values obtained from comparisons of experimental and calculated I(E) curves for a particular proposed model: R < 0.20 suggests the model is "very probable", 0.20 < R < 0.35 suggests the model is "possible", and R > 0.35 suggests the model is unlikely.
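For a single beam, equations (2.36)-(2.38) and (2.40) can be evaluated numerically once both curves are on a common energy grid. The sketch below does this with simple finite differences; it is an illustration rather than the program actually used in this work, and the trial curves at the end are synthetic:

    import numpy as np

    def zanazzi_jona_r(e, i_obs, i_cal, rho=0.027):
        """Reduced single-beam Zanazzi-Jona index on a common grid e."""
        c = np.trapz(i_obs, e) / np.trapz(i_cal, e)            # scaling (2.38)
        d_obs, d_cal = np.gradient(i_obs, e), np.gradient(i_cal, e)
        dd_obs, dd_cal = np.gradient(d_obs, e), np.gradient(d_cal, e)
        w = np.abs(c * dd_cal - dd_obs) / (np.abs(d_obs) + np.abs(d_obs).max())
        r = np.trapz(w * np.abs(c * d_cal - d_obs), e) / np.trapz(i_obs, e)
        return r / rho                                         # reduced index (2.40)

    e = np.arange(40.0, 200.0, 2.0)
    obs = np.exp(-((e - 120.0) / 12.0) ** 2)
    print(zanazzi_jona_r(e, obs, np.interp(e, e - 2.0, obs)))  # shifted curve
    print(zanazzi_jona_r(e, obs, obs))                         # identical -> 0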
2.8 (c) Further Developments

As part of an investigation of the proposal of Zanazzi and Jona, Watson et al. plotted (r_r)_i as a function of topmost spacing for the (111) surface of copper. This is shown in figure 2.10, where Δd% gives the topmost spacing expressed as the percentage change from the bulk spacing, i.e.

    \Delta d\% = \frac{d - d_o}{d_o} \times 100,

where d_o is the bulk interlayer spacing and d is the topmost interlayer spacing. The curves shown in figure 2.10 are specifically for V_or = -9.5 eV; of the 16 beams available, only 9 are shown, for clarity. The reduced reliability index (r̄_r) for the 16 beams is plotted as the dashed line in the same figure, and the associated error

    e_r = \left\{ \left[ \sum_i \Delta E_i \left( (r_r)_i - \bar{r}_r \right)^2 \right] \Big/ \left[ (n-1) \sum_i \Delta E_i \right] \right\}^{1/2},    (2.42)

corresponding to the minimum of r̄_r, is indicated by the arrows. In (2.42), n is the number of beams considered. Watson et al. concluded that e_r indicates an unrealistically large error in Δd% for the data available.

Figure 2.10: Plots for Cu(111) of (r_r)_i for 9 individual beams versus Δd% with V_or = -9.5 eV. The dashed line shows the reduced reliability index (r̄_r) for the total of 16 beams. (After Watson et al.)

In fact, the top layer spacings indicated by the minima for all the individual curves are rather close to the spacing for the minimum in the dashed line, and this suggests that the uncertainty in spacing associated with the minimum value of r̄_r could be given by the standard error found from the distribution of top-layer spacings (d^i_min) indicated by the minimum for each individual curve,

    e_d = \left\{ \left[ \sum_i \Delta E_i \left( d^i_{min} - \bar{d}_{min} \right)^2 \right] \Big/ \left[ (n-1) \sum_i \Delta E_i \right] \right\}^{1/2},    (2.43)

where

    \bar{d}_{min} = \left( \sum_i \Delta E_i\, d^i_{min} \right) \Big/ \left( \sum_i \Delta E_i \right).    (2.44)

Figure 2.10 shows d̄_min ± 2e_d; this corresponds to -4.1 ± 1.2%. The introduction of e_d by Watson et al. makes a start on the problem of estimating uncertainties in results from LEED crystallography. Certainly, numerical reliability indices are required for this purpose; it is very hard to see how uncertainties could be helpfully evaluated solely from visual evaluations of I(E) curves. Another advantage of numerical indices is that they can be easily plotted in contour form. Again this was introduced by Watson et al., and the example in figure 2.11 shows a contour plot of r̄_r versus V_or and Δd% for Cu(111). According to the proposal of Zanazzi and Jona, the overall minimum in r̄_r in figure 2.11 corresponds to the values of V_or and Δd% which give the best agreement between the complete sets of experimental and calculated I(E) curves. Error bars shown for the minimum represent ±e_d and ±e_V (the standard error associated with the distribution of values of V_or for the minima of (r_r)_i, defined analogously to e_d); these indicate 68% confidence limits associated with the minimum of r̄_r.

Figure 2.11: Contour plot for Cu(111) of r̄_r versus Δd% and V_or. (After Watson et al.)

CHAPTER 3

Preliminary Work

3.1 General Experimental Procedures

3.1 (a) LEED Apparatus

As in all work on well defined crystal surfaces, LEED experiments must be carried out at low pressure (i.e. ≲10⁻⁹ torr). This section describes the general features of the conventional type of LEED apparatus which has been used in the majority of LEED experiments made so far. The discussion will be brief, but much more information can be obtained from the references provided. A review of the various modifications of LEED instruments is available.

A schematic diagram of the LEED apparatus used in this work is shown in figure 3.1. This involves a Varian FC12 chamber, which is constructed of non-magnetic stainless steel and is connected to a series of pumping facilities below the main chamber indicated in figure 3.1. The initial sorption pumping is done with high surface area molecular sieves (zeolites) in containers which are cooled by liquid nitrogen. These pumps can reduce the pressure of the system to ~10⁻³ torr, when the main sputter ion pump (200 l s⁻¹) can be started.
After baking the whole system for ~12 hours at 200°C (to remove adsorbed gases from the chamber walls), it is necessary to out-gas thoroughly all components of the system that are heated during an experiment. A titanium sublimation pump is available for extra pumping during both out-gassing and the actual experimental periods. Gases for adsorption studies, or for ion-bombarding in the cleaning process, can be introduced into the whole chamber through a leak valve from a gas inlet manifold. This part of the system is pumped by its own small ion pump (20 l s⁻¹) and it can be baked separately from the main chamber. The objective here is to limit the amount of impurities in the admitted gases to very low proportions in the main chamber. Details of pumping methods, measurement of pressure and associated techniques are given in reviews by Hobson, Lange and Tom.

Figure 3.1: (a) Schematic of the Varian FC12 UHV chamber. (b) Diagrammatic representation of the pumping system: IP = ion pump; TSP = titanium sublimation pump; SP = sorption pump.

Figure 3.2: (a) Schematic diagram of the electron optics used for LEED experiments. (b) Diagram showing the sample mounted on a tantalum supporting ring. (c) Electron bombardment sample heater. Hatched lines represent stainless steel parts, while the stipple pattern indicates the ceramic insulator.

The sample manipulator (Varian 981-2528) holds the crystal sample and enables the crystal to be translated as well as rotated, both about the axis of the chamber (to enable the sample, which is off-set by 2½", to be directed to different facilities) and about an axis in the horizontal plane (to enable the beam from the electron gun to make different angles of incidence (θ) with respect to the crystal). The sample holder has facilities for electron bombardment heating (figure 3.2(c)); the temperature of the crystal is measured with a Pt/13%Rh-Pt thermocouple junction in contact with the sample.

The electron gun (Varian 981-2125) produces an electron beam by thermionic emission from a hot tungsten cathode; these electrons are accelerated and collimated through anode plates. The typical incident beam used for LEED in this work (energy range 30-230 eV) has a current of about 1 μA and a beam diameter at the sample of ~0.75 mm. The same gun was used for Auger analysis at a typical energy of 1 keV and current of 10 μA. Reviews of the design and technology of low voltage electron guns include those by Rosebury and Kohl.

The electron optics (Varian 981-0127) (figure 3.2a) consists of a hemispherical phosphor screen and four concentric grids, each of ~80% transparency; the sample is positioned at the common centre of curvature of the grids and screen for LEED. In the usual mode of operation, the specimen and the grid closest to the sample are grounded to ensure that electrons travel through an
The second and the t h i r d grids are connected together and are held at a p o t e n t i a l which i s close to that on the cathode i n the electron gun; the objective i s to stop those electrons which have l o s t energy on i n t e r a c t i n g with the sample, while permitting only the e l a s t i c a l l y scattered electrons to pass through. The : fourth g r i d i s earthed. The e l a s t i c a l l y scattered electrons, a f t e r penetra-t i n g t h i s g r i d , are accelerated through about 5 keV onto the phosphor screen, where each beam d i f f r a c t e d from an ordered c r y s t a l surface shows up as a bright spot. The whole d i f f r a c t i o n pattern on the screen can be observed d i r e c t l y through the glass window and photographed. Another accessory needed for the LEED experiment i s the sputtering gun (Varian 981-2043) for cleaning the c r y s t a l by ion bombardment. The chamber i s surrounded by three orthogonal sets of square Helmholtz c o i l s to reduce the r e s i d u a l magnetic f i e l d to a l e v e l where i t s e f f e c t on the motion of electrons being studied i s minimized. 3.1 fh) Crystal Preparation The experiments reported i n t h i s thesis involve surfaces of rhodium cut from two sources of s i n g l e c r y s t a l ; one was purchased commercially (99.99% p u r i t y ) , the other was provided by another laboratory . To s t a r t the preparation process, the s i n g l e c r y s t a l i s oriented to the required sur-face plane by the Laue X-ray b a c k - r e f l e c t i o n technique and cut by spark I I I I erosion ( Agietron , AGIE, Switzerland). To correct for small deviations of o r i e n t a t i o n from the desired c r y s t a l face, the c r y s t a l s l i c e i s mounted i n -66-it I I a c r y l i c r e s i n ( Quickmount Fulton M e t a l l u r g i c a l Produce Corp., USA) and po l -ii ished with 5, 3 and 1 micron diamond paste on a p o l i s h i n g wheel ( Universal I I Polisher , Micrometallurgical Limited, T h o r n h i l l , Ontario.). A f t e r t h i s process, i t i s necessary to check again that the f i n i s h e d surface s t i l l has the required c r y s t a l l o g r a p h i c plane. This i s done by plac i n g the c r y s t a l on the Lau^ X-ray diffractometer so that the desired plane i s perpendicular to the X-ray beam; the whole goniometer and c r y s t a l assembly i s then trans-ferred to an o p t i c a l bench where a Ne-He laser beam i s direc t e d perpendicularly onto the surface and the angle of r e f l e c t i o n i s detected. This provides a te s t of whether the phys i c a l surface coincides with the required c r y s t a l plane. 1° Generally we aim to have the surface oriented to within — of the desired c r y s t a l plane. At t h i s stage the back of the sample i s spot welded onto a supporting tantalum r i n g (figure 3.2(b)), which i n turn i s mounted onto the manipulator. The sample and manipulator i s then placed i n the vacuum chamber, the l a t t e r i s closed and the chamber i s pumped down to a base pressure of -1x10 ^ t o r r a f t e r the standard out-gassing processes. AES indicates that sulphur, phosphorus and carbon are the impurities generally present i n the rhodium c r y s t a l s used i n our experiments; no sub-s t a n t i a l amounts of boron (Auger peak at 180 eV) has been detected although some other research groups [96,97] have reported appreciable amounts of t h i s impurity i n t h e i r rhodium samples. 
The cleaning processes are generally performed by cycles of heat treatment (700-1000°C for 10-60 min.) to drive most bulk impurities to the surface, and argon ion bombardment to sputter off the impurities at the surface. All impurities except carbon can be removed from rhodium surfaces by argon ion bombardment (typically 10^- torr of Ar at 0.1-1 microamps and ~1 keV for 10-30 min.). Immediately after sputtering, the carbon Auger signal (282 eV) always showed a relative increase; this appears to be associated with the low sputtering cross-section of carbon. However, after annealing at 700°C for a few minutes, AES indicates that the level of carbon contamination on the surface is reduced (presumably by back diffusion into the bulk) and LEED indicates that the surface has become ordered again. In preliminary studies, Auger spectra of the clean Rh(110) surface were studied as a function of crystal temperature (figure 3.3), and it was found that below 300°C carbon diffuses to the surface whereas above this critical temperature carbon apparently diffuses back into the bulk. Further general discussions on the preparation of clean surfaces are given in reviews by Farnsworth, Bauer and Jona.

Figure 3.3: Auger spectra of the clean Rh(110) surface as a function of crystal temperature.

3.1 (c) Detection of Surface Impurities

Surface impurities were detected in this work by means of Auger electron spectroscopy using the LEED optics as a retarding field analyzer [16,17]. Auger electrons of characteristic energies are present as small peaks superimposed on the high (but relatively constant) background of the intermediate regions of the N(E) vs E curve (figure 1.2), and these peaks can be enhanced by electronic differentiation.

Figure 3.4: Schematic diagram of the LEED optics used as a retarding field analyzer for Auger electron spectroscopy: MCA = multichannel analyzer.

With reference to figure 3.4, the final anode of the gun, the sample, and the first and fourth grids are grounded as for the normal LEED experiment, but for detecting Auger electrons the retarding potential applied on the two middle grids has a small modulating voltage ΔV = V sin ωt (typical values of V used in these experiments are <10 V). With this modulating voltage, the total current collected on the screen (held at a positive potential of about 300 V) is modulated. Using a lock-in amplifier, the components of the current corresponding to the first and second harmonics (frequencies ω and 2ω respectively) are readily identified.
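The connection between these harmonic amplitudes and the secondary electron distribution follows from a Taylor expansion of the collected current; the following is the standard retarding-field argument, sketched here in the present notation (E is the retarding energy, I(E) the screen current):

  I(E + V sin ωt) = I(E) + I'(E) V sin ωt + ½ I''(E) V² sin²ωt + ...

and since sin²ωt = ½(1 - cos 2ωt), the component at frequency ω has amplitude I'(E)V while the component at 2ω has amplitude ¼ I''(E)V² (for modulation small enough that higher-order terms are negligible). Because the retarding grids pass only electrons with energies above E, I(E) is proportional to the integral of N(E') from E up to the primary energy, so I'(E) ∝ -N(E) and I''(E) ∝ -dN(E)/dE: the ω and 2ω amplitudes therefore trace out N(E) and dN(E)/dE respectively.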
A plot of the amplitude of these harmonic components as a function of the retarding energy E produces the secondary electron distribution N(E) (figure 1.2) and its first derivative dN(E)/dE respectively [31]. The typical Auger spectrum shown in figure 1.10 is plotted in dN(E)/dE form. Theoretically, the sensitivity of the spectra measured by this method is approximately 1% of a monolayer [16,17]. Higher sensitivities to impurities are possible when Auger spectra are measured with a cylindrical mirror analyzer. Such an analyzer was not available at the time the experimental work reported in this thesis was done.

Measured peak energies and relative peak heights for the Auger spectrum of rhodium are summarized in Table 3.1. The variations in peak energies from other published measurements must be attributed to errors in the energy scale and to the lack of an appropriate contact potential correction; also, uncertainties are inevitably increased for low-intensity peaks. Energies calculated by Coghlan and Clausing for free atoms with an ionization correction are also listed in the table; these values are helpful for guiding the assignment to particular Auger transitions.

Table 3.1: Observed and calculated Auger transition energies for rhodium.

  Observed (eV)                        Relative          Calculation (f)   Assignment (f)
  (a)    (b)    (c)    (d)    (e)      intensity % (a)
  144    145    --     --     --       10                145.0             M4 N1 N1
  176    174    165    170    175      7                 174.0             M5 N1 N2,3
  207    208    200    200    210      10                208.0             M5 N2,3 N2,3
  223    226    222    222    227      27                221.5             M5 N1 N4,5
  255    260    256    256    259      55                255.5             M5 N2,3 N4,5
  302    306    302    302    303      100               303.0             M5 N4,5 N4,5

  (a) this work; (b) Grant and Haas; (c) Palmberg et al.; (d) Castner et al.;
  (e) Chan et al.; (f) Coghlan and Clausing.

3.1 (d) LEED Intensity Measurements

Diffracted beam intensities in LEED have most often been measured either directly as diffracted beam currents with a moveable Faraday cup collector inside the chamber, or indirectly as the brightness of spots on the phosphor screen with an external spot photometer. A variant of the latter approach is the photographic technique introduced by Stair et al. and developed further by Frost et al., who employed a computer-controlled vidicon camera to analyze the photographic film and thereby produce experimental I(E) curves. This latter procedure has been used in the present work. Basically, photographs of the LEED screen are taken at a series of electron energies and measurements are made of the integrated optical densities for the diffracted spots on the film negatives. Assuming the measured optical density (D) for a spot is proportional to the amount of light which caused the darkening, and hence to the associated electron flux which hits the screen, then D divided by the incident electron current is proportional to the diffracted beam intensity. Such measurements inevitably give relative beam intensities.

The LEED patterns displayed on the phosphor screen were photographed through the window of the vacuum chamber using a Nikon F2 35 mm camera with an 85 mm f1.8 lens and a K2 extension ring. Photographs were taken generally for the range of incident beam energies 30-250 eV in 2 eV intervals using fixed exposures of 1 s at f4, the incident current and energy being recorded for each photograph. Using a motor-driven unit to wind the film and a 250-exposure film back, LEED patterns could be photographed over this energy range in less than 5 minutes. After taking a set of photographs, the surface purity was routinely checked with AES to assess whether any contamination occurred during data collection.
Standard Kodak Tri-X emulsion film was used and the film was processed in a continuous length in Acufine developer at 73°F for 7 minutes. The photographic negatives were analysed with the system indicated in figure 3.5. The vidicon camera and associated electronics comprise part of the Computer Eye System (Spatial Data Systems Inc.), which was interfaced to a mini-computer (Data General Nova 2). The film held on the light table is scanned continuously by the camera and the image is displayed on the TV monitor as a 512×480 (x,y) array. The intensity (z value) of any element of the image may be sampled by triggering the digitizer with appropriate instructions from the computer. The profiler displays directly on the monitor the variation of intensity along any selected vertical line of the image. The joystick controls the position (coordinates) of the flashing spot on the TV monitor, and is used to start the analysis by pointing at the spot to be analysed.

Assuming a Gaussian distribution for the spot intensity, the background intensity (z_back) is estimated by averaging the z-values of all elements lying in an annulus of mean radius 2σ (where 2σ is the width at half maximum of the intensity distribution). The integration procedure involves summing all the values of (z - z_back) within the circle of radius 2σ, and this value is divided by the incident beam current to give a measure of the diffracted beam intensity. After the integration, the coordinates of the intensity maximum are determined and stored as the new starting coordinates for the next frame. Since the area scanned for each spot always includes the position of that spot on the next frame, the computer can automatically follow each spot as it moves toward the centre of the screen with increasing energy. The whole analysis of each spot takes less than 30 seconds, and the I(E) curves may be displayed on an oscilloscope and plotted on an xy recorder.

Figure 3.5: Schematic diagram of the apparatus used to analyse the photographic negatives of LEED patterns.
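The integration procedure described above maps naturally onto a few lines of array code. The following is a minimal sketch, not the original Nova 2 program; the annulus width and all names here are my own choices.

```python
import numpy as np

def integrate_spot(z, x0, y0, sigma, i_incident):
    """Relative diffracted-beam intensity from one digitized frame.

    z           2-D array of film densities (one video frame)
    x0, y0      approximate spot centre carried over from the previous frame
    sigma       half of the width at half maximum, so 2*sigma is the FWHM
                of the assumed Gaussian spot profile (pixels)
    i_incident  incident beam current recorded for this photograph
    """
    yy, xx = np.indices(z.shape)
    r = np.hypot(xx - x0, yy - y0)

    # Background: mean density in an annulus of mean radius 2*sigma;
    # the +/- 0.5*sigma annulus width is an assumption, not given in the text.
    annulus = np.abs(r - 2.0 * sigma) < 0.5 * sigma
    z_back = z[annulus].mean()

    # Sum (z - z_back) over the disc of radius 2*sigma and normalize by the
    # incident current to obtain a relative intensity.
    disc = r <= 2.0 * sigma
    intensity = (z[disc] - z_back).sum() / i_incident

    # The density maximum inside the disc becomes the starting centre for the
    # next frame, letting the program track the spot as it moves.
    y_new, x_new = np.unravel_index(np.argmax(np.where(disc, z, -np.inf)), z.shape)
    return intensity, x_new, y_new
```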
3.2 Structural Determinations of Low Index Surfaces of Rhodium

3.2 (a) Previous LEED Intensity Calculations for Rhodium Surfaces

Watson et al. [43,44,108,109] analysed measured I(E) curves from low index surfaces of rhodium with multiple-scattering calculations, and their structural conclusions are summarized in Table 3.2. These calculations used two types of atomic potential:

i) The self-consistent band structure potential provided by Moruzzi, Janak and Williams; this potential was designated V_Rh^MJW.

ii) The superposition potential calculated for the central atom in a Rh13 cluster by a linear superposition of atomic charge densities. This potential was designated V_Rh13.

With reference to Table 3.2, Watson et al. obtained discrepancies between the two atomic potentials which resulted in different geometrical structures and inner potential values being reported for the same surface. Generally a band structure potential is likely to be preferred, although it has been suggested [61] that the superposition potential can produce very similar results to the band structure potential for the purpose of LEED crystallography. Watson et al. supported this suggestion in a determination of the geometrical structure of the Cu(111) surface.

However, for rhodium, upon evaluating the level of agreement between experimental and calculated I(E) curves, both with visual analyses and reliability indices, Watson et al. were unable to select one of these atomic potentials as being preferred. This thereby left significant uncertainties in the details of the structures of the Rh(100) and (111) surfaces. One of the objectives of my initial research was to perform further studies on these surfaces in order to elucidate this problem.

Table 3.2: Structural determinations of low index surfaces of rhodium (Watson et al.).

                     V_Rh^MJW                          V_Rh13
  Surface    Δd% (%)      V_or (eV)    r_r      Δd% (%)      V_or (eV)    r_r
  Rh(100)    -1.8±1.0     -19.6±0.8    0.17     2.5±0.9      -11.5±0.7    0.16
  Rh(111)    -4.2±0.5     -18.6±0.5    0.16     -0.7±0.8     -11.3±0.7    0.12
  Rh(110)    --           --           --       -2.5±1.2     -11.2±0.6    0.10
  Rh(110)    --           --           --       -1.0±1.2     -10.5±0.8    0.09

Table 3.3: Structural determinations of low index surfaces of rhodium (this work).

                     [V_Rh^MJW]                        V_Rh13
  Surface    Δd% (%)      V_or (eV)    r_r      Δd% (%)      V_or (eV)    r_r
  Rh(100)    1.0±0.9      -12.8±0.4    0.09     0.5±1.2      -14.0±0.6    0.09
  Rh(111)    -1.6±0.8     -11.2±0.6    0.08     --           --           --
  Rh(110)    -3.3±1.5     -10.9±0.8    0.12     --           --           --
  Rh(110)    -0.5±0.7     -9.6±0.9     0.09     --           --           --

3.2 (b) Further Studies

In this work, multiple scattering calculations were repeated for normal incidence on the (100), (110) and (111) surfaces of rhodium, and the calculated LEED intensities were compared with the experimental I(E) curves previously produced by Watson et al. for the (110) and (111) surfaces. Although a new set of experimental data for normal incidence on Rh(100) was obtained and used in the comparison in this work, these new experimental I(E) curves did not show any significant deviations from the previous data [111].

Prior to making the multiple scattering calculations, the calculation of phase shifts from the two different atomic potentials was completely re-investigated. In doing this an error was detected in the value used previously for the potential at the muffin-tin radius*, and this resulted in an incorrect set of phase shifts associated with the V_Rh^MJW potential. After making the correction for the band structure potential, a new set of phase shifts was generated for different ℓ up to a maximum value of 7 (figure 3.6). These new phase shift values generated from the band structure potential of Moruzzi, Janak and Williams are designated as [V_Rh^MJW] to avoid confusion with the erroneous V_Rh^MJW of Watson et al.

*The possibility of a numerical error was first suggested to K.A.R. Mitchell by J.J. Rehr (Univ. of Washington). The actual error was later detected by P.R. Watson and W.T. Moore while calculating some phase shifts for zirconium.

Figure 3.6: Energy dependence of rhodium phase shifts (ℓ = 0-7) for the potential [V_Rh^MJW].

With the corrected phase shifts from the band structure potential, multiple-scattering calculations for normal incidence were repeated for the (100), (110) and (111) surfaces assuming regular packing arrangements as indicated previously [43,44,108,109]. The non-structural parameters were kept unchanged from those used in the previous work. Specifically, the surface atomic vibrations were assumed to be isotropic and layer-independent, the surface Debye temperature being taken as 406 K (i.e. √0.7 times the bulk value of 480 K [112,115]). The imaginary part of the inner potential (V_0i) was equated to -1.17E^(1/3), guidance being provided by the widths of primary Bragg-type peaks in experimental I(E) curves according to equation (2.4) and the energy dependence proposed in equation (2.5). All the interlayer spacings below the second rhodium layer were fixed at the bulk values (i.e. 1.9022 Å for Rh(100), 1.3452 Å for Rh(110) and 2.1960 Å for Rh(111)). The topmost interlayer spacings (i.e. the perpendicular distance between the first and second rhodium layers) were allowed to vary from a 10% contraction from the bulk value to a 5% expansion in increments of 2.5% for the (100) and (111) surfaces, while for the (110) surface calculations were made with the topmost spacing varying from a 12.5% contraction to a 2.5% expansion.
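These non-structural parameter choices, and the grid of trial spacings, are compact enough to state explicitly; a small sketch follows (the names are mine, and Δd% is defined as in the text below).

```python
import numpy as np

THETA_D_BULK = 480.0                          # K, bulk Debye temperature of Rh
THETA_D_SURF = np.sqrt(0.7) * THETA_D_BULK    # ~406 K, value used in the calculations

def v_0i(energy_ev, alpha=1.17):
    """Imaginary part of the inner potential, V_0i = -alpha * E^(1/3) (eV)."""
    return -alpha * energy_ev ** (1.0 / 3.0)

def delta_d_percent(d, d_bulk):
    """Topmost interlayer spacing as a percentage change from the bulk value."""
    return 100.0 * (d - d_bulk) / d_bulk

# Trial topmost spacings, e.g. for Rh(100): -10% to +5% in 2.5% steps.
D_BULK_100 = 1.9022  # angstroms
trial_spacings = [D_BULK_100 * (1.0 + p / 100.0)
                  for p in np.arange(-10.0, 5.0 + 1e-9, 2.5)]
```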
The multiple-scattering calculations over the energy range of 30-250 eV were done for the (10), (01), (11), (02) and (12) beams for all three surfaces (beam notations are illustrated in figure 3.7). The calculations utilized the RFS method for the (100) and (111) surfaces, whereas the layer-doubling method was used for the (110) surface to avoid any possibility that the reflected intensities might not converge for the smaller interlayer spacings.

Figure 3.7: (a) Schematic diagrams of the (100), (110) and (111) surfaces of rhodium. The dotted circles represent rhodium atoms in the second layer. (b) The corresponding LEED patterns indicating the beam notation as used in the text.

Intensities calculated with the corrected phase shifts from the band structure potential for the (100) surface were compared with the new experimental I(E) curves. Visual analysis suggested that the best correspondence occurred when Δd% is between 0 and 2.5% (here Δd% indicates the topmost interlayer spacing (d) expressed as the percentage change from the bulk value d_o, i.e. Δd% = [(d - d_o)/d_o] × 100). The analysis with the reliability index proposed by Zanazzi and Jona indicated that the minimum value for r_r was 0.085 and occurred when Δd% = 1.0±0.9% and V_or = -12.8±0.4 eV.
To assess the correspondence between the two potentials V_Rh13 and [V_Rh^MJW], another comparison with r_r was made between the same experimental I(E) curves and the curves calculated from V_Rh13. This time the minimum value of r_r was again 0.085, although for the conditions Δd% = 0.5±1.2% and V_or = -14.0±0.6 eV. These two results, which are summarized in Table 3.3, are in contrast to the previous report of Watson et al. (Table 3.2). Also summarized in Table 3.3 are the conditions for minimum r_r from comparisons of intensities calculated using [V_Rh^MJW] with one set of experimental data for the (110) and (111) surfaces; each set of experimental data covers 5 beams in the energy range 30-200 eV. Corresponding results from the potential V_Rh13 obtained previously by Watson et al. are in Table 3.2.

Comparison of our new results obtained from the corrected phase shifts from the band structure potential [V_Rh^MJW] for three low-index surfaces of rhodium (Table 3.3) with those obtained previously from the superposition potential V_Rh13 (Table 3.2) allows the conclusion that the values of Δd% and V_or given by the two potentials are equal to within the indicated uncertainties for each set of experimental measurements. This suggests that the two rhodium potentials are equivalent for the purpose of LEED crystallography, and provides support for the suggestion [61] that superposition potentials from cluster calculations can be useful when self-consistent band structure potentials are unavailable. This situation for the rhodium surfaces is now consistent with that found previously for Cu(111).

3.3 Studies with the Reliability Index of Zanazzi and Jona

3.3 (a) Introduction

The basic approach for surface crystallography with LEED involves varying structural and non-structural parameters in the multiple-scattering calculations in order to find the best correspondence between calculated and experimental I(E) curves for all diffracted beams. At present the high cost of the multiple-scattering calculations inhibits a full variation of non-structural parameters to maximize the agreement in these comparisons, and so far only V_or has been subjected to much variation. In part this has been because of a common feeling that the other non-structural parameters do not have a strong effect on determined geometries. Thus, the usual procedure in LEED crystallography involves finding a plausible choice of non-structural parameters (e.g. V_0i, θ_D) at the start of the calculation and keeping these parameters fixed from then on. This philosophy is tested here, particularly with the reliability index r_r suggested for LEED by Zanazzi and Jona. The variation of non-structural parameters appears to provide a fairly stringent test of r_r. Hence one objective of this work is to assess where the use of r_r for the variation of non-structural parameters is able to give results similar to a visual analysis and where it does not. The content of this section has already been published along with some supplementary observations of P.R. Watson and S.J. White.
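Because r_r is used repeatedly in what follows, a compact statement of the single-beam index may be helpful. The sketch below follows the form of the Zanazzi-Jona index as commonly quoted in the literature; equations (2.36)-(2.40) of chapter 2 define the exact conventions used in this work, and the numerical differentiation here is my own choice.

```python
import numpy as np

def r_zanazzi_jona(e, i_obs, i_cal, rho=0.027):
    """Single-beam reduced Zanazzi-Jona reliability index (a sketch).

    e      uniform energy grid (eV)
    i_obs  experimental I(E) curve
    i_cal  calculated I(E) curve for one trial structure
    rho    normalization constant (0.027 in the original scheme)
    """
    de = e[1] - e[0]
    # Scaling constant matching the integrated intensities of the two curves.
    c = np.trapz(i_obs, dx=de) / np.trapz(i_cal, dx=de)

    d1_obs, d1_cal = np.gradient(i_obs, de), np.gradient(i_cal, de)
    d2_obs, d2_cal = np.gradient(d1_obs, de), np.gradient(d1_cal, de)

    # Weight emphasizing regions where the curvatures disagree.
    w = np.abs(c * d2_cal - d2_obs) / (np.abs(d1_obs) + np.abs(d1_obs).max())

    return np.trapz(w * np.abs(c * d1_cal - d1_obs), dx=de) / (
        rho * np.trapz(i_obs, dx=de))
```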
3.3 (b) Relations between the Reliability Index and the Imaginary Potential

The imaginary part of the inner potential (V_0i) provides a phenomenological description of the inelastic scattering of electrons by a solid; the lifetimes of electrons of well-defined energy in the solid place a restriction (via the uncertainty principle) on peak widths in I(E) curves according to equation (2.4). An increase in V_0i corresponds to a reduction in the proportion of elastically scattered electrons and to a broadening of peaks in the I(E) curves.

The initial selection of a value of V_0i for rhodium was made (utilizing equations (2.4) and (2.5)) from the measured widths of kinematical peaks in experimental I(E) curves; on this basis a plausible expression for V_0i is -αE^(1/3) with α equal to about 1.17. A point of interest here is to see whether changes in α would modify conclusions on the geometries of rhodium surfaces, and whether the reliability-index analysis would indicate that α = 1.17 is the most appropriate value of α. In order to examine this, further multiple-scattering calculations were made for normal incidence on the (111) surface of rhodium at a series of values of α, specifically 1.17, 1.47, 1.76, 2.05 and 2.34, with all other non-structural parameters fixed at the values used previously in section 3.2 (b). The (111) surface was convenient for this study, since the calculations required a comparatively small number of beams and the RFS method was applicable.

I(E) curves for the (01) beam for the five values of α, with a 2.5% contraction of the topmost layer, are shown in figure 3.8 together with the experimental data. The main features of each individual curve are maintained, although an increase in α gives a general lowering of intensities and, most significantly, a broadening of the peaks. Visual evaluations of all diffracted beams suggested the best agreement between experimental and calculated I(E) curves occurred when α is in the range 1.47 to 1.76.

Figure 3.8: The experimental I(E) curve for the (01) beam at normal incidence from the Rh(111) surface compared with five corresponding curves calculated with the potential [V_Rh^MJW] and Δd% = -2.5% for the parameter α varying from 1.17 to 2.34.

These comparisons were also made with the numerical reliability index, and the conditions for minimum r_r for each value of α are summarized in Table 3.4. These results indicate that variation of α has only a minor effect on the determined topmost interlayer spacing, and this supports the common assumption that variation of V_0i is not essential in LEED crystallography. It is satisfying also that the insensitivity of geometrical structure to V_0i is recognized by r_r. Nevertheless it must be noted that even though closely similar geometrical structures are indicated by the different values of α, the values of r_r at the different minima are not equivalent. The lowest r_r value corresponds to α close to 1.76, and this suggests that the initial choice of 1.17 may not be optimal.
Both visual and r-index evaluations are consistent in indicating that α is larger than 1.17, and this supports the use of the index r_r. On the other hand, values of α larger than 1.17 seem less consistent with determining V_0i from equation (2.4).

The values of r_r reported in Table 3.4 are unusually low, especially those for the higher values of α. The trends found did not seem consistent with the original conclusions of Zanazzi and Jona, and we wondered whether the tendency for low values of r_r to be found for high α could be an artefact associated with the value of ρ being fixed at 0.027 in the calculation of (r_r)_i in equation (2.40). According to Zanazzi and Jona, this value of ρ was obtained by averaging (r_r)_i for matching random pairs of experimental and calculated I(E) curves. One uncertainty was whether complexity of structure was fully built into the scheme of Zanazzi and Jona.

Table 3.4: Conditions for best agreement between experimental I(E) curves at normal incidence for Rh(111) and curves calculated with the potential [V_Rh^MJW], according to the reliability indices r_r and r_m, for different values of α.

  α            1.17        1.47        1.76        2.05        2.34
  Δd% (%)      -1.6±0.8    -2.5±0.5    -2.3±0.6    -2.3±0.5    -2.0±0.6
  V_or (eV)    -11.2±0.6   -11.8±0.7   -11.7±0.6   -11.6±0.7   -11.0±0.8
  r_r          0.080       0.042       0.035       0.037       0.041
  r_m          0.985       0.510       0.430       0.440       0.490

In general one would expect that an experimental I(E) curve that contains a lot of structure would be more difficult to match to calculated curves than one with less structure, and therefore in setting up a many-beam reliability index perhaps the former should have a relatively greater weighting than the latter. One approach to this is to allow the value of ρ to vary for each experimental curve. In order to make an initial assessment of whether such effects could be relevant to the trends of r_r with α shown in Table 3.4, we replaced ρ in equation (2.40) with a new quantity

  r_(st.line,expt) = A^(-1) ∫ [ |I''_obs| / (|I'_obs| + |I'_obs|_max) ] |I'_obs| dE     (3.1)

This quantity varies with each experimental I(E) curve according to the amount of structure it involves. Equation (3.1) is obtained from equation (2.36) by comparing an experimental I(E) curve with a straight line corresponding to I_cal = I'_cal = I''_cal = 0. Using r_(st.line,expt) instead of ρ in equation (2.40), we then set up a new overall reduced reliability index designated as r_m. The values of r_m for the variation of α are also summarized in Table 3.4. However, it turned out for the case considered here that minimizing r_m gave identical values of Δd% and V_or to those found by minimizing r_r; the numerical values of the two indices are different, but to a good approximation corresponding values of r_r can be obtained by dividing values of r_m by 12.1. This observation does not support the possibility that the low values of r_r found for high α (hence high V_0i) were associated with the constant value of ρ used in equation (2.40).

Further investigation suggested that the high value of α needed for better matching between calculated and experimental I(E) curves appears to be associated with the way that the experimental intensities were handled in the analysis.
The initial value of α = 1.17 was obtained by considering individually measured I(E) curves, whereas the experimental I(E) curves actually used in the comparisons with the calculated I(E) curves were averaged over appropriate sets of beams which are expected to be, and to a good approximation are, symmetrically equivalent. However, minor errors in the experiment [107,108] can lead to corresponding peak positions in "equivalent" sets of beams being shifted slightly (e.g. by 1 or 2 eV) from the mean values, and this inevitably leads to some broadening of structure in the averaged I(E) curves. Upon investigating the averaged experimental I(E) curves, a choice of α as suggested by equations (2.4) and (2.5) for the Rh(111) surface is 1.65. This value is in reasonable agreement with the conclusions noted above from the visual evaluation and the r-factor analysis.

These studies indicate the following conclusions:

1) The determined surface geometrical structure is insensitive to changes in V_0i values. This supports the usual approach of keeping V_0i fixed in the multiple scattering calculations, and of choosing suitable values of V_0i from equation (2.4).

2) The index r_r proposed by Zanazzi and Jona is consistent with a visual analysis for identifying values of V_0i which optimize agreement between experimental and calculated I(E) curves.

3) Further improvements are needed in the experimental measurements for ensuring that I(E) curves from symmetrically-related beams really are equivalent.

3.3 (c) The Reliability Index and the Variation of Surface Debye Temperature

The effects of atomic vibrations are incorporated into multiple scattering calculations by means of temperature-dependent atomic scattering factors involving the Debye temperature (θ_D), as indicated in equations (2.9)-(2.10). Strictly, the atomic vibrations are expected to be layer dependent and to decrease into the bulk. However, most LEED studies have used a single effective Debye temperature (θ_D,eff) for all layers probed by the analysed electrons. In principle a better, although still simple, possibility is to give the topmost layer a "surface" value (θ_D,surf) and to assume the second and all deeper layers can be characterized by the bulk value (θ_D,bulk). In the previous multiple scattering calculations made so far in this thesis, θ_D,bulk for rhodium was taken as 480 K and θ_D,eff was estimated as √0.7 θ_D,bulk. Although this type of choice seems plausible, it is nevertheless made on intuitive, rather than rigorous, grounds; moreover, for assessing further the choice of θ_D,surf it would seem helpful to determine the effect of its variation on the structural conclusions, as considered for variations of V_0i in section 3.3 (b). For this investigation, multiple scattering calculations for the Rh(111) surface were made by varying θ_D,surf over the range of 200-600 K in 100 K steps, all other non-structural parameters being fixed at the values given in section 3.2 (b) (except α, which was restricted to 1.76).
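The role θ_D,surf plays in the calculations can be illustrated with the Debye-Waller factor. The sketch below uses the standard high-temperature Debye expression for the isotropic mean-square displacement; this generic textbook form is an assumption on my part and is not necessarily the exact convention of equations (2.9)-(2.10).

```python
import numpy as np

HBAR = 1.0546e-34   # J s
KB   = 1.3807e-23   # J / K
AMU  = 1.6605e-27   # kg

def mean_square_u(theta_d, t_kelvin, mass_amu):
    """High-temperature Debye estimate of <u^2> (m^2, isotropic)."""
    m = mass_amu * AMU
    return 9.0 * HBAR**2 * t_kelvin / (m * KB * theta_d**2)

def debye_waller(dk_inv_m, theta_d, t_kelvin, mass_amu=102.9):
    """Intensity damping exp(-2M), with 2M = |dk|^2 <u^2> / 3."""
    return np.exp(-dk_inv_m**2 * mean_square_u(theta_d, t_kelvin, mass_amu) / 3.0)

# A lower theta_D damps the large-momentum-transfer (high-energy) beams more
# strongly, but leaves peak positions essentially unchanged:
for theta in (200.0, 406.0, 600.0):
    print(theta, debye_waller(dk_inv_m=1.0e11, theta_d=theta, t_kelvin=300.0))
```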
Figures 3.9 and 3.10 show two different sets of contours of r_r plotted against θ_D,surf. For both, the contours are reasonably symmetrical about a horizontal line, and the minimum values of r_r are closely indicated to correspond to the values V_or = -11.5 eV and Δd% = -2%. These values are comparable with those reported previously in Tables 3.3 and 3.4 for a fixed value of θ_D,surf. An unexpected feature of these plots, however, is that they point to values of θ_D,surf in the physically unreasonable range of being greater than θ_D,bulk (i.e. 480 K for rhodium).

Figure 3.9: Contour plot of r_r versus θ_D,surf and V_or for normal incidence data from Rh(111), where the calculations use the potential [V_Rh^MJW] with α = 1.76 and θ_D,bulk = 480 K.

Figure 3.10: Contour plot of r_r versus θ_D,surf and Δd% for normal incidence data from Rh(111), where the calculations use the potential [V_Rh^MJW].

Within the conventional procedure for including atomic vibrations in multiple-scattering calculations [24,65], the main effect of θ_D is to modify overall intensities without appreciably affecting structure in the calculated I(E) curves. This can be seen in figure 3.11, where I(E) curves calculated for the (01) beam of Rh(111) with Δd% = -2.5% are plotted for values of θ_D,surf between 200 and 600 K. The most noticeable trend is that the lower values of θ_D,surf especially give relatively lower calculated intensities at the higher energies. This contrasts with the trend observed in the experimental I(E) curves, where relatively higher intensities are found at the higher energies. This suggests that the tendency to high values of θ_D,surf picked out by the use of r_r is reflecting a real trend in the curves compared, although it is physically unreasonable for θ_D,surf to be greater than θ_D,bulk. Our feeling is that the origin of this discrepancy may be associated with general problems in the data collecting processes. Changes both in grid transparency and in the solid angle presented to the camera as the spots move toward the centre of the screen with increasing energy can cause apparent variations in relative intensities. Legg et al. made corrections for these factors and demonstrated a consequent lowering in relative beam intensities at the lower energies. In future LEED intensity measurements we are planning to incorporate corrections for these effects; also, it is possible that some refinement in the background correction could be needed at high energies when spots crowd together on the LEED screen.

Figure 3.11: The experimental I(E) curve for the (01) beam at normal incidence from the Rh(111) surface compared with five corresponding curves calculated with the potential [V_Rh^MJW], Δd% = -2.5%, and α = 1.76 for the parameter θ_D,surf varying from 200 to 600 K.
At present we feel that the source of the discrepancy indicated by the untenably large value of θ_D,surf is associated with aspects of the experimental measurements, and although this has not yet been unambiguously confirmed, two conclusions do seem secure. The first is that the Zanazzi-Jona reliability index appears able to give a fair assessment of the relative intensities of I(E) curves when θ_D,surf is varied in the calculations (although an appreciable sensitivity in r_r to relative intensities in successive sections of I(E) curves has not been recognized previously). Secondly, surface structural conclusions seem unaffected by variation of θ_D,surf in the calculations.

Although there would be advantages in refining the treatment of atomic vibrations in LEED intensity calculations, the evidence presented here does suggest that modifying values of θ_D,surf is not going to affect significantly conclusions about surface geometry. This suggestion is supported by an independent analysis of the Rh(111) surface with multiple-scattering calculations by Chan et al. Using θ_D,bulk as 300 K and θ_D,surf as 250 K, Chan et al. obtained Δd% for the topmost rhodium layer as 0±5%. Although these error limits seem rather large, this conclusion is nevertheless consistent with our determination for the Rh(111) surface (Table 3.3). Generally we feel, for the present stage of development of LEED crystallography, that the multiple-scattering calculations might just as well continue to use a θ_D,eff obtained from the experimental measurements, or alternatively a θ_D,surf for the topmost layer specified as a certain fraction of θ_D,bulk.

3.4 Studies of the Adsorption of some Gaseous Molecules on Rhodium Surfaces

3.4 (a) Bibliography of Overlayer Structures on Rhodium Surfaces

The properties of well-defined surfaces of rhodium have been less extensively investigated than those of many other transition metals, even though rhodium shows a high degree of catalytic activity for many reactions. Table 3.5 summarizes studies where the general chemisorptive properties of rhodium have been investigated with LEED. Auger electron spectroscopy was not available for monitoring surface purity in the initial studies by Tucker [119-122], although this technique was available for all other studies reported in Table 3.5. Much of the work on rhodium that has emerged over the past several years has been concerned mainly with either LEED patterns or adsorption kinetics.

An important objective for part of the research reported in this thesis was to determine some detailed surface structures with LEED for adsorption on rhodium. The initial aim was to investigate some comparatively simple structures involving O or S adsorbed on low-index surfaces and to compare with similar systems already investigated with LEED crystallography, for example adsorption on nickel. The LEED analyses resulting from the adsorption of H2S on the (100) and (110) surfaces of rhodium are described in chapters 4 and 5 respectively.
The next section reviews observations for the adsorption of O2 on Rh(100), a system that was originally planned to be investigated via analyses of LEED intensities.

Table 3.5: Surface structures reported for adsorption of small gaseous molecules on low index surfaces of rhodium.

  Adsorbate   Rh(100)                  Rh(110)                  Rh(111)
  O2          p(2x2)-O [a,b,f]         disorder [c]             (2x2)-O [a,e,g]
              c(2x2)-O [a,f]           c(2x4)-O [c]
              (3x1)-O [b,f]            c(2x8)-O [c]
              c(2x8)-O [b]             (2x2)-O, (2x3)-O,
                                       (1x2)-O, (1x3)-O
  CO          c(2x2)-CO [a]            (2x1)-CO [d]             (√3x√3)R30°-CO [a]
              hexagonal overlayer [a]  c(2x2)-C [d]             (2x2)-CO [a,e]
                                                                (4x1)-CO
  CO2         c(2x2)-CO [a]            --                       (√3x√3)R30°-CO [a]
                                                                (2x2)-CO [a,e]
  NO          c(2x2)-NO [a]            c(4x2)-NO [a]            c(2x2)-NO [a]
  H2S         p(2x2)-S [f]             c(2x2)-S [f]             --
              c(2x2)-S [f]

  [a] Castner et al.; [b] Tucker; [c] Tucker [120,121]; [d] Marbrow and Lambert;
  [e] Grant and Haas; [f] this work [123,124]; [g] Weinberg et al.

3.4 (b) Adsorption of O2 on Rh(100)

The sample used in this study was cut from the single crystal provided by Tucker, and it was previously used by Watson et al. for a LEED analysis of the clean Rh(100) surface. Prior to starting the adsorption work, the surface was repolished and checked to ensure that it was within ½° of the (100) plane. After mounting and installing in the vacuum chamber, the sample was cleaned according to the procedures described in section 3.1, and annealed until the LEED pattern exhibited a sharp (1x1) pattern with low background intensities.

The sample was heated to 300°C before high purity O2 (99.99%, Matheson) was introduced into the vacuum chamber at a pressure of 10^- torr. After 5 minutes a sharp (3x1) LEED pattern corresponding to two different domains was observed, and an Auger spectrum taken after the formation of this pattern failed to detect the presence of any impurities. The Auger peaks of oxygen at around 510 eV could not be detected. This effect has been observed previously for oxygen adsorption on some transition metals [125,126] and it appears to be associated with the low ionization cross-section for initiating the Auger process for adsorbed oxygen. A sharp (1x1) pattern characteristic of the clean Rh(100) surface can be restored (presumably by desorption of the oxygen [96,127]) upon heating at 1000°C for 10 minutes. After returning to the base pressure, the process could be repeated with a new dose of oxygen applied under the same conditions as indicated above. Sharp (3x1) patterns could always be obtained, although on different occasions variations were found in the domain structure. These ranged from two equally populated domains, to two unequally populated domains, and even to the appearance of a single domain (figure 3.12). From time to time faint half-order diffracted spots were observed superimposed on the (3x1) pattern, but the pattern never developed into a complete (2x2) pattern even though the crystal was exposed to O2 for longer periods of time. Furthermore these spots could be removed by heating at 700°C for a few seconds; then, after cooling to room temperature, the LEED pattern showed only the sharp (3x1) pattern.
A well-defined p(2x2) LEED pattern (figure 3.12) could be observed when the clean Rh(100) surface was exposed to O2 at 10^- torr for 5 minutes. An apparent, but incompletely developed, c(2x2) pattern could also be detected if the crystal was left in the constant atmosphere of O2 at 10^- torr for a further 30 minutes. This was observed as an increase in the intensities of fractional-order spots of type (1/2 1/2), while the other fractional-order spots showed relative decreases in intensities.

These results for the adsorption of oxygen on Rh(100) agree partly with earlier work done by Tucker [119] and Castner et al. [96]. Tucker reported (2x2), (3x1) and (2x8) patterns for increasing oxygen exposures, but he did not observe the c(2x2) pattern. Castner et al. reported a p(2x2) pattern which transformed to the c(2x2) pattern at higher oxygen exposures, but no (3x1) pattern was detected in that work over a wide range of temperature and pressure. My observation of the p(2x2) pattern seems broadly in agreement with those observed in these other two studies. Also I had some evidence, through faint LEED patterns, for the transformation of a p(2x2) pattern into a c(2x2) pattern with oxygen exposure. One possibility for the discrepancies between these different studies could involve other gases (e.g. CO) being displaced from the walls of the vacuum chamber on admitting oxygen to the system. Unfortunately, the mass spectrometer did not function properly during these experiments and so we had no independent assessment of the gases in the chamber. However, no evidence was found for the build up of impurities on the surface on adding oxygen to the system, although it was again unfortunate that the retarding field analyzer as used at the time of this work was not sensitive enough to detect the oxygen. Nevertheless, care was taken during the heat treatments to operate under conditions where carbon does not appreciably migrate from the bulk; the Auger spectra confirmed that carbon impurities remained at low levels during these experiments.

Figure 3.12: Photographs of some p(2x2) and (3x1) LEED patterns observed at normal incidence from the adsorption of oxygen on a Rh(100) surface. (a) Rh(100)-p(2x2)-O at 70 eV, (b) Rh(100)-(3x1)-O, single domain, at 174 eV, (c) Rh(100)-(3x1)-O, two equally populated domains, at 100 eV, (d) Rh(100)-(3x1)-O, two equally populated domains, at 152 eV.

Two complete sets of photographs for the (3x1) patterns were taken on different occasions over the energy range 30-200 eV for normal incidence. The films were analysed to yield the I(E) curves shown in Appendices A1 and A2; the first is for two equally populated domains and the second is for a single domain type only. These I(E) curves have not yet been analysed with multiple-scattering calculations, especially because we have no clues at present to the possible structure, and some geometrical models that should be tested are complex.
An attack on the problem of the structure of the (3x1) surface would be aided by the availability of more detailed experimental data, for example on surface coverage (from AES with a cylindrical mirror analyzer) and on possible oxygen bonding sites from high-resolution electron energy loss spectroscopy [128].

CHAPTER 4

LEED Analysis of the Rh(100)-p(2x2)-S Surface Structure

4.1 Introduction

Knowledge of the structures adopted by atomic and molecular species adsorbed on surfaces of rhodium is of importance for an understanding of the catalytic properties of this metal. This chapter reports an analysis with LEED for the (2x2) structure formed by adsorbing H2S on the clean (100) surface; this appears to represent the first such structural analysis for adsorption on rhodium. H2S was chosen for this initial study since some structural information is available for sulphur adsorption (via H2S) on other transition metal surfaces, thereby providing points of reference for assessing the structure of Rh(100)-p(2x2)-S. One immediate objective is to gain information about the chemical bonding at these surfaces.

4.2 Adsorption of H2S on Rh(100)

A clean (100) surface of rhodium with a sharp (1x1) LEED pattern (obtained by the procedures described in section 3.1) was exposed to high purity H2S (Matheson) at 10^-8 torr for 1 min. After pumping away the excess gas, the surface was annealed at 300°C for 1 min. and a sharp p(2x2) LEED pattern was obtained with good contrast (figure 4.1). Auger spectra (figure 4.2) taken after the formation of this pattern indicated S as the main foreign component, with Auger peak height ratios 152eV(S)/302eV(Rh) = 2/3. Small traces of C could also be detected, but its proportions were minimized by the low temperature annealing. We believe that H2S dissociated on the Rh(100) surface, in part because we also obtained this p(2x2)-S LEED pattern by heating the metal such that sulphur impurity segregated to the surface from the bulk.

Figure 4.1: Photographs of LEED patterns observed at normal incidence from the adsorption of S on a Rh(100) surface. (a) Rh(100)-c(2x2)-S at 80 eV, (b) Rh(100)-p(2x2)-S at 72 eV, (c) Rh(100)-p(2x2)-S at 114 eV, (d) Rh(100)-p(2x2)-S at 168 eV.

Figure 4.2: Auger spectra of Rh(100) surfaces taken with a 1.5 keV, 10 microamp beam at different stages during the preparation of Rh(100)-p(2x2)-S.

Exactly similar observations have been reported by Gauthier et al. and by Demuth et al. [130,131] in their preparations of Ni(100)-p(2x2)-S and Ni(100)-c(2x2)-S, and also by Castner et al. in their studies of the Rh(100) surface. The I(E) curves measured from the Rh(100)-p(2x2)-S surface obtained by the migration of the bulk sulphur impurity agreed closely with those prepared by H2S adsorption. This provided some tentative evidence that the adsorption of H2S on this rhodium surface involves dissociative adsorption. Direct evidence for H2S dissociating on a metal surface was provided by Keleman and Fischer's study on the Ru(100) surface with the additional techniques of UV photoemission and thermal desorption spectroscopy. This work indicated that H2S dissociated upon adsorption over the entire range of coverage.
In Rh(100)-p(2x2)-S the adsorbed sulphur atoms are held strongly to the surface and could be removed only by extensive Ar+ bombardment. After cleaning the Rh(100) surface, a c(2x2) pattern could also be formed on exposure to H2S. This required heating the crystal at 400°C for 4 min. in an atmosphere of H2S (1×10^-7 torr), and on cooling, the LEED pattern of the surface exhibited a c(2x2)-S overlayer pattern (figure 4.1). Auger spectra for this surface gave a ratio of peak heights 152eV(S)/302eV(Rh) = 4/3, which suggests that the S coverage for this structure is approximately twice that of the Rh(100)-p(2x2)-S structure.

I(E) curves were measured for Rh(100)-p(2x2)-S at normal incidence for the beams (01), (11), (02), (12), (0 1/2), (1 1/2), (1/2 1/2), (0 3/2) and (1/2 3/2), using the beam notation shown in figure 4.3. These measurements involved photographing the LEED screen at 2 eV intervals over the energy range 40-200 eV, and analyzing the photographic negatives with the computer-controlled vidicon camera as described in section 3.1. Two independent sets of experimental data were collected.

Figure 4.3: Beam notation for the LEED pattern of the Rh(100)-p(2x2)-S structure.

4.3 Computational Scheme

I(E) curves were calculated with the layer-doubling method, using a conventional muffin-tin-type potential, for some surface models in which only sulphur was present in an overlayer. The scattering by the atomic potentials was described by eight phase shifts. A band structure potential was used for the atomic regions in the substrate. For the atomic regions in the sulphur overlayer, the superposition potential obtained by Demuth et al. was used. This superposition potential was also used by Van Hove and Tong in an analysis of surface structures formed by S on Ni(100). The real part of the inner potential (V_or) was initially set at -12.0 eV for both the overlayer and the substrate (although this was refined later in the comparison with experimental data), while the imaginary part (V_0i) was equated to -1.51E^(1/3) eV. The effective Debye temperatures were taken as 406 K for rhodium (as discussed in section 3.3) and 236 K for sulphur, following Demuth et al. [131].

The geometrical models considered for Rh(100)-p(2x2)-S were simplified by fixing all interlayer spacings in the metal at the bulk value (1.9022 Å); this follows our previous conclusion for clean Rh(100) that this surface is not reconstructed and its topmost spacing is within 2.5% of the bulk value (section 3.2). Three types of structural model were tested, all corresponding to a quarter monolayer of S atoms. These models are shown in figure 2.8 and are designated according to the number of nearest-neighbour metal atoms (as already described in section 2.7) as 4F, 1F and 2F. The packing of hard spheres, with radii given by Pauling, was used to guide the possible values of the topmost interlayer spacing for each model type; this analysis specifically considered spacings between 2.1 and 2.7 Å for the 1F model, between 1.4 and 2.2 Å for the 2F model, and between 1.0 and 1.6 Å for the 4F model. Symmetry could be used in the calculations at normal incidence, and the numbers of beams used in the calculations are summarized in Table 2.1. For the 2F model it is necessary to average appropriate calculated beam intensities according to the possible symmetrically-equivalent domains.
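The resulting set of trial geometries is small enough to enumerate directly; a minimal sketch follows (the 0.1 Å step and the names are my own assumptions, since the thesis does not state the spacing increment used).

```python
import numpy as np

# Hard-sphere-guided ranges of topmost S-Rh interlayer spacing (angstroms).
SPACING_RANGES = {
    "4F": (1.0, 1.6),   # four-fold hollow ("centre") site
    "2F": (1.4, 2.2),   # two-fold bridge site
    "1F": (2.1, 2.7),   # one-fold on-top site
}

def trial_geometries(step=0.1):
    """Yield (site, spacing) pairs to feed the layer-doubling calculation."""
    for site, (lo, hi) in SPACING_RANGES.items():
        for d in np.arange(lo, hi + 1e-9, step):
            yield site, round(float(d), 2)

# e.g. list(trial_geometries()) -> [("4F", 1.0), ("4F", 1.1), ..., ("1F", 2.7)]
```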
4.4 Results

I(E) curves measured at normal incidence for the (01) and (1/2 1/2) sets of beams are shown in figure 4.4 for two independent experiments. Beams within each set should be symmetrically equivalent, both with regard to peak positions and other structural features. The correspondences seen in the figure suggest that the experimental data are closely reproducible, and this supports their general reliability. The small variations which do occur must be attributed to experimental errors (involving such factors as uneven response of the screen, imperfections of the crystal surface, and some uncertainty in setting the angle of incidence); such errors, although small, do inevitably limit the level of agreement possible between calculation and experiment. To minimize any artefacts in the comparisons with the calculated intensities, measured I(E) curves for sets of beams which are theoretically equivalent were averaged and digitally smoothed (by two operations of the three-point smoothing filter) prior to comparing with the calculations.

Figure 4.4: Comparison for the (1/2 1/2) and (01) beams of I(E) curves from two different experiments measured at normal incidence.
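The averaging and smoothing just described is simple enough to state exactly; a sketch (the equal weights and fixed endpoints are my assumptions, since they are not specified in the text):

```python
import numpy as np

def three_point_smooth(y, passes=2):
    """Apply the three-point moving average the given number of times."""
    y = np.array(y, dtype=float)
    for _ in range(passes):
        y[1:-1] = (y[:-2] + y[1:-1] + y[2:]) / 3.0
    return y

def averaged_beam_curve(equivalent_curves):
    """Average I(E) curves of symmetry-equivalent beams (rows of a 2-D array
    on a common energy grid), then smooth the result."""
    return three_point_smooth(np.mean(equivalent_curves, axis=0))
```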
Some comparisons of experimental and calculated I(E) curves for Rh(100)-p(2x2)-S are shown in figure 4.5. Visual comparison of all the available data points to the conclusion that the centre (4F) model gives a better overall correspondence to the experimental I(E) curves than the bridge (2F) and on-top (1F) models. For the integral-order beams alone, reasonable match-ups between experimental and calculated I(E) curves are found for the (01) and (02) beams with all three models (i.e. 4F at 1.3 Å, 2F at 1.9 Å and 1F at 2.3 Å), but the 4F model also gives a good correspondence for the (11) beam, whereas the 2F and 1F models fail in this regard. As expected, the fractional-order beams are generally more sensitive to the locations of the overlayer atoms, and the overall conclusion from a visual analysis of all the data for the fractional-order beams is that the 4F model gives the best account of the experimental I(E) curves, with the Rh-S interlayer spacing close to 1.3 Å. However, the agreement is not complete: relative peak intensities are not properly accounted for, and in a few instances the 4F model fails to reproduce features in the experimental I(E) curves. In particular, the calculated I(E) curve for the (0 1/2) beam for the 4F model with the Rh-S interlayer spacing equal to 1.3 Å does not reproduce the peak present in the experimental curve at 110 eV; also, for the (0 3/2) beam the 4F model shows an extra small peak at 130 eV which could not be detected in the experimental curve.

Figure 4.5: Comparison of experimental I(E) curves for various integral- and fractional-order diffracted beams from Rh(100)-p(2x2)-S with the curves calculated for S adsorbed on the 4F, 2F and 1F sites, at the topmost Rh-S interlayer spacing indicated for each curve.

For some fractional-order beams (especially (0 3/2) and (1/2 3/2)), calculated I(E) curves from the bridge (2F) model give reasonable agreement with the experimental I(E) curves for a topmost spacing of 1.9 Å, but this adsorption site is less favorable than the 4F site for the (0 1/2) and (1/2 1/2) beams. The on-top (1F) model gives poor visual agreement between calculation and experiment for most beams, although some agreement is present for the (1/2 3/2) beam for a topmost spacing of 2.7 Å. Illustrated in figure 4.6 are comparisons of experimental I(E) curves for the (0 1/2) and (1/2 3/2) beams with those calculated from the 4F model for various values of the topmost interlayer spacing ranging from 1.0 Å to 1.6 Å. Although the level of agreement is not complete, the best correspondence seems to occur with the S-Rh interlayer spacing between 1.2 and 1.3 Å.

Figure 4.6: Comparison of experimental I(E) curves for the (0 1/2) and (1/2 3/2) beams from the Rh(100)-p(2x2)-S surface with those calculated for S adsorbed on the 4F site for a range of topmost Rh-S interlayer spacings.

The correspondence between the experimental and calculated I(E) curves for the Rh(100)-p(2x2)-S surface was also assessed by evaluating the reliability index (r_r) proposed by Zanazzi and Jona. Figures 4.7(a)-4.7(c) give contour plots of r_r as a function of the Rh-S spacing and V_or for each of the three models when compared with one set of experimental data. Comparison with the other set of experimental data produced similar results, as summarized in Table 4.1. The analysis with r_r unambiguously showed that the 4F model gives the best correspondence between the experimental and calculated I(E) curves. For this model, r_r is minimized (figure 4.7(a)) with the Rh-S interlayer spacing equal to 1.30±0.03 Å and V_or equal to -13.6±0.9 eV, where the uncertainties are given as ±e_d and ±e_v as indicated in section 2.8. The uncertainties correspond to 68% probabilities according to the analysis of Watson et al. The minimum value of r_r for the 4F model is 0.26; this represents a moderate level of agreement and suggests that the structure is at least probably correct according to a criterion of Zanazzi and Jona [45]. The bridge (2F) model also gives a localized minimum, specifically at a Rh-S interlayer spacing of 1.94±0.08 Å and V_or equal to -11.6±1.4 eV.

Table 4.1: Conditions for minima of r_r for different models of Rh(100)-p(2x2)-S.

  surface model       expt. no.   ΔE† (eV)   S-Rh (Å)      V_or (eV)     r_r
  centre site (4F)    1           856        1.30±0.03     -13.6±0.9     0.26
                      2           932        1.31±0.03     -13.8±0.8     0.25
  bridge site (2F)    1           856        1.94±0.08     -11.6±1.4     0.30
                      2           932        1.94±0.08     -13.5±1.2     0.28
  on-top site (1F)    1           856        no localized minimum
                      2           932        no localized minimum

  † total range of energy compared.
surface model       expt. no.   S-Rh (Å)      V_or (eV)     r_r
centre site (4F)    856         1.30±0.03     -13.6±0.9     0.26
                    932         1.31±0.03     -13.8±0.8     0.25
bridge site (2F)    856         1.94±0.08     -11.6±1.4     0.30
                    932         1.94±0.08     -13.5±1.2     0.28
on-top site (1F)    856         no localized minimum
                    932         no localized minimum

The corresponding minimum value of r_r for the 2F model (0.30) is higher than that of the 4F model (0.26), although these r_r values are closer than expected on the basis of the visual analysis. Further suggestive support for the 4F model, from the reliability-index analysis, is indicated by the larger uncertainties associated with the bridge model. The contour plot of r_r in figure 4.7(c) does not indicate a localized minimum for the on-top (1F) model; also, values of r_r are comparatively high over the complete ranges of V_or and Rh-S interlayer spacing considered. However, it was observed in separate calculations that the contour plots of r_r for the integral-order beams alone and for the fractional-order beams alone did show local minima corresponding to Rh-S interlayer spacings of 2.3 Å and 2.7 Å respectively; this indicates the reason why the calculated I(E) curves shown in figure 4.5 for the 1F model are for the spacings 2.3 and 2.7 Å.

4.5 Discussion

The evidence presented above indicates that the surface structure Rh(100)-p(2x2)-S has the sulphur atoms adsorbed on the four-fold (4F) sites of the Rh(100) surface at about 1.30 Å above the topmost rhodium layer. This corresponds to a nearest-neighbour S-Rh distance equal to 2.30 Å. Evidence that this is a reasonable bond distance is provided by the average values found by X-ray crystallography in Rh17S15 (2.33 Å) [134] and in Rh2S3 (2.37 Å); also, Rh-S distances in unhindered coordination complexes generally range from 2.23 to 2.38 Å [136-138]. Often structures from LEED crystallography are discussed in terms of effective radii (r_eff) for the adsorbed species.

Figure 4.7: Contour plots of r_r for Rh(100)-p(2x2)-S versus V_or and Rh-S interlayer spacing for (a) the 4F model, (b) the 2F model, and (c) the 1F model. Error bars indicate standard errors as defined in chapter 2.

By considering Rh as being unchanged by adsorption, so that it retains the metallic radius of 1.34 Å, an effective radius of S is obtained by subtracting the rhodium metallic radius from the Rh-S nearest-neighbour distance; this gives a value of r_eff for S equal to 0.96 Å. This value can be compared with other values for S (Table 4.2) deduced with LEED crystallography for adsorption on metallic surfaces. From Table 4.2 it is clear that the r_eff of S obtained in this work is similar to values obtained from some other studies, although it is probably not reasonable to expect r_eff of S to be constant in different bonding situations (involving, for example, different metal atoms, different substrate dimensions and especially different coordination sites). Although hard-sphere radii (e.g. r_eff) have often been used for interpretations of surface bond distances, it would clearly be preferable to relate such discussions more closely to the concepts of covalent bonding.
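The effective-radius arithmetic is elementary and can be checked directly; in the sketch below the Ni and Fe metallic radii are assumed values, inserted only to reproduce the other entries of Table 4.2.

    # Effective radius of adsorbed S: subtract the (assumed unchanged)
    # metallic radius of the substrate atom from the LEED M-S distance.
    metallic_radius = {"Rh": 1.34, "Ni": 1.24, "Fe": 1.24}   # Å (Ni, Fe assumed)

    def r_eff(d_MS, metal):
        return d_MS - metallic_radius[metal]

    print(round(r_eff(2.30, "Rh"), 2))   # 0.96 Å, the value quoted above
    print(round(r_eff(2.18, "Ni"), 2))   # 0.94 Å, as for S/Ni(100) in Table 4.2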
That some M-X surface bond lengths correspond in good approximation to single-bond values is established in Table 4.3, where some comparisons are given of M-X distances for the heavier chalcogens on (100) surfaces of fcc metals. Rationalizations of such correlations, and their extensions to other surface systems, have been given by Mitchell [143,144] based on the hybridization schemes for metals given by Altmann, Coulson and Hume-Rothery and on relative valencies and the bond length - bond order relation given by Pauling. The point of immediate interest, however, is that the Rh-S bond length found in the LEED analysis of Rh(100)-p(2x2)-S is within 0.01 Å of the single-bond value, thereby indicating a general consistency with surface bond lengths reported from other examples of S, Se and Te adsorption on fcc (100) surfaces. This correlation had not been recognized at the time we initially published our LEED analysis for Rh(100)-p(2x2)-S.

Table 4.2: Effective radii of chemisorbed sulphur atoms on various metal surfaces.

System       Overlayer structure   Bonding site   M-S bond distance (Å)   r_eff of sulphur (Å)   References
S/Ni(100)    c(2x2)                4F             2.18                    0.94                   [131]
S/Ni(100)    p(2x2)                4F             2.18                    0.94                   [131]
S/Ni(110)    p(2x2)                4F             2.17, 2.35†             0.93                   [140]
S/Ni(111)    p(2x2)                3F             2.02                    0.78                   [140]
S/Ir(111)    (√3x√3)R30°           3F             2.28                    0.92                   [147]
S/Rh(110)    c(2x2)                4F             2.12, 2.45†             0.77                   [123]
S/Rh(100)    p(2x2)                4F             2.30                    0.96                   [124]
S/Fe(100)    c(2x2)                4F             2.30                    1.06                   [142]

†Each S atom is closer to a metal atom in the second layer than to the atoms in the first layer.

Table 4.3: Comparisons of M-X bond distances for chalcogen atoms adsorbed on (100) surfaces of fcc metals with Pauling's single-bond lengths [133].

System       Overlayer structure   Bonding site   M-X distance by LEED (Å)   M-X single-bond length (Å)   References
S/Ni(100)    c(2x2)                4F             2.18                       2.19                         [131]
S/Ni(100)    p(2x2)                4F             2.18                       2.19                         [131]
Se/Ni(100)   c(2x2)                4F             2.28                       2.32                         [131,140]
Se/Ni(100)   p(2x2)                4F             2.32                       2.32                         [131,140]
Te/Ni(100)   c(2x2)                4F             2.59                       2.52                         [131,140,149]
Te/Ni(100)   p(2x2)                4F             2.52                       2.52                         [131,140]
Te/Cu(100)   p(2x2)                4F             2.48                       2.54                         [148]
S/Rh(100)    p(2x2)                4F             2.30                       2.29                         [124]

Generally it is felt that the surface structure reported here for Rh(100)-p(2x2)-S gives bond dimensions which are broadly consistent with X-ray crystallographic data for S-Rh bond lengths and with LEED results for adsorption of S atoms on other surfaces. The level of agreement reached between the calculated and experimental I(E) curves is not complete, and the origins of the deficiencies are presently unknown. The number of model structures considered in the calculations for this work is limited; in principle more complicated models are possible, but since no conflict seems present with the principles of surface structural chemistry, as they are presently evolving, we do not feel that further multiple-scattering calculations on more complex surface models are required at this time.
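The single-bond column of Table 4.3 can be reproduced by summing Pauling single-bond radii. The radius values below are taken from standard tabulations and should be treated as assumed inputs rather than the exact numbers used in [133].

    # Pauling single-bond radii (Å); assumed values from standard tables.
    r1 = {"Ni": 1.15, "Cu": 1.17, "Rh": 1.25, "S": 1.04, "Se": 1.17, "Te": 1.37}

    for metal, chalcogen in [("Ni", "S"), ("Ni", "Se"), ("Ni", "Te"),
                             ("Cu", "Te"), ("Rh", "S")]:
        d = r1[metal] + r1[chalcogen]
        print(f"{metal}-{chalcogen}: {d:.2f} Å")
    # -> 2.19, 2.32, 2.52, 2.54 and 2.29 Å, matching the single-bond column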
An inevitable problem with the trial-and-error approach in LEED crystallography is that, however good the correspondence may be between experimental and calculated I(E) curves for a given structure, there is no absolute way of ruling out the possibility that some other (untested) structure could give even better agreement. Although the origins of some discrepancies between the experimental and calculated intensities found here are not yet clear, we believe the results indicate that the structure most likely involves S atoms adsorbed at 1.3 Å above the four-fold sites of the Rh(100) surface.

CHAPTER 5
LEED Analysis of the Rh(110)-c(2x2)-S Surface Structure

5.1 Introduction

Having determined the surface geometry for sulphur adsorbed on the (100) surface of rhodium, we were interested in comparing with the situation for S adsorbed on the more open (110) surface. A second reason for making a LEED analysis of this additional structure was suggested by earlier reports that two different adsorption sites are indicated by LEED crystallography for atomic adsorption on (110) surfaces of face-centred cubic metals. Oxygen atoms are reported to adsorb on the short-bridge sites of both Ni(110) and (impurity-stabilized) unreconstructed Ir(110), whereas sulphur atoms adsorb on the centre (four-fold) sites of Ni(110). It is hoped that an investigation of the adsorption of S on the Rh(110) surface may give further insights into surface chemical bonding.

5.2 Experimental

The first part of this study involved obtaining a clean (110) surface of rhodium, and this followed closely the procedures described earlier in this thesis and in other work reported from our laboratory. The study was performed on a single-crystal slice cut from a rod of purity 99.99% purchased from Research Organic/Inorganic Chemical Corp. After pump-down in the vacuum chamber, the initial Auger spectrum indicated some contamination from phosphorus, sulphur and carbon. The S and P impurities could be removed from the surface by argon-ion bombardment (1 keV at 5 microamps for 20 minutes), but, as previously, a relative increase in the surface concentration of C was indicated. However, this impurity apparently diffused into the bulk on heating at 300°C. After several cycles of ion bombardment and annealing, the surface showed both an essentially clean Auger spectrum (figure 5.1(a)) and a sharp (1x1) LEED pattern. The resulting Auger spectrum is similar to that obtained for the cleaned (100) surface of rhodium (figure 4.2).

After obtaining the well-defined LEED pattern characteristic of the clean Rh(110) surface, high-purity H2S (Matheson) was allowed to adsorb on the surface by the following procedure. First the sample was heated at 300°C for 1 minute, and H2S was let into the vacuum chamber at a pressure of 5x10^-7 torr for 1 minute. After the excess gas was pumped out, LEED showed a diffuse ring pattern indicating that H2S (or S) had adsorbed with only partial ordering on the surface. The sample was then heated at 300°C for 3 minutes and allowed to cool down.
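For orientation, the H2S dose just described corresponds to a modest exposure; the quick check below uses the conventional definition 1 L = 1e-6 torr s.

    # Exposure in langmuirs for the H2S dose described above.
    pressure_torr = 5e-7
    time_s = 60.0
    exposure_L = pressure_torr * time_s / 1e-6
    print(f"{exposure_L:.0f} L")   # -> 30 L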
At this point LEED indicated that the ring pattern had been replaced by traces of (1/2 1/2) spots, characteristic of a c(2x2) pattern, but the spot intensities were weak. With further heating at 700°C for 2 minutes, LEED showed a stable and sharp c(2x2) pattern (figure 5.2) which could be removed only by argon-ion bombardment. The Auger spectrum indicated no other detectable impurities and a ratio of the Auger peak heights S(152):Rh(302) approximately equal to 3:4 (figure 5.1(b)).

Figure 5.1: Auger spectra for a Rh(110) surface when cleaned and when containing a c(2x2) overlayer of sulphur.

Figure 5.2: Photographs of LEED patterns observed at normal incidence from adsorption of S on the Rh(110) surface: (a) Rh(110) at 144 eV, (b) Rh(110)-c(2x2)-S at 78 eV, (c) Rh(110)-c(2x2)-S at 102 eV, (d) Rh(110)-c(2x2)-S at 150 eV.

For the purposes of beam intensity measurements, two sets of photographs were taken: one at normal incidence over the energy range 22 to 220 eV, and the other for off-normal incidence (specifically θ = 10°, φ = 135° from [100]) from 22 to 160 eV. The photographic negatives were analyzed with the computer-controlled Vidicon camera as described in section 3.4. For normal incidence, I(E) curves were measured for 9 integral-order beams and for 5 fractional-order beams using the beam notation indicated in figure 5.3. These are:

(01), (02), (03), (10), (11), (12), (13), (20), (21), (1/2 1/2), (1/2 3/2), (1/2 5/2), (3/2 1/2), (3/2 3/2).

Figure 5.3: Beam notation for the LEED pattern from the Rh(110)-c(2x2)-S surface structure.

The I(E) curves for the integral-order beams were found to be rather similar to those of the clean (110) surface; this suggests that the production of the Rh(110)-c(2x2)-S structure did not involve any appreciable changes in the positions of the Rh atoms from those in the clean surface. Typical experimental I(E) curves for normal incidence are shown in figure 5.4. The similarities for the beams which should be equivalent are not as close as those generally found from the Rh(100)-p(2x2)-S structure; this probably indicates larger deviations from normal incidence, although there may be extra degrees of roughness for the (110) surface. The complete sets of intensity data for both directions of incidence are collected in Appendices A5-A6.

Figure 5.4: Experimental I(E) curves for two sets of beams which are expected to be equivalent for the Rh(110)-c(2x2)-S structure.

5.3 Calculations

The simplest models for the c(2x2) translational symmetry associated with atoms adsorbed on an unreconstructed (110) surface of a face-centred cubic metal have already been shown in figure 1.8. These models are designated according to the sites of adsorption, namely: the centre or four-fold (4F) model, the on-top or one-fold (1F) model, the short-bridge (2SB) model and the long-bridge (2LB) model. I(E) curves for the various required diffracted beams were calculated using the layer-doubling method for all of these models.
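The beam sets listed above follow from the c(2x2) translational symmetry: the overlayer adds beams at (h+1/2, k+1/2) to the integral-order substrate beams. A small enumeration sketch is given below; the Rh lattice constant and the cutoff are assumed illustrative inputs, and no symmetry reduction is applied.

    import numpy as np

    # Rectangular fcc(110) surface mesh for Rh, assuming a = 3.80 Å.
    a1, a2 = 3.80, 3.80 / np.sqrt(2.0)
    b1, b2 = 2*np.pi/a1, 2*np.pi/a2          # reciprocal mesh (Å^-1)

    def beams(gmax):
        integral, fractional = [], []
        for h in np.arange(-3.0, 3.5, 0.5):
            for k in np.arange(-3.0, 3.5, 0.5):
                half_h, half_k = (h % 1) == 0.5, (k % 1) == 0.5
                if half_h != half_k:
                    continue                 # c(2x2): extras only at (h+1/2, k+1/2)
                if np.hypot(h*b1, k*b2) <= gmax:
                    (fractional if half_h else integral).append((h, k))
        return integral, fractional

    integral, fractional = beams(gmax=3.5)   # cutoff chosen arbitrarily
    print(len(integral), len(fractional))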
The computing times were reduced by exploiting the symmetry at normal incidence, and by adding the adsorbate layer separately to both the bottom and the top of the substrate stack, after the reflection and transmission matrices had converged for the substrate alone (this typically requires 8 to 16 layers), to give diffracted beam intensities for the 4F and 1F models from a single set of multiple-scattering calculations (similarly the 2LB and 2SB models could be treated together). All four structural models considered have two perpendicular mirror planes; 49 symmetrically inequivalent beams were included in the calculation to ensure convergence.

The same non-structural parameters were used in the multiple-scattering calculations on Rh(110)-c(2x2)-S as for the analysis of the Rh(100)-p(2x2)-S structure. Specifically, the Rh potential was characterized by phase shifts (to l = 7) derived from a band structure calculation [110]; the real part of the constant potential (V_or) between the atomic spheres was set initially at -12.0 eV; a superposition potential [131] was used for S; the surface Debye temperatures were taken as 406 and 236 K for Rh and S respectively, while the imaginary part (V_oi) of the constant potential between all spheres was equated to -1.51 E^(1/3) eV. The structural parameters for the Rh(110)-c(2x2)-S surface were simplified by fixing all interlayer spacings for Rh(110) at the bulk value (1.345 Å); this follows our previous observations that the clean Rh(110) surface is not reconstructed and that the topmost interlayer spacing is contracted by only 3% from the bulk value [109,150]. The Rh-S spacings were varied over the following ranges: 0.65-1.25 Å for the 4F model, 2.0-2.6 Å for the 1F model, 1.1-1.7 Å for the 2LB model and 1.6-2.2 Å for the 2SB model.

Preliminary attempts were made to calculate the diffracted beam intensities for the conditions measured in the experiment for off-normal incidence (θ = 10°, φ = 135°). Symmetry could not now be exploited, and hence the total number of beams needed in the calculation is greatly increased over that for normal incidence. Around 175 beams would be required at 200 eV, and we found that the consequent computational requirements were too expensive for us to proceed with these calculations. The experimental data for off-normal incidence have, however, been collected in the appendix.
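The jump from 49 symmetrically inequivalent beams at normal incidence to around 175 beams off-normal can be rationalized with a simple counting estimate. The sketch below counts only propagating beams from the free-electron relation k = 0.5123 sqrt(E) Å^-1; real calculations also carry evanescent beams and, at normal incidence, reduce the count by symmetry, so the number printed is only an order-of-magnitude check.

    import numpy as np

    A_SUPER = 2 * 3.80 * (3.80 / np.sqrt(2.0))   # c(2x2) cell area on Rh(110), Å^2

    def propagating_beams(E_eV):
        # Reciprocal points g with |g| < k fit inside a disc of area pi*k^2;
        # one superlattice beam occupies (2*pi)^2 / A_SUPER of reciprocal area.
        k = 0.5123 * np.sqrt(E_eV)
        return int(np.pi * k**2 * A_SUPER / (2*np.pi)**2)

    print(propagating_beams(200.0))   # ~85 propagating beams at 200 eV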
5.4 Results

Some comparisons of experimental and calculated I(E) curves are given in figures 5.5 and 5.6. Figures 5.5(a)-5.5(c) compare experimental I(E) curves for the (10), (01) and (3/2 1/2) beams with those calculated for the 4F, 1F, 2SB and 2LB models for various Rh-S interlayer spacings. Visual comparisons show poor agreement for the short-bridge (2SB) and long-bridge (2LB) models, and while the on-top (1F) model produced a reasonable correspondence for the (10) beam, there was little agreement for the other beams. Visual comparisons over the complete range of data unambiguously indicated that the best correspondence between the experimental and calculated I(E) curves is provided by the 4F model with the Rh-S interlayer spacing in the range 0.75 to 0.85 Å (figure 5.6). Discrepancies are apparent, especially for some relative peak heights, although at the present stage of development of LEED crystallography the general correspondence can (we believe) be classified as "good".

Figure 5.5: Comparison of some experimental I(E) curves from Rh(110)-c(2x2)-S with those calculated for the four structural models over a range of topmost interlayer spacings: (a) the (01) beam, (b) the (10) beam, and (c) the (3/2 1/2) beam.

Figure 5.6: Comparison of experimental I(E) curves for some integral- and fractional-order beams from Rh(110)-c(2x2)-S with those calculated for the 4F model with sulphur either 0.75 or 0.85 Å above the topmost rhodium layer.

Figure 5.7: Contour plots of r_r for Rh(110)-c(2x2)-S versus V_or and interlayer spacing for the four different structural models.

The comparisons between experimental and calculated I(E) curves were also assessed by evaluating the reliability index proposed by Zanazzi and Jona [45]. Figure 5.7 gives contour plots of r_r as a function of Rh-S spacing and V_or for each of the four models considered here. Again there is clear evidence that the centre (4F) model gives the best correspondence between the experimental and calculated intensities. The minimum value of r_r (0.165) represents a good level of agreement [45], and it corresponds to V_or = -12.2±0.8 eV and a Rh-S interlayer spacing of 0.77±0.04 Å. For the other models, r_r was always sufficiently large (>0.35) to indicate a poor correspondence between the experimental and calculated I(E) curves.

5.5 Discussion

The evidence just presented indicates that the Rh(110)-c(2x2)-S structure has the sulphur atoms adsorbed on the centre (4F) sites of the Rh(110) surface at about 0.77 Å above the topmost rhodium layer. The multiple-scattering calculations made here assumed that all metal-metal distances correspond to the normal bulk values. Tentative evidence in support is provided by an additional analysis with the reliability index r_r: we used this index to assess the level of correspondence between the experimental I(E) curves for the beams (10), (01), (11) and (12) for the overlayer structure and those calculated for the clean surface. For these conditions, we found r_r was minimized at the value of 0.22 with the topmost interlayer spacing of rhodium expanded by just 1% over the bulk value.
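Contour plots such as figure 5.7 amount to a grid search over the two parameters. The following sketch shows the bookkeeping; r_index here is a made-up smooth stand-in for the full comparison of experimental and calculated curves, chosen only so that the example runs and returns the quoted 4F minimum.

    import numpy as np

    def r_index(d, Vor):
        # Placeholder surface with a single minimum near the 4F solution.
        return 0.165 + ((d - 0.77) / 0.3)**2 + ((Vor + 12.2) / 8.0)**2

    d_grid = np.arange(0.65, 1.26, 0.01)       # Å, range scanned for the 4F model
    Vor_grid = np.arange(-18.0, -6.0, 0.2)     # eV
    R = np.array([[r_index(d, V) for V in Vor_grid] for d in d_grid])
    i, j = np.unravel_index(R.argmin(), R.shape)
    print(f"min r_r = {R[i, j]:.3f} at d = {d_grid[i]:.2f} Å, V_or = {Vor_grid[j]:.1f} eV")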
Figure 5.8 indicates interatomic distances in the vicinity of adsorbed sulphur atoms in the Rh(110)-c(2x2)-S structure, assuming there is no relaxation of the rhodium structure. It is apparent that the four-fold hole in the Rh(110) surface is sufficiently large that the sulphur atom can penetrate quite deeply; in fact sulphur becomes considerably closer to the rhodium atom directly below it in the second metal layer than to the four neighbouring rhodium atoms in the first layer. The respective distances are Rh_II-S = 2.12 Å and Rh_I-S = 2.45 Å. Similar observations have also been made from LEED crystallographic analyses for S adsorbed on the Ni(110) surface, for which the corresponding distances are Ni_II-S = 2.17 Å and Ni_I-S = 2.35 Å, and for O adsorbed on the Fe(100) surface (figure 5.9), for which Fe_II-O = 2.02 Å and Fe_I-O = 2.08 Å. By contrast, adsorption of S on the Fe(100) surface does not involve significant interaction of S with the second-layer Fe atom; in this case S is too large to sink deeply into the four-fold hole of the Fe(100) surface. The differences between O and S chemisorbed on fcc(110) surfaces can plausibly be associated with size effects: O appears too small to adsorb on the centre (4F) site and interact with the metal orbitals directed at this site in terms of the hybridization model of Altmann, Coulson and Hume-Rothery, and the bonding possibilities for O seem better on the short-bridge sites.

Figure 5.9: Interatomic distances for the specification of hard-sphere radii in the neighbourhood of an oxygen atom in the Fe(100)-(1x1)-O structure. Distances in Angstroms. (After Legg et al.)

The most significant comparison for the new results for S on Rh(110) is with the structure formed by adsorption of the same species on Ni(110). Mitchell has offered a tentative analysis of these structures, and indicated a tendency for S to form a single covalent bond to the metal atom directly below in the second layer and four 3/4-order bonds to the neighbouring metal atoms in the topmost layer. An interesting point is that, while the distances found from LEED for S on Ni(110) are broadly consistent with this, it is physically impossible for the corresponding distances to be simultaneously satisfied for S on Rh(110); this is a direct consequence of the longer Rh-Rh distance compared with the Ni-Ni distance. Mitchell concluded that this results in S being held at that height above the Rh(110) surface where the combined strengths of the five bonds are optimized, and this requires some squeezing of the Rh_II-S distance from the single-bond value (2.29 Å) in order to get reasonable interactions with the four Rh atoms in the first layer. An important aspect of this discussion is that it represents a start on utilizing covalent bonding concepts for chemisorption; most analyses so far have emphasized hard-sphere radii. The effective radius indicated for S on Rh(110) is 0.77 Å; this can be compared with other values reported from LEED crystallography varying from 0.78 Å to 1.04 Å, as noted in section 4.5.
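The two distances just quoted follow from simple geometry once the S height is fixed, as the sketch below verifies for an unrelaxed Rh(110) surface with an assumed bulk lattice constant a = 3.80 Å.

    import numpy as np

    a = 3.80                                   # Å, assumed Rh lattice constant
    h = 0.77                                   # Å, S height above the top layer
    d12 = a / (2 * np.sqrt(2.0))               # Rh(110) interlayer spacing, 1.34 Å

    # In-plane distance from the centre (4F) site to a first-layer Rh atom:
    r_hole = np.hypot(a / 2, a / (2 * np.sqrt(2.0)))

    print(f"Rh_I-S  = {np.hypot(r_hole, h):.2f} Å")   # -> 2.45 Å
    print(f"Rh_II-S = {h + d12:.2f} Å")               # -> 2.11 Å (2.12 Å quoted,
                                                      #    within rounding of a, h)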
CHAPTER 6
Studies with the Quasidynamical Method

6.1 Introduction

Reliable surface structures so far reported by LEED crystallography have come from studies which used the trial-and-error approach, wherein experimental I(E) curves are compared with those calculated for a range of possible surface models, and a selection is made of the geometrical model that gives the best overall correspondence. Generally the calculations have used multiple-scattering methods which are either formally exact (e.g. the T-matrix or Bloch-wave methods) or involve good iterative approximations to the full multiple-scattering methods (e.g. the layer-doubling or RFS methods). This provides the only generally accepted approach to LEED crystallography at the present time. Aside from limitations still present in the experimental measurements, and limitations introduced into the calculations through the model assumed for the potential and lattice vibrations, the accuracy of the present approach to LEED crystallography is limited especially by computation time and core storage. A serious problem for surface structural chemistry concerns the limitations set on this approach by complex surface structures, for which multiple-scattering calculations inevitably become prohibitively expensive. This opens the need to search for new calculation schemes which maintain reliability while reducing the computational burden.

In principle the simplest LEED calculation involves the kinematical method, in which scattering by ion-cores is assumed to be weak so that only single-scattering events are included. The discussion in section 2.1 establishes that this method is inadequate for describing the actual features observed in the scattering of low-energy electrons by a solid surface. Attempts have been made to make the kinematical theory usable for LEED by processing experimental data such that the multiple-scattering contributions are averaged out and the residual intensities can then be analyzed with the kinematic theory. These data-processing procedures include the "constant momentum transfer averaging" method introduced by Lagally et al., the "energy averaging" method introduced by Tucker and Duke, and the "Fourier transform" method. Although attractive in principle, these methods cannot yet be considered well established for determining unknown surface structures involving adsorption.

A new approximate multiple-scattering scheme for calculating LEED intensities is the quasidynamical method. In this method, only single scattering is included within an atomic layer, while the interlayer scattering is calculated properly, for example by the RFS method. The original authors proposed that this approach should be most reliable for surface systems involving light atoms in relatively open structures, where the neglect of intralayer multiple scattering is expected to be less serious.
Initial analyses for the unreconstructed model of GaAs(110) and for reconstructed Si(100) gave promising agreement with full multiple-scattering calculations and with experimental data [46,47] respectively. Such tests indicated that the quasidynamical method can give reasonable accounts of the positions of the main peaks in experimental I(E) curves, as well as much of the secondary structure, although the absolute intensities and the relative intensities of neighbouring peaks are often not predicted reliably.

The purpose of the present study is to investigate the quasidynamical method further by comparing with experimental and calculated I(E) curves already reported in this thesis, especially for the adsorption systems Rh(100)-p(2x2)-S and Rh(110)-c(2x2)-S. A particular objective is to assess whether this method can identify certain surface models as giving sufficiently poor correspondences with the experimental I(E) curves that these models need not be considered in the refinement stages of LEED crystallographic analyses. Analyses for the corresponding clean surfaces of rhodium are made, and they provide convenient reference points for the adsorption systems.

6.2 Calculations

A fundamental part of calculations of LEED intensities involves evaluation of the layer diffraction matrices M (equation 2.30) for each atomic plane; the planes are then stacked in order to determine the scattering from a crystal slab (of either finite or semi-infinite extent). Generally the evaluation of M is the most time-consuming part of this whole process, specifically because it involves calculating (1 - X)^-1, which describes all multiple-scattering events within an atomic layer (equation 2.30). The quasidynamical scheme makes use of a commonly found observation, that interlayer multiple scattering is much stronger than intralayer multiple scattering [46], by equating the planar scattering matrix X to zero. This assumption gives substantial reductions in computation times. The important question now concerns whether the gain in computational convenience is offset, or not, by too great a loss of accuracy in the calculated I(E) curves.

The present tests with the quasidynamical method use the same types of surface models as those considered in the previous studies with the full multiple-scattering calculations [123,124,150]. Thus only the regular face-centred cubic registries were considered here for the clean surfaces, but relaxations of the topmost interlayer spacings were allowed. Different models for the S overlayer are designated as in figures 1.8 and 2.8; all Rh-Rh distances are fixed at the appropriate bulk values.
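The computational saving can be made concrete with a toy timing comparison: the full method requires the inversion (1 - X)^-1 for each layer type and each energy, whereas the quasidynamical method replaces it by the identity. The matrix size and contents below are arbitrary stand-ins, not the dimensions used in this work.

    import numpy as np, time

    n = 512                                    # representative matrix dimension
    X = 0.1 * np.random.rand(n, n)             # stand-in intralayer matrix

    t0 = time.perf_counter()
    full = np.linalg.inv(np.eye(n) - X)        # full intralayer multiple scattering
    t1 = time.perf_counter()
    quasi = np.eye(n)                          # quasidynamical: X set to zero
    t2 = time.perf_counter()
    print(f"inversion: {t1 - t0:.3f} s, quasidynamical: {t2 - t1:.6f} s")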
Unless otherwise indicated here, the same non-structural parameters were used in the quasidynamical calculations as in the corresponding multiple-scattering calculations described previously (chapters 3-5). The only modifications made in this regard were to the constant potentials between the spherically-symmetric atomic potentials. The imaginary part (V_oi) of this potential was fixed at -6.8 eV for the 1F, 2SB and 2LB models of Rh(110)-c(2x2)-S, whereas the energy-dependent form V_oi = -1.76 E^(1/3) eV was used for all other surfaces considered, except clean Rh(110) for which V_oi was represented by -2.05 E^(1/3) eV. The real part of this potential (V_or) was fixed at -12 eV for all calculations, although this value was effectively refined during the comparisons with experimental I(E) curves for each system.

Quasidynamical calculations were made for normal incidence over the energy range 40 to 208 eV for clean Rh(100), and over the range 50 to 178 eV for all other systems considered here. The RFS method was used for stacking atomic planes; these calculations were made with 91 beams, and electrons were allowed to travel through up to 12 layers in the crystal. For the 4F model of Rh(110)-c(2x2)-S it was necessary to combine the sulphur layer and the topmost rhodium layer as a composite layer because of their close spacing.

6.3 Results and Discussion

6.3(a) Rh(110) and Rh(110)-c(2x2)-S

Experimental I(E) curves for normal incidence on the clean Rh(110) surface are compared with those from quasidynamical calculations for Δd% = 0 and -10% in figure 6.1. General correspondences in peak positions are apparent for every pair of curves, although relative intensities are often not satisfactory. Comparisons between quasidynamical (QD) and multiple-scattering (MS) calculated I(E) curves are also shown in the same figure; again the major peak positions match, although the relative intensities have changed in the quasidynamical case. The experimental and calculated I(E) curves were also assessed with the reliability index r_r, and Table 6.1 lists the conditions for best correspondence (i.e. minimum r_r) between experimental and calculated I(E) curves (from both multiple-scattering and quasidynamical calculations) for the various surfaces considered. The previous multiple-scattering calculations on clean Rh(110) indicated that the best correspondence is with the topmost interlayer spacing contracted by 3.3% from the bulk value. The corresponding analysis with the quasidynamical method points to a contraction of 10.8%; however, detailed studies of the individual I(E) curves suggested that the index may be less helpful for this particular purpose. This conclusion depends on r_r being quite sensitive to relative intensities over successive portions of individual I(E) curves, and on the fact (seen in figure 6.1 and noted above for GaAs(110) and Si(100)) that the quasidynamical method is often unreliable for peak magnitudes within each I(E) curve.
Table 6.1: Comparisons of conditions for minimum r_r for various surface structures obtained from evaluating experimental I(E) curves with corresponding curves from multiple-scattering calculations and from quasidynamical calculations.

                              multiple-scattering calculations    quasidynamical calculations
surface structure             Δd% or d_Rh-S    V_or (eV)   r_r    Δd% or d_Rh-S    V_or (eV)   r_r
Rh(110)                       -3.3%            -11.9       0.12   -10.8%           -16.0       0.23
Rh(100)                       -1.0%            -12.8       0.09   -3.2%            -18.0       0.17
Rh(110)-c(2x2)-S (4F model)   0.77 Å           -12.2       0.17   0.83 Å           -24.4       0.23
                                                                  1.02 Å           -18.0       0.26
                                                                  0.72 Å           -16.4       0.30
Rh(100)-p(2x2)-S (4F model)   1.30 Å           -13.6       0.26   1.32 Å           -21.0       0.28

Figure 6.1: Comparison of experimental I(E) curves for normal incidence on Rh(110) with those calculated with the quasidynamical method and the full multiple-scattering method when the topmost interlayer spacing equals the bulk value (0%) and when it is contracted by 10%.

I(E) curves for different models of the Rh(110)-c(2x2)-S surface calculated by the quasidynamical (QD) method were compared, by direct observation, with the experimental I(E) curves and also with the corresponding curves calculated with the multiple-scattering (MS) method for the 4F model with the topmost Rh-S interlayer spacing (d_Rh-S) equal to 0.75 Å. Overall it was difficult to pinpoint the structural model from the quasidynamical calculations which gives the best agreement with the experimental curves; in part this was because of the effects of errors in relative intensities over successive portions of the calculated I(E) curves. There are also systematic shifts in peak positions for the quasidynamically-calculated I(E) curves. However, it did seem possible to conclude, from the visual analysis, that the best match with the I(E) curves from the full multiple-scattering calculations occurred for the 4F model in the quasidynamical calculations with d_Rh-S = 1.15 Å.

Conclusions on the conditions for correspondence between quasidynamically-calculated and experimental I(E) curves were aided by the reliability index of Zanazzi and Jona. Two-dimensional contour plots of r_r versus d_Rh-S and V_or, for each structural model, are shown in figure 6.2. Comparisons of the contour plots suggest that the 4F model gives the lowest r_r value (0.23). No local minima are found for the 2SB model whereas, for the 2LB and 1F models, minima in r_r occur with rather high values (>0.42), which suggests that these models are less probable. The contour plots of r_r for the 4F and 2LB models show the common feature of exhibiting more than one local minimum (figure 6.2). For the 4F model, the first minimum (with r_r = 0.23) occurs for d_Rh-S = 0.83 Å and V_or = -24.4 eV; the second (with a slightly higher value of r_r, viz. 0.28)
occurs with d_Rh-S = 1.02 Å and V_or = -18.0 eV, and the third (r_r = 0.30) occurs at d_Rh-S = 0.72 Å and V_or = -16.4 eV (Table 6.1). This situation is to be compared with the single minimum in the corresponding contour plot of r_r for the same system when the calculations utilize the full multiple-scattering procedures (figure 5.7); in that case d_Rh-S = 0.77 Å, V_or = -12.2 eV and r_r = 0.17 (Table 6.1).

Figure 6.2: Contour plots of r_r for Rh(110)-c(2x2)-S versus V_or and the Rh-S interlayer spacing for the four different structural models calculated with the quasidynamical method.

In principle the existence of more than one local minimum could relate to multiple coincidences in adsorbate-substrate spacings as discussed by Andersson and Pendry [156]. However, against this possibility are the following observations:
i) no such effect was detected in the previous analysis with the multiple-scattering calculations (figure 5.7), and
ii) visual analyses of the individual I(E) curves calculated with the quasidynamical method for the spacings 0.75, 0.85 and 1.05 Å are on balance less satisfactory than those calculated for 1.15 Å.
Two effects seem to be involved here. The first concerns the incomplete nature of the quasidynamical method, and the second appears to be associated with the reliability-index analysis being less reliable for assessing interlayer spacings when the relative intensities over successive portions of individual I(E) curves are not calculated correctly, even though a reasonable match in the positions of structure may still be recognized between the experimental and calculated I(E) curves.

For the quasidynamical calculations on the 4F model, minima in r_r are associated with values of V_or in the range -16.4 to -24.4 eV. These values are substantially changed from the value of -12.2 eV reported from the multiple-scattering calculations. This shows up in the visual analysis of the individual I(E) curves: features from the quasidynamical calculations occur on average at about 6 eV lower in energy than do the corresponding features from the multiple-scattering calculations. This need for a systematic shift in the I(E) curves must be associated with the neglect of intralayer multiple scattering in the quasidynamical calculations. Similar changes in V_or have also been observed for the quasidynamical calculations on Rh(110) (Table 6.1) and on Si(100).

Figure 6.3 compares experimental I(E) curves for the (01) and (3/2 3/2) diffracted beams with the corresponding quasidynamically-calculated I(E) curves for particular geometries of the four different structural models. Also shown are the corresponding I(E) curves calculated by the multiple-scattering method for d_Rh-S = 0.75 Å. For these two representative beams, the quasidynamical calculations for the 2SB and 2LB models do not show any agreement with the experimental I(E) curves.
Significant levels of agreement are apparent for both beams for the 4F model, whereas for the 1F model the quasidynamical calculation gives some reasonable agreement for the (3/2 3/2) beam but little agreement for the (01) beam. These comparisons emphasize the matching of peak positions; when all available data from the quasidynamical calculations are considered, the 4F model appears to give the best correspondence with I(E) curves from both experiment and the reference multiple-scattering calculations.

Figure 6.3: Comparison of I(E) curves measured for the (01) and (3/2 3/2) diffracted beams for normal incidence on Rh(110)-c(2x2)-S with those calculated by the quasidynamical method and by the full multiple-scattering method for the four structural models described in the text.

Table 6.2: A demonstration of the correspondence between peak positions in I(E) curves calculated with the quasidynamical method for the four models of Rh(110)-c(2x2)-S at the specified S-Rh interlayer spacing and those given by experiment and by the corresponding full multiple-scattering calculations. In the entries for each beam, the denominator specifies the number of significant peaks in the relevant I(E) curve from experiment or from the full multiple-scattering calculations, and the numerator gives the number of those peaks that are matched to within 7 eV by the quasidynamical calculations.

            4F (S-Rh=1.15 Å)   1F (S-Rh=2.2 Å)    2SB (S-Rh=1.6 Å)   2LB (S-Rh=1.5 Å)
Beam        Expt    Full MS    Expt    Full MS    Expt    Full MS    Expt    Full MS
(01)        2/4     3/5        2/4     3/5        2/4     1/5        1/4     1/5
(02)        2/3     3/4        2/3     3/4        1/3     3/4        1/3     2/4
(03)        0/1     1/3        0/1     1/3        0/1     1/3        1/1     1/3
(10)        3/5     4/5        3/5     5/5        1/5     2/5        3/5     4/5
(11)        3/4     2/2        2/4     2/2        2/4     2/2        2/4     1/2
(12)        3/4     4/4        1/4     2/4        2/4     2/4        1/4     3/4
(13)        0/2     0/1        0/2     1/1        0/2     0/1        0/2     0/1
(20)        1/2     2/4        0/2     2/4        1/2     3/4        0/2     1/4
(21)        1/1     2/2        0/1     1/2        0/1     1/2        1/1     1/2
(1/2 1/2)   2/2     2/2        1/2     2/2        1/2     1/2        2/2     2/2
(1/2 3/2)   2/4     1/4        3/4     2/4        3/4     2/4        1/4     1/4
(1/2 5/2)   2/2     4/4        0/2     2/4        2/2     3/4        2/2     3/4
(3/2 1/2)   2/2     2/3        1/2     1/3        2/2     1/3        1/2     1/3
(3/2 3/2)   1/2     4/4        0/2     1/4        1/2     2/4        1/2     2/4
Total       24/38   34/47      15/38   28/47      18/38   24/47      17/38   23/47

Figure 6.4: Comparisons of some experimental I(E) curves for fractional-order beams for normal incidence on Rh(110)-c(2x2)-S and Rh(100)-p(2x2)-S with those calculated for the centre adsorption sites with the quasidynamical method and with the full multiple-scattering method. The topmost Rh-S interlayer spacings in the quasidynamical calculations are 1.15 Å and 1.3 Å for Rh(110)-c(2x2)-S and Rh(100)-p(2x2)-S respectively; the corresponding values for the multiple-scattering calculations are 0.75 Å and 1.3 Å.
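The numerator/denominator entries in Table 6.2 (and in Table 6.3 below) can be generated with the simple matching rule described in the table caption; the peak lists in the usage example here are hypothetical.

    import numpy as np

    def match_fraction(ref_peaks_eV, qd_peaks_eV, tol=7.0):
        # Count reference peaks (from experiment or from full multiple-
        # scattering curves) matched by a quasidynamical peak within tol eV.
        ref = np.asarray(ref_peaks_eV, dtype=float)
        qd = np.asarray(qd_peaks_eV, dtype=float)
        matched = sum(np.any(np.abs(qd - p) <= tol) for p in ref)
        return matched, len(ref)

    m, n = match_fraction([62, 95, 128, 170], [60, 104, 131, 168])
    print(f"{m}/{n}")   # -> 3/4 in the notation of Tables 6.2 and 6.3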
Evidence is provided in Table 6.2, where the details of the matching of peak positions for each beam and for each model are summarized. A spread in peak positions of up to 7 eV was allowed in this matching in order to accommodate variations of V_or for the different surface models. From the comparisons indicated in Table 6.2, it appears for the 4F model that the level of agreement, between the quasidynamical calculations and either experiment or the full multiple-scattering calculations, is better for the fractional-order beams than for the integral-order beams. A similar observation was also reported by Tong and Maldonado for the Si(100) surface [47]. Figure 6.4(a) details some specific I(E) curves for the fractional-order beams calculated with the quasidynamical method for Rh(110)-c(2x2)-S, and compares them with those from experiment and from multiple-scattering calculations.

6.3(b) Rh(100) and Rh(100)-p(2x2)-S

The previous analysis of LEED intensities from Rh(100), based on multiple-scattering calculations and the use of the reliability index r_r, indicated that the topmost interlayer spacing is very close to the bulk value, there being a surface-layer contraction of about 1% [43,150]. A similar analysis made here with beam intensities calculated with the quasidynamical method also suggests a small contraction, this time by 3% (Table 6.1). Figure 6.5 indicates for clean unreconstructed Rh(100) appreciable correspondence between peaks in I(E) curves calculated with the quasidynamical method and those from either experiment or multiple-scattering calculations. In matching with the experimental I(E) curves, the quasidynamically-calculated I(E) curves needed shifting to lower energy by approximately 6 eV; this is consistent with r_r being minimized at V_or = -18.0 eV.

Figure 6.5: Comparisons of some experimental I(E) curves for normal incidence on Rh(100) with those calculated with the quasidynamical method and with the full multiple-scattering method.

For Rh(100)-p(2x2)-S, the previous analysis with the multiple-scattering calculations (section 4.4) pointed to the conclusion that the 4F model with d_Rh-S = 1.3 Å gives the best correspondence with the experimental I(E) curves (Table 6.1). In that earlier analysis we noted that the 2F model also produced a minimum in r_r which is comparable with that from the 4F model. Similar analyses here with the quasidynamical calculations highlight corresponding features: both the 4F and 2F models give local minima with comparable r_r values (figure 6.6(a)), although no minimum is found for the 1F model. With the quasidynamical calculations, r_r is minimized at d_Rh-S = 1.32 Å and V_or = -21.0 eV for the 4F model, whereas for the 2F model the corresponding values are 1.70 Å and -12.2 eV respectively.
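Refining V_or in these comparisons is equivalent to shifting the calculated curve rigidly along the energy axis before matching, which is how the roughly 6 eV displacement noted above for Rh(100) is absorbed. A minimal sketch follows, with the sign convention assumed so that a deeper (more negative) V_or moves calculated features to lower energy.

    import numpy as np

    def shift_curve(E, I_calc, dVor):
        # Re-interpolate a calculated I(E) curve after a rigid shift of dVor
        # (eV); points shifted outside the measured range become NaN.
        return np.interp(E, E + dVor, I_calc, left=np.nan, right=np.nan)

    # e.g. shift_curve(E, I_calc, -6.0) for the ~6 eV displacement noted above.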
To assess this further we made a visual evaluation of the individual I(E) curves and evaluated r_r just for the fractional-order beams. The latter beams are expected to be especially associated with the adsorbate layer, and Table 6.2 notes for Rh(110)-c(2x2)-S that the quasidynamical method appears to work better for the fractional-order beams than for the integral-order beams. Figure 6.6(b) shows contour plots of r_r for Rh(100)-p(2x2)-S, from quasidynamical calculations, where only the fractional-order beams are included in the comparison with experiment. Both the 4F and 2F models give definite minima in the contour plots, although the minimum value of r_r for the 4F model (with d_Rh-S = 1.34 Å, V_or = -21.0 eV) is now clearly better than that from the 2F model (with d_Rh-S = 1.91 Å and V_or = -27.2 eV). Support for the 4F model from the quasidynamical calculations is provided by the observation that the values of d_Rh-S and V_or which give minimum r_r from the fractional-order beams alone are very similar to those from the combination of fractional-order and integral-order beams.

Figure 6.6: Contour plots of r_r for Rh(100)-p(2x2)-S versus V_or and the Rh-S interlayer spacing for the 4F and 2F structural models calculated by the quasidynamical method: (a) comparisons with all integral- and fractional-order beams; (b) comparisons with fractional-order beams only.

Figure 6.7: Comparisons of I(E) curves measured for the (01) and (1/2 3/2) diffracted beams for normal incidence on Rh(100)-p(2x2)-S with those calculated by the quasidynamical method and by the full multiple-scattering method for three possible structural models.

Table 6.3: A demonstration of the correspondence between peak positions in I(E) curves calculated with the quasidynamical method for the three models of Rh(100)-p(2x2)-S at the specified S-Rh interlayer spacing and those given by experiment and by the corresponding full multiple-scattering calculations. In the entries for each beam, the denominator specifies the number of significant peaks in the relevant I(E) curve from experiment or from the full multiple-scattering calculations, and the numerator gives the number of those peaks that are matched to within 7 eV by the quasidynamical calculations.

            4F (S-Rh=1.3 Å)    1F (S-Rh=2.2 Å)    2F (S-Rh=1.8 Å)
Beam        Expt    Full MS    Expt    Full MS    Expt    Full MS
(01)        2/2     1/2        2/2     1/2        1/2     2/2
(11)        2/2     1/3        2/2     2/3        2/2     2/3
(02)        1/1     1/1        0/1     0/1        1/1     1/1
(12)        1/1     1/1        1/1     1/1        1/1     1/1
(1/2 1/2)   2/3     2/4        1/3     1/4        1/3     1/4
(1/2 3/2)   2/3     3/4        2/3     3/4        1/3     1/4
(0 1/2)     4/5     4/5        2/5     1/5        2/5     1/5
(1 1/2)     2/4     2/4        2/4     1/4        3/4     2/4
(0 3/2)     3/3     4/5        2/3     0/5        2/3     3/5
(3/2 3/2)   -       2/2        -       1/2        -       0/2
Total       19/24   21/31      14/24   14/31      14/24   14/31
By contrast, the conditions for minimum r_r from the 2F model are very different in these two situations. Overall, then, we believe that the quasidynamical calculations indicate that the 4F model gives the best correspondence with the experimental I(E) curves for Rh(100)-p(2x2)-S, with d_Rh-S = 1.32 Å and V_or = -21.0 eV.

Figure 6.7 compares quasidynamically-calculated I(E) curves for the (01) and (1/2 3/2) beams of Rh(100)-p(2x2)-S with those from experiment and from multiple-scattering calculations. Correspondences in peak positions are apparent for all models with the (01) beam, but the 4F model shows the best match for the (1/2 3/2) beam. Details of the comparisons of individual I(E) curves are summarized in Table 6.3; again this table shows that the best matching for the fractional-order beams is provided by the 4F model. (Some actual I(E) curves are illustrated in figure 6.4(b).)

6.4 Concluding Remarks

The results presented in Tables 6.2 and 6.3 for the quasidynamical method indicate that adsorption occurs on the 4F sites for both Rh(110)-c(2x2)-S and Rh(100)-p(2x2)-S; comfortingly, these are just the adsorption sites indicated by the full multiple-scattering calculations. For Rh(100)-p(2x2)-S the quasidynamical calculation, in conjunction with the Zanazzi-Jona reliability index r_r, indicates a topmost interlayer spacing of 1.32 Å, in very close agreement with that (1.30 Å) from the multiple-scattering calculation (Table 6.1); however, the significance of this close correspondence must be tempered by the appreciable discrepancies found for both clean Rh(110) and Rh(110)-c(2x2)-S. In general, the index r_r seems less reliable for assessing interlayer spacings and V_or from the quasidynamical calculations, especially since this method can be erroneous for calculating relative peak intensities over successive portions of I(E) curves. Comparisons in figure 6.4 show that some peaks in the experimental I(E) curves are either absent in the quasidynamically-calculated curves or are represented only by shoulders. In part the latter may be a consequence of the relatively large values of V_oi that are needed in our quasidynamical calculations to avoid occasional difficulties in convergence.

Although the quasidynamical method clearly is not exact, we are nevertheless encouraged by our observations for Rh(110)-c(2x2)-S and Rh(100)-p(2x2)-S that it is able to select the correct adsorption sites as providing the most likely models for these surfaces. Moreover, the calculations here were made for a metal which is a relatively strong scatterer and therefore does not correspond to the situations for which the quasidynamical method was initially judged to be most helpful.
These observations support the possibility of using the quasidynamical method for making preliminary assessments of those trial models that need more detailed analyses with full multiple-scattering methods, although further tests are needed to delineate the ranges of scattering strengths and geometrical types for which this conclusion may be applicable. If such ranges can be obtained, then this would clearly provide a most significant role for the quasidynamical method in LEED crystallography. In any event this method should have value in making preliminary assessments of adsorption systems which involve weakly-scattering adsorbates at low coverage, particularly where the number of fractional-order beams is large and the conventional multiple-scattering procedures rapidly become intractable.

REFERENCES

1. G.A. Somorjai, "Principles of Surface Chemistry", Prentice-Hall, Englewood Cliffs, New Jersey (1972).
2. Abdus Salam, ed., "Surface Science", Lectures Presented at an International Course at Trieste organized by the International Centre for Theoretical Physics, Trieste, International Atomic Energy Agency, Vienna (1975).
3. J.M. Blakely, "Introduction to the Properties of Crystal Surfaces", Pergamon, New York (1973).
4. S. Andersson, Surface Sci. 18, 325 (1969).
5. R. Vanselow and S.Y. Tong, "Chemistry and Physics of Solid Surfaces", CRC Press, Inc., Cleveland, Ohio (1977).
6. E.W. Plummer and T. Gustafsson, Science 198, 165 (1977); J.R. Schrieffer and P. Soven, Physics Today 28(4), 24 (1975).
7. S. Ino, Japanese J. Appl. Phys. 16, 891 (1977).
8. H.H. Brongersma and J.B. Theeten, Surface Sci. 54, 519 (1976); J.F. Van der Veen, R.G. Smeenk, R.M. Tromp and F. Saris, Surface Sci. 79, 219 (1979).
9. M.J. Cardillo and G.E. Becker, Phys. Rev. Lett. 42, 508 (1979).
10. H.P. Bonzel, Surface Sci. 68, 236 (1977).
11. K. Baron, D.W. Blakely and G.A. Somorjai, Surface Sci. 41, 45 (1974).
12. C.J. Davisson and L.H. Germer, Phys. Rev. 30, 705 (1927).
13. P. Auger, J. Phys. Radium 6, 205 (1925).
14. J.J. Lander, Phys. Rev. 91, 1382 (1953).
15. L.N. Tharp and E.J. Scheibner, J. Appl. Phys. 38, 3320 (1967).
16. R.E. Weber and W.T. Peria, J. Appl. Phys. 38, 4355 (1967).
17. P.W. Palmberg and T.N. Rhodin, J. Appl. Phys. 39, 2425 (1968).
18. C.R. Brundle, J. Vac. Sci. Technol. 11, 212 (1974).
19. H. Ibach, in "Electron Spectroscopy for Surface Analysis", Topics in Current Physics Vol. 4, ed. H. Ibach, Springer-Verlag (1977).
Appendices

The following appendices contain all the experimental data from rhodium surfaces collected during this work.
In all cases, the data is as collected and has not been smoothed.

Appendix  Surface            Angle               Notes
A1        Rh(100)-(3×1)-O    θ = 0°, φ = 0°      two equal domains
A2        Rh(100)-(3×1)-O    θ = 0°, φ = 0°      single domain
A3        Rh(100)-p(2×2)-S   θ = 0°, φ = 0°      Expt. 1
A4        Rh(100)-p(2×2)-S   θ = 0°, φ = 0°      Expt. 2
A5        Rh(110)-c(2×2)-S   θ = 0°, φ = 0°      Expt. 1
A6        Rh(110)-c(2×2)-S   θ = 10°, φ = 135°   Expt. 2

[Appendix plots: diffracted beam intensity vs. energy (eV)]
[ null, "https://open.library.ubc.ca/img/featured/icon-ubctheses.svg", null, "https://open.library.ubc.ca/img/iiif-logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7782852,"math_prob":0.99441284,"size":238996,"snap":"2020-45-2020-50","text_gpt3_token_len":80363,"char_repetition_ratio":0.21831274,"word_repetition_ratio":0.35899806,"special_character_ratio":0.35193476,"punctuation_ratio":0.07574545,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.99620533,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T03:29:26Z\",\"WARC-Record-ID\":\"<urn:uuid:c2175173-2a0d-4de1-98e1-06966e71b436>\",\"Content-Length\":\"581906\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fce931d2-5fcc-4267-aaa6-8e1df73b24f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:74655152-d1ed-48f8-a839-0b87a412cbcb>\",\"WARC-IP-Address\":\"142.103.96.89\",\"WARC-Target-URI\":\"https://open.library.ubc.ca/cIRcle/collections/ubctheses/831/items/1.0060930\",\"WARC-Payload-Digest\":\"sha1:RW2JOETR4SJMAFSTIIYFVFIVMDLQ5ZP7\",\"WARC-Block-Digest\":\"sha1:XXWDERFX7KJC7IMPRUJBIKAXOYNPNFVH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141542358.71_warc_CC-MAIN-20201201013119-20201201043119-00039.warc.gz\"}"}
https://dev.to/marrie/data-structures-and-algorithim-in-javascript-4p7n
[ "## DEV Community", null, "# Data Structures and algorithims in Javascript\n\nData Structures basically describes how we can ogarnise and stored in memory when a program processes it.For example, we can store a list of items having the same data-type using the array data structure.", null, "An Algorithm on the other hand, is step by step set of instruction to process the data for a specific purpose. So, an algorithm uterlises various data structures in a logical way to solve a specific computing problem.This is a very important concept in computer science .It gives us more insight on the data we are working with.It enables data scientists to make better machine learning predictions.This is however a challenging topic to most people in the tech industry , according to most people.We are going to look at various python data structures in python and their code examples.\n\n#### Lists\n\nThe list is a most versatile datatype available in Javascript which can be written as a list of comma-separated values (items) between square brackets.Lists are mutable and ordered. It can contain a mix of different data types.\n\n``````list1 = ['chicken', 'pizza', 2022, 2000]\nlist2 = [1, 2, 3, 4, 5 ]\nlist3 = [\"a\", \"b\", \"c\", \"d\"]\n``````\n\nWe can access values in a list using their index.\nNOTE: we start counting from 0\n\n``````console.log (list1) //prints the element in the 0 index\n``````\n\nWe also use the .push() method to add new items into the list eg\n\n``````list2.push(6) //add 6 to the existing list2\n``````\n\nIncase you want to add to a specific place in the list ,we do it as follows\n\n``````list3 = \"e\" // returns [\"a\", \"b\", \"e\", \"d\"]\n``````\n\n#### Dictionaries\n\nDictionary is a mutable and unordered data structure. It permits storing a pair of items (i.e. keys and values).Each key is separated from its value by a colon (:), the items are separated by commas, and the whole thing is enclosed in curly braces. An empty dictionary without any items is written with just two curly braces, like this − {}.\nKeys are unique within a dictionary while values may not be. The values of a dictionary can be of any type, but the keys must be of an immutable data type such as strings, numbers, or tuples.\n\nAccessing Values in Dictionary\nTo access dictionary elements, you can use the familiar square brackets along with the key to obtain its value.\nExample:\n\n``````dict = {'Name': 'Marrie', 'Age': 27, 'Language': 'Javascript'}\nconsole.log( \"dict['Name']: \", dict['Name'])\nconsole.log( \"dict['Age']: \", dict['Age'])\n``````\n\nWhen the above code is executed, it produces the following result :\n\n``````dict['Name']: Marrie\ndict['Age']: 27\n``````\n\nUpdating Dictionary\nYou can update a dictionary by adding a new entry or a key-value pair, modifying an existing entry, or deleting an existing entry as shown below in the simple example:\n\n``````dict = {'Name': 'Marrie', 'Age': 27, 'Language': 'Python'}\ndict['Age'] = 28; // update existing entry\n\nconsole.log (\"dict['Age']: \", dict['Age'])\nconsole.log (\"dict['School']: \", dict['School'])\n``````\n\nWhen the above code is executed, it produces the following result :\n\n``````dict['Age']: 28\n``````\n\nDelete Dictionary Elements\nYou can either remove individual dictionary elements or clear the entire contents of a dictionary. 
Delete Dictionary Elements
You can either remove individual entries or clear the entire contents of an object, and you can also discard the whole dictionary in a single operation. Individual entries are removed with the `delete` operator; plain objects have no clear() method, so the usual way to empty one is to reassign it:

``````
let dict = {'Name': 'Marrie', 'Age': 27, 'Language': 'JavaScript'};
delete dict['Name'];  // remove the entry with key 'Name'
dict = {};            // "clear": reassign to an empty object
dict = null;          // drop the object entirely

console.log("dict['Age']: ", dict['Age']);
console.log("dict['School']: ", dict['School']);
``````

Note − an exception (a TypeError) is raised, because after the last assignment dict no longer refers to an object.

Properties of Dictionary Keys
Values have no restrictions. They can be any arbitrary JavaScript value, either standard or user-defined. However, the same is not true for the keys.
There are two important points to remember about keys:
*More than one entry per key is not allowed, which means no duplicate key is allowed. When duplicate keys are encountered in a literal, the last assignment wins.

``````
const dict = {'Name': 'Marrie', 'Age': 27, 'Name': 'JavaScript'};
console.log("dict['Name']: ", dict['Name']);
``````

When the above code is executed, it produces the following result:

``````
dict['Name']:  JavaScript
``````

*Plain-object keys are coerced to strings (or symbols). You can write a number as a key, but it is stored as a string; for arbitrary key types use the Map shown earlier.

#### Tuples

JavaScript has no built-in tuple type — writing Python-style ('a', 'b') merely applies the comma operator. The closest stand-in for an immutable, ordered sequence of elements is a frozen array: Object.freeze() prevents adding, removing, or replacing elements.
For example:

``````
const tupleOne = Object.freeze(['javascript', 'java', 'c++', 2000]);
const tupleTwo = Object.freeze([1, 2, 3, 4, 5]);
const tupleThree = Object.freeze(["a", "b", "c", "d"]);
``````

An empty tuple is just a frozen empty array:

``````
const languages = Object.freeze([]);
``````

Like string indices, tuple indices start at 0, and frozen arrays can still be read, sliced, and concatenated.

Accessing Values in Tuples
Use square brackets for single elements and .slice() for ranges:

``````
const tupleOne = Object.freeze(['python', 'javascript', 'c++', 2000]);
const tupleTwo = Object.freeze([1, 2, 3, 4, 5]);
console.log("tupleOne[0]: ", tupleOne[0]);
console.log("tupleTwo.slice(1, 5): ", tupleTwo.slice(1, 5));
``````

When the above code is executed, it produces the following result:

``````
tupleOne[0]:  python
tupleTwo.slice(1, 5):  [ 2, 3, 4, 5 ]
``````

Updating Tuples
Frozen arrays are immutable, which means you cannot update or change the values of tuple elements (in strict mode the write throws a TypeError; otherwise it is silently ignored). You are able to take portions of existing tuples to create new tuples, as the following example demonstrates:

``````
const tup1 = Object.freeze([12, 34.56]);
const tup2 = Object.freeze(['abc', 'xyz']);

// The following action is not valid for tuples:
// tup1[0] = 100;

// So let's create a new tuple as follows:
const tup3 = Object.freeze([...tup1, ...tup2]);
console.log(tup3);
``````

When the above code is executed, it produces the following result:

``````
[ 12, 34.56, 'abc', 'xyz' ]
``````
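One caveat worth knowing: Object.freeze is shallow. A nested object inside a frozen tuple stays mutable unless you freeze it too. A small illustration with arbitrary values:

``````
// Object.freeze only freezes the top level.
const pair = Object.freeze([1, { label: 'mutable' }]);

// pair[0] = 99;            // blocked: the top-level slot is frozen
pair[1].label = 'changed';  // allowed: the nested object is not frozen

console.log(pair[1].label); // 'changed'

// To lock the nested object as well, freeze it explicitly:
const deepPair = Object.freeze([1, Object.freeze({ label: 'fixed' })]);
``````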
Delete Tuple Elements
Removing individual elements from a frozen array is not possible. There is, of course, nothing wrong with putting together another tuple with the undesired elements discarded.
To discard an entire tuple, drop the reference to it:

``````
let tupleOne = Object.freeze(['python', 'javascript', 'c++', 2000]);
console.log(tupleOne);
tupleOne = null;
console.log("After deleting tup: ", tupleOne);
``````

This produces the following result (unlike Python's del, nulling the binding does not raise an exception — it simply leaves null behind):

``````
[ 'python', 'javascript', 'c++', 2000 ]
After deleting tup:  null
``````

#### Sets

A Set is a mutable collection of unique elements. It permits us to remove duplicates from a list quickly, and it supports the usual mathematical operations: union, intersection, difference and so on. A JavaScript Set matches the mathematical definition with these additional properties:
*Elements cannot be duplicated (uniqueness is checked with the SameValueZero comparison).
*There is no index attached to any element, so Sets support no indexing or slicing operation — you iterate instead.
*Unlike Python's set, a JavaScript Set is ordered: iteration yields the elements in insertion order.

Creating a set
A Set is created with the new Set() constructor, usually from an array:

``````
const days = new Set(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]);
const months = new Set(["Jan", "Feb", "Mar"]);
const dates = new Set([21, 22, 17]);
console.log(days);
console.log(months);
console.log(dates);
``````

Note how the order of the elements is preserved in the result:

``````
Set(7) { 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun' }
Set(3) { 'Jan', 'Feb', 'Mar' }
Set(3) { 21, 22, 17 }
``````

Accessing Values in a Set
We cannot access individual values by index. We can only access all the elements together, or get the individual elements by looping through the set.

``````
const days = new Set(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]);

for (const d of days) {
  console.log(d);
}
``````

When the above code is executed, it produces the following:

``````
Mon
Tue
Wed
Thu
Fri
Sat
Sun
``````

Adding Items to a Set
We can add elements to a set with the add() method. Remember, there is no index attached to the newly added element — it simply joins the end of the iteration order.

``````
const days = new Set(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]);
days.add("Sun");
console.log(days);
``````

results

``````
Set(7) { 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun' }
``````

Removing an Item from a Set
We can remove elements from a set with the delete() method, which returns true when the value was present.
Example

``````
days.delete("Sun");
console.log(days);
``````

Output

``````
Set(6) { 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat' }
``````

Union of Sets
The union operation on two sets produces a new set containing all the distinct elements from both sets. There is no built-in operator for this in older engines, so the idiomatic way is spread syntax. In the example below the element "Wed" is present in both sets but appears only once in the result:

``````
const daysA = new Set(["Mon", "Tue", "Wed"]);
const daysB = new Set(["Wed", "Thu", "Fri", "Sat", "Sun"]);
const allDays = new Set([...daysA, ...daysB]);
console.log(allDays);
``````

Output:

``````
Set(7) { 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun' }
``````
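The quick duplicate-removal use mentioned at the start of this section really is a one-liner; the values here are arbitrary:

``````
// Spread a Set back into an array to drop duplicates.
const unique = [...new Set([1, 2, 2, 3, 1])];
console.log(unique); // [ 1, 2, 3 ]
``````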
Intersection of Sets
The intersection operation on two sets produces a new set containing only the elements common to both. In the example below the only shared element is "Wed":

``````
const daysA = new Set(["Mon", "Tue", "Wed"]);
const daysB = new Set(["Wed", "Thu", "Fri", "Sat", "Sun"]);
const common = new Set([...daysA].filter(d => daysB.has(d)));
console.log(common);
``````

Output

``````
Set(1) { 'Wed' }
``````

Difference of Sets
The difference operation on two sets produces a new set containing only the elements from the first set and none from the second set. In the below example the element "Wed" is present in both sets, so it will not be found in the result:

``````
const daysA = new Set(["Mon", "Tue", "Wed"]);
const daysB = new Set(["Wed", "Thu", "Fri", "Sat", "Sun"]);
const diff = new Set([...daysA].filter(d => !daysB.has(d)));
console.log(diff);
``````

Output

``````
Set(2) { 'Mon', 'Tue' }
``````

Compare Sets
We can check if a given set is a subset or superset of another set. The result is true or false depending on the elements present in the sets. There is no built-in operator, so we write small predicates:

``````
const isSubset = (a, b) => [...a].every(x => b.has(x));
const isSuperset = (a, b) => isSubset(b, a);

const daysA = new Set(["Mon", "Tue", "Wed"]);
const daysB = new Set(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]);
console.log(isSubset(daysA, daysB));
console.log(isSuperset(daysB, daysA));
``````

Output

``````
true
true
``````

#### Queue

The queue is a linear data structure where elements are kept in a sequential manner. It follows the F.I.F.O. mechanism: first in, first out.
Below are the aspects that characterize a queue.
Two ends:
*front → points to the starting element
*rear → points to the last element
There are two operations:
*enqueue → inserting an element into the queue; it is done at the rear
*dequeue → deleting an element from the queue; it is done at the front
There are two conditions:
*overflow → insertion into a queue that is full
*underflow → deletion from an empty queue
Let's see a code example of this:

``````
// program to implement the queue data structure
class Queue {
  constructor() {
    this.items = [];
  }

  // add an element at the rear of the queue
  enqueue(element) {
    return this.items.push(element);
  }

  // remove an element from the front of the queue
  dequeue() {
    if (this.items.length > 0) {
      return this.items.shift();
    }
  }

  // view the element at the rear (the most recently added)
  peek() {
    return this.items[this.items.length - 1];
  }

  // check if the queue is empty
  isEmpty() {
    return this.items.length === 0;
  }

  // the size of the queue
  size() {
    return this.items.length;
  }

  // empty the queue
  clear() {
    this.items = [];
  }
}

const queue = new Queue();
queue.enqueue(1);
queue.enqueue(2);
queue.enqueue(4);
queue.enqueue(8);
console.log(queue.items);

queue.dequeue();
console.log(queue.items);

console.log(queue.peek());

console.log(queue.isEmpty());

console.log(queue.size());

queue.clear();
console.log(queue.items);
``````

This will produce the following results:

``````
[1, 2, 4, 8]
[2, 4, 8]
8
false
3
[]
``````
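One performance note on the implementation above: Array.prototype.shift() re-indexes every remaining element, so dequeue costs O(n) on large queues. A common refinement — sketched here as one possible approach, not the only one — tracks a moving head index and compacts the backing array occasionally:

``````
// Queue with O(1) amortized dequeue: advance a head pointer
// instead of shifting the whole array.
class FastQueue {
  constructor() {
    this.items = [];
    this.head = 0; // index of the current front element
  }

  enqueue(element) {
    this.items.push(element);
  }

  dequeue() {
    if (this.head >= this.items.length) return undefined; // underflow
    const value = this.items[this.head++];
    // Compact once the dead prefix dominates, keeping memory bounded.
    if (this.head * 2 > this.items.length) {
      this.items = this.items.slice(this.head);
      this.head = 0;
    }
    return value;
  }

  size() {
    return this.items.length - this.head;
  }
}

const q = new FastQueue();
q.enqueue(1); q.enqueue(2); q.enqueue(4);
console.log(q.dequeue()); // 1
console.log(q.size());    // 2
``````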
#### Stack

In the English dictionary the word stack means arranging objects one over another. A stack is a linear data structure in which operations are performed in a particular order: LIFO (last in, first out), or equivalently FILO (first in, last out).
In JavaScript an array already behaves as a stack: push() adds an element to the top and pop() removes the element from the top.

Pushing onto a Stack
Example

``````
const city = ["New York", "Madrid", "Kathmandu"];

// push "London" onto the top of the stack
city.push("London");

console.log(city);

// Output: [ 'New York', 'Madrid', 'Kathmandu', 'London' ]
``````

Popping from a Stack
As we know, we can remove only the topmost data element from the stack. The built-in pop() method removes and returns it:

``````
const cities = ["Madrid", "New York", "Kathmandu", "Paris"];

// remove the top element
const removedCity = cities.pop();

console.log(cities);      // ["Madrid", "New York", "Kathmandu"]
console.log(removedCity); // Paris
``````
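For symmetry with the Queue class earlier, here is a minimal Stack class — a small sketch of my own, with method names chosen to mirror that example:

``````
// A thin stack wrapper over an array: top of the stack = end of the array.
class Stack {
  constructor() {
    this.items = [];
  }

  // add an element on top
  push(element) {
    return this.items.push(element);
  }

  // remove and return the top element (undefined if empty)
  pop() {
    return this.items.pop();
  }

  // look at the top element without removing it
  peek() {
    return this.items[this.items.length - 1];
  }

  isEmpty() {
    return this.items.length === 0;
  }

  size() {
    return this.items.length;
  }
}

const stack = new Stack();
stack.push(1);
stack.push(2);
console.log(stack.peek()); // 2
console.log(stack.pop());  // 2
console.log(stack.items);  // [ 1 ]
``````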
#### Linked List

A linked list is a linear data structure in which the elements are not stored at contiguous memory locations. The elements in a linked list are linked using pointers, as shown in the below image:", null, "Each node is a plain object of the form { data, next }. Singly linked lists can be traversed only in the forward direction, starting from the first data element. We simply print the value of each node and then follow its next pointer (head is assumed to point at the first node):

``````
let temp = head;
let out = "";
while (temp !== null) {
  out += temp.data + " ---> ";
  temp = temp.next;
}
console.log("List elements are -");
console.log(out);
``````

Output

``````
List elements are -
1 ---> 2 ---> 3 --->
``````

Inserting an element into the linked list involves reassigning the pointers from the existing nodes to the newly inserted node. Depending on whether the new data element is inserted at the beginning, in the middle, or at the end of the linked list, we have the scenarios below.

Inserting at the Beginning
This involves pointing the next pointer of the new data node to the current head of the linked list. The current head becomes the second data element and the new node becomes the head:

``````
const newNode = { data: 4, next: head };
head = newNode;
``````

Inserting at the End
This involves pointing the next pointer of the current last node of the linked list to the new data node. The current last node becomes the second-to-last node and the new node becomes the last node:

``````
const newNode = { data: 4, next: null };

let temp = head;
while (temp.next !== null) {
  temp = temp.next;
}
temp.next = newNode;
``````

Inserting in between two Data Nodes
This involves changing the pointer of a specific node to point to the new node: walk to the node after which the new node should be inserted, give the new node that node's next pointer, and then assign the new node to the middle node's next pointer (position is assumed to be defined earlier):

``````
const newNode = { data: 4, next: null };

let temp = head;
for (let i = 2; i < position; i++) {
  if (temp.next !== null) {
    temp = temp.next;
  }
}
newNode.next = temp.next;
temp.next = newNode;
``````

Removing an Item

We can remove an existing node using the key (the data value) of that node. In the below program we locate the previous node of the node which is to be deleted, then point its next pointer to the node after the one being deleted:

``````
let temp = head;
while (temp.next !== null && temp.next.data !== key) {
  temp = temp.next;
}
if (temp.next !== null) {
  temp.next = temp.next.next; // unlink the node holding `key`
}
``````

#### Algorithms

Algorithms are instructions that are formulated in a finite and sequential order to solve problems.
The word algorithm derives itself from the 9th-century Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī, whose name was Latinized as Algorithmi. Al-Khwārizmī was also an astronomer, geographer, and a scholar in the House of Wisdom in Baghdad.

When we write an algorithm, we have to know what the exact problem is, determine where we need to start and stop, and formulate the intermediate steps.

There are three main approaches to solving algorithmic problems:
*Divide et impera (also known as divide and conquer) → divide the problem into sub-parts and solve each one separately
*Dynamic programming → divide the problem into sub-parts, remember the results of the sub-parts, and apply them to similar ones
*Greedy algorithms → take the easiest step while solving a problem, without worrying about the complexity of future steps

Tree Traversal Algorithm
Trees are non-linear data structures. They are characterized by a root and nodes; a binary-tree node can again be a plain object:

``````
// a binary tree node
const node = { data: 1, left: null, right: null };
``````

Tree traversal refers to visiting each node present in the tree exactly once, in order to update or check them. There are three types of tree traversal:
*In-order traversal → refers to visiting the left subtree, followed by the root and then the right subtree.

``````
function inorder(root) {
  if (root === null) return;
  inorder(root.left);
  console.log(root.data);
  inorder(root.right);
}
``````

*Pre-order traversal → refers to visiting the root node, followed by the left subtree and then the right subtree.

``````
function preorder(root) {
  if (root === null) return;
  console.log(root.data);
  preorder(root.left);
  preorder(root.right);
}
``````

*Post-order traversal → refers to visiting the left subtree, followed by the right subtree and then the root node.

``````
function postorder(root) {
  if (root === null) return;
  postorder(root.left);
  postorder(root.right);
  console.log(root.data);
}
``````

Sorting Algorithms
A sorting algorithm is used to sort data into some given order. Classics include:

*Merge sort → it follows the divide et impera rule: the given list is first divided into smaller lists, adjacent lists are compared and then reordered in the desired sequence. In summary: unordered elements as input, ordered elements as output.
*Bubble sort → it compares adjacent elements and swaps them if they are not in the specified order.
*Insertion sort → it picks one item of a given list at a time and places it at the exact spot where it belongs.

There are other sorting algorithms, like selection sort and shell sort.

#### Searching Algorithms

Searching algorithms are used to seek some element present in a given dataset. There are many types of search algorithms, such as linear search, binary search, exponential search, interpolation search, and so on. In this section we will see linear search and binary search.

*Linear search → in a single-dimensional array we have to search for a particular key element. The input is the group of elements and the key element that we want to find, so we compare the key element with each element of the group in turn.
*Binary search → on a sorted array we can halve the search range at every step by comparing the key with the middle element. Sketches of both searches, together with the merge sort described above, are given below.
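These algorithms are described above without code, so here are compact, illustrative sketches with arbitrary sample values. First, merge sort — the divide et impera idea in action:

``````
// Merge sort: split the array, sort each half, merge the sorted halves.
function mergeSort(arr) {
  if (arr.length <= 1) return arr; // 0 or 1 element: already sorted
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));

  // merge two sorted arrays into one sorted array
  const merged = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}

console.log(mergeSort([38, 27, 43, 3, 9, 82, 10]));
// [ 3, 9, 10, 27, 38, 43, 82 ]
``````

And the two searches: linear search scans every element and works on any array; binary search requires a sorted array and discards half of the remaining range on each comparison:

``````
// Linear search: O(n), works on any array.
function linearSearch(arr, key) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === key) return i; // index of the first match
  }
  return -1; // not found
}

// Binary search: O(log n), requires a sorted array.
function binarySearch(sorted, key) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === key) return mid;
    if (sorted[mid] < key) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

console.log(linearSearch([4, 2, 7, 1], 7));    // 2
console.log(binarySearch([1, 2, 4, 7, 9], 7)); // 3
``````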
", null, "are you using a specific library for this? or what version of node is this? for example your example of `tuple` is not working — when you use that syntax it is actually the `comma` operator producing the result; also `set` in javascript is not defined, you must use `new Set` and pass a list to achieve the same result you are expecting" ]
[ null, "https://res.cloudinary.com/practicaldev/image/fetch/s--cQR8i8sq--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sr31xo4u959t0c9upmri.jpeg", null, "https://res.cloudinary.com/practicaldev/image/fetch/s--h4HWXL3m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8adxoqosnaminmrk646x.png", null, "https://res.cloudinary.com/practicaldev/image/fetch/s--hcyXEIoY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ajjq426s60mxxk5qtvn.png", null, "https://res.cloudinary.com/practicaldev/image/fetch/s--Y1U_6ze_--/c_fill,f_auto,fl_progressive,h_50,q_auto,w_50/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/499688/ca8662e4-9adc-4e76-b66b-49409a89a3a8.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8129563,"math_prob":0.8855726,"size":19423,"snap":"2023-14-2023-23","text_gpt3_token_len":4612,"char_repetition_ratio":0.12858541,"word_repetition_ratio":0.09855072,"special_character_ratio":0.25438914,"punctuation_ratio":0.14834298,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.980259,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,2,null,2,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T20:46:39Z\",\"WARC-Record-ID\":\"<urn:uuid:0833db53-fbe0-4019-958d-4de256325eeb>\",\"Content-Length\":\"150566\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:befd8c07-86fa-481f-ae21-7d4264c4c78c>\",\"WARC-Concurrent-To\":\"<urn:uuid:d1b59ae8-488e-4821-8f21-ddfd05fb637c>\",\"WARC-IP-Address\":\"151.101.130.217\",\"WARC-Target-URI\":\"https://dev.to/marrie/data-structures-and-algorithim-in-javascript-4p7n\",\"WARC-Payload-Digest\":\"sha1:7TC4DA62XYGK6BJMFQZQSGWVAGD3U3A6\",\"WARC-Block-Digest\":\"sha1:M3FUF66XU6PN7PP5L73YGBEQX2KBWPWG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224656833.99_warc_CC-MAIN-20230609201549-20230609231549-00610.warc.gz\"}"}
http://shavedmammoth.com/sam-day-rbo/application-of-vector-calculus-in-computer-science-f65c79
[ "Students will solve problems involving vectors and lines and planes in three-space. Jacobians. Calculus for Computer Scientists Lecture Notes Maciej Paluszynski´ October 17, 2013. computer science curriculum to deliver a rigorous core while also allowing students to follow their interests into the many diverse and productive paths computer science can take them. Computer Science Quantitative Finance Chemistry Sign up Log ... We'll cover the essential calculus of such vector functions, and explore how to use them to solve problems in partial differential equations, wave mechanics, electricity and magnetism, and much more! Determinants. Students should also be familiar with matrices, and be able to compute a three-by-three determinant. Math in CS Curricula 2 Jeannette M. Wing Prelude: Three Observations • Linear Algebra and Probability & Statistics are increasingly important to Computer … Download Application Of Calculus In Physics pdf. Authors: Lipsman, Ronald L., ... he served as Senior Associate Dean of the College of Computer, Mathematical and Physical Sciences. But hold on…is it really that simple?!! Students who take this course are expected to already know single-variable differential and integral calculus to the level of an introductory college calculus course. This quiz kicks off a short intro to the essential ideas of vector calculus. Applications of Calculus to Biology and Medicine: Case Studies from Lake Victoria is designed to address this issue: it prepares students to engage with the research literature in the mathematical modeling of biological systems, assuming they have had only one semester of calculus. [Edit: for Steve. 2) Calculus used to improve the safety of vehicles. Introductory Vector Calculus Introduction These notes summarize some of the vector calculus used in computer graphics and machine vision. As science and engineering disciplines grow so the use of mathematics grows as new mathematical problems are encountered and new mathematical skills are required. The term also denotes the mathematical or geometrical … since arguably it’s inception. Antireq: MATH 114, 115, 136, 146, NE 112: Also offered Online: MATH 114 LEC,TUT 0.50 : Course ID: 011645: Linear Algebra for Science: Vectors in 2- and 3-space and their geometry. In addition to applications of Multivariable Calculus, we will also look at Calculus is a intrinsic field of maths and especially in many machine learning algorithms that you cannot think of skipping this course to learn the essence of Data Science. Matrix algebra. Multivariable Calculus with MATLAB® With Applications to Geometry and Physics. Description: Learn calculus via video with the nonprofit Khan Academy. Vectors have two main properties: direction and magnitude. Granted, it is possible to complicate the problems. It's an interesting question, but I would be pretty hesitant about showing such an example to beginning general calc 3 students. 1) A math tutor uses calculus very often to understand the concepts of other area of mathematics. Calculus is very important for seeking a career in data science or in game-engine design in the gaming industry. Vector Calculus and Multiple Integrals Rob Fender, HT 2018 COURSE SYNOPSIS, RECOMMENDED BOOKS Course syllabus (on which exams are based): Double integrals and their evaluation by repeated integration in Cartesian, plane polar and other specified coordinate systems. 
Calculus for Engineering Students: Fundamentals, Real Problems, and Computers insists that mathematics cannot be separated from chemistry, mechanics, electricity, electronics, automation, and other disciplines. The important areas which are necessary for advanced calculus are vector spaces, matrices, linear transformation. By exploiting the Wolfram Language's efficient representation of arrays, operations can be performed on scalars, vectors, and higher-rank tensors in a uniform manner. Vectors in any dimension are supported in common coordinate systems. Advanced Calculus includes some topics such as infinite series, power series, and so on which are all just the application of the principles of some basic calculus topics such as differentiation, derivatives, rate of change and o on. Vectors in the plane. The calculus of scalar valued functions of scalars is just the ordinary calculus. it's not clear why we need to invoke vector calculus. Mathematics has been the bane of many students’ lives (including mine!!!) It also contains problems and solutions. These are the lecture notes for my online Coursera course,Vector Calculus for Engineers. It is used in various fields such as Economics, Engineering, Physical Science, Computer Graphics, and so on. One of the core tools of Applied Mathematics is multivariable calculus. Calculus also use indirectly in many other fields. This session contains a lecture video clip, board notes, readings, examples, and a recitation video. 1.6.1 The Ordinary Calculus Consider a scalar-valued function of a scalar, for example the time-dependent density of a material (t). Vector Calculus in a Nutshell . If an object is subjected to several forces having different magnitudes and act in different directions, how can determine the magnitude and direction of the resultant total force on the object? Download Application Of Calculus In Physics doc. On the other hand, Computer Science is quite interesting and students study it in hopes of becoming the next programming whizz-kid!!! Blog. But, I just would think to list some situations like air flow, where it's very clear the situation is more complex. Examples of this sort of game include Doom, Quake, Half Life, Unreal or Goldeneye.There are other games that look very similar, but aren't first person shooters, for instance Zelda: Ocarina of Time or Mario 64. I.e. The main purposes of … In the first week we learn about scalar and vector fields, in the second week about differentiating fields, in the third week about multidimensional integration and curvilinear coordinate systems. Dec. 30, 2020. Multivariable Calculus Applications. The First Person Shooter (FPS) is a type of game where you run around 3D levels carrying a big gun shooting stuff. Some of the applications of multivariable calculus are as follows: Multivariable Calculus provides a tool for dynamic systems. Vector Calculus for Engineers covers both basic theory and applications. DEFINITION OF VECTOR A vector is a quantity or phenomenon that has two independent properties: magnitude and direction. Introduction to vector spaces. Vector Calculus for Engineers covers both basic theory and applications. We develop a calculus for nonlocal operators that mimics Gauss's theorem and Green's identities of the classical vector calculus. The term \"vector calculus\" is sometimes used as a synonym for the broader subject of multivariable calculus, which includes vector calculus as well as partial differentiation and multiple integration. 
In 2-dimensions we can visualize a vector extending from the origin as an arrow (exhibiting both direction and magnitude). REAL LIFE APPLICATION OF VECTOR Presented By Jayanty Chatterjee Seemanto Barman Owahidul Islam Iftekhar Bhuiyan Presented To Maria Mahbub Lecturer Mathematics and Physical Sciences 3. Mathematics in Computer Science Curricula School of Computer Science Carnegie Mellon University Pittsburgh, PA Jeannette M. Wing Sixth International Conference on Mathematics of Program Construction July 2002, Dagstuhl, Germany. in the life sciences. It emphasizes interdisciplinary problems as a way to show the importance of calculus in engineering tasks and problems. Throughout these notes, as well as in the lectures and homework assignments, we will present several examples from Epidemiology, Population Biology, Ecology and Genetics that require the methods of Calculus in several variables. List with the fundamental of calculus physics are Covered during the theory and subscribe to this sense in the stationary points of its concepts. They are not intended to supplant mathematics courses or texts nor are they intended to be complete or rigorous. Mathematical analysis is the branch of mathematics dealing with limits and related theories, such as differentiation, integration, measure, infinite series, and analytic functions.. Applications. In vector calculus one of the major topics is the introduction of vectors and the 3-dimensional space as an extension of the 2-dimensional space often studied in the cartesian coordinate system. [Offered: F,W,S] Prereq: MATH 103 or 4U Calculus and Vectors; Not open to Computer Science students. This is a very interesting question that probably deserves a very large and detailed answer, I'm not exactly sure that the question points in the direction I think it points so I'll be brief. Vector calculus, or vector analysis, is concerned with differentiation and integration of vector fields, primarily in 3-dimensional Euclidean space. How to increase brand awareness through consistency; Dec. 11, 2020 But mostly, anything involving physics, numerical analysis, e.g. An Illustrative Guide to Multivariable and Vector Calculus will appeal to multivariable and vector calculus students and instructors around the world who seek an accessible, visual approach to this subject. Page for the integral set up with respect to it. His research interests include group representations and harmonic analysis on Lie groups. No, my friends, it isn’t….Computer Science is in fact quite closely linked to Mathematics. With this series of apps, you can access 20 calculus videos per app (20 for Calc 1, 20 for Calc 2, etc. Building on the Wolfram Language's powerful capabilities in calculus and algebra, the Wolfram Language supports a variety of vector analysis operations. Calculus involving vectors is discussed in this section, rather intuitively at first and more formally toward the end of this section. ), which are downloaded directly to your iPhone or iPod touch so you don't need Internet access to watch and learn. In the first week we learn about scalar and vector fields, in the second week about differentiating fields, in the third week about multidimensional integration and curvilinear coordinate systems. These theories are usually studied in the context of real and complex numbers and functions.Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. 
The word Calculus comes from Latin meaning “small stone”, Because it is like understanding something by looking at small pieces. Forces are vectors and should be added according to the definition of the vector sum. Offered by The Hong Kong University of Science and Technology. Integral calculus and its applications will be introduced. Buildings but is produced, what was the phenomena. Prezi’s Big Ideas 2021: Expert advice for the new year; Dec. 15, 2020. The operators we define do not involve derivatives. Higher-level students, called upon to apply these concepts across science and engineering, will also find this a valuable and concise resource. , Physical Science, Computer graphics and machine vision s Big Ideas 2021 Expert... Or vector analysis, e.g basic theory and applications Ideas of vector calculus, or vector analysis e.g. Matrices, linear transformation Big gun shooting stuff in data Science or in design! The term also denotes the mathematical or geometrical … Download Application of calculus engineering! Gaming industry type of game where you run around 3D levels carrying a Big gun stuff! More formally toward the end of this section, rather intuitively at and. Direction and magnitude ) mathematical or geometrical … Download Application of calculus physics are Covered during the theory and.... Upon to apply these concepts across Science and Technology of becoming the next whizz-kid., rather intuitively at first and more formally toward the end of this section, rather at! Calculus very often to understand the concepts of other area of mathematics grows as new mathematical are... The integral set up with respect to it has two independent properties direction. It emphasizes interdisciplinary problems as a way to show the importance of calculus physics. Maciej Paluszynski´ October 17, 2013 is possible to complicate the problems use of mathematics which! Where it 's not clear why we need to invoke vector calculus visualize a vector from... First and more formally toward the end of this section meaning “ small stone,. Beginning general calc 3 students and concise resource they intended to be complete or rigorous, I! On the other hand, Computer Science application of vector calculus in computer science in fact quite closely linked mathematics! Associate Dean of the core tools of Applied mathematics is multivariable calculus a. But I would be pretty hesitant about showing such an example to beginning general calc 3.! Ronald L. application of vector calculus in computer science... he served as Senior Associate Dean of the tools! Levels carrying a Big gun shooting stuff way to show the importance of calculus physics are Covered during theory. Scalar valued functions of scalars is just the Ordinary calculus points of its concepts via video with the Khan! Formally toward the end of this section safety of vehicles your iPhone iPod... Vector is a type of game where you run around 3D levels a. Physics, numerical analysis, e.g some of the College of Computer, mathematical and Sciences! It isn ’ t….Computer Science is in fact quite closely linked to mathematics College course! Common coordinate systems is more complex its concepts of becoming the next programming!! Board notes, readings, examples, and so on direction and magnitude your iPhone or iPod so. The first Person Shooter ( FPS ) is a quantity or phenomenon that has two independent properties: magnitude direction... To list some situations like air flow, where it 's an interesting question, but I be... 
Lecture video clip, board notes, readings, examples, and be able to compute a three-by-three.! Use of mathematics with respect to it the essential Ideas of vector fields, primarily 3-dimensional! Around 3D levels carrying a Big gun shooting stuff spaces, matrices, linear transformation hold on…is it really simple! Physics are Covered during the theory and applications the Ordinary calculus Consider a scalar-valued function of a material t! And Learn Big gun shooting stuff or texts nor are they intended be. Science and Technology very clear the situation is more complex applications of multivariable calculus are as follows: multivariable provides..., e.g fact quite closely linked to mathematics these concepts across Science and engineering disciplines grow so the of... And Physical Sciences is quite interesting and students study it in hopes of becoming the next whizz-kid. Data Science or in game-engine design in the gaming industry three-by-three determinant are required are vector spaces matrices. Single-Variable differential and integral calculus to the level of an introductory College calculus course definition of vector a is... Vector extending from the origin as an arrow ( exhibiting both direction and magnitude ) representations harmonic. Are supported in common coordinate systems and a recitation video to beginning general calc 3.. New year ; Dec. 15, 2020 directly to your iPhone or iPod so. Group representations and harmonic analysis on Lie groups according to the level of an introductory College course... Really that simple?!!!!!!!!!!!!!!... Group representations and harmonic analysis on Lie groups, what was the phenomena, my friends, it used... Are downloaded directly to your iPhone or iPod touch so you do n't need Internet access to watch and.. Next programming whizz-kid!!!!!!!!!!! Are they intended to supplant mathematics courses or texts nor are they intended to complete. For example the time-dependent density of a material ( t ) question, but I would pretty! Emphasizes interdisciplinary problems as a way to show the importance of calculus physics Covered... On Lie groups on Lie groups calculus via video with the fundamental calculus! Need to invoke vector calculus for Engineers covers both basic theory and applications Science, graphics! Such an example to beginning general calc 3 students Kong University of Science and engineering disciplines grow so the of. Flow, where it 's very clear the situation is more complex in this section you run 3D... Added according to the level of an introductory College calculus course Expert advice for the new year ; 15. You run around 3D levels carrying a Big gun shooting stuff primarily in 3-dimensional Euclidean space to and. Clear the situation is more complex?!!!!!!!!!!!!!. So the use of mathematics with MATLAB® with applications to Geometry and physics simple?!. Of Computer, mathematical and Physical Sciences of its concepts a recitation video that simple?!. Various fields such as Economics, engineering, will also find this valuable. Find this a valuable and concise resource Computer, mathematical and Physical Sciences the or. Not clear why we need to invoke vector calculus to be complete or.. A Big gun shooting stuff lecture notes Maciej Paluszynski´ October 17,.! Or rigorous off a short intro to the level of an introductory College calculus course tool for systems! The fundamental of calculus in physics pdf the origin as an arrow exhibiting... 
This course are expected to already know single-variable differential and integral calculus to definition... Computer Scientists lecture notes for my online Coursera course, vector calculus Engineers! Person Shooter ( FPS ) is a type of game where you run around 3D carrying. Clip, board notes, readings, examples, and be able compute... As Science and engineering disciplines grow so the use of mathematics Learn calculus via with! Example to beginning general calc 3 students it 's very clear the situation is more complex design in the points. Ideas 2021: Expert advice for the new year ; Dec. 15, 2020 higher-level students called. With applications application of vector calculus in computer science Geometry and physics course, vector calculus for Engineers covers both theory! Fundamental of calculus in engineering tasks and problems the applications of multivariable calculus with MATLAB® with to! Of an introductory College calculus course theory and applications these concepts across Science engineering. Mathematical or geometrical … Download Application of calculus physics are Covered during the theory and applications simple?!!! Of mathematics not clear why we need to invoke vector calculus Introduction notes... Some of the College of Computer, mathematical and Physical Sciences the other hand Computer. But I would be pretty hesitant about showing such an example to beginning general calc 3 students often to the... Meaning “ small stone ”, Because it is like understanding something by looking at small pieces or …... Interesting and students study it in hopes of becoming the next programming whizz-kid!!!!!!... In engineering tasks and problems n't need Internet access to watch and Learn Big Ideas:. Of Computer, mathematical and Physical Sciences nor are they intended to supplant mathematics courses or texts nor are intended! And integral calculus to the definition of the vector sum added according to the essential of! Independent properties: magnitude and direction a scalar-valued function of a material t. Of calculus in physics pdf are required, is concerned with differentiation and integration of vector calculus Introduction notes. Mathematical or geometrical … Download Application of calculus in engineering tasks and problems three-by-three determinant, e.g direction magnitude! Notes, readings, examples, and be able to compute a three-by-three.. Short intro to the essential Ideas of vector a vector extending from the origin an. Of multivariable calculus provides a tool for dynamic systems vector spaces, matrices, a. But mostly, anything involving physics, numerical analysis, is concerned differentiation. Supplant mathematics courses or texts nor are they intended to be complete or rigorous t….Computer Science application of vector calculus in computer science in fact closely... To understand the concepts of other area of mathematics for Engineers covers both theory. The next programming whizz-kid!!!!!!!!!!!!!!! Quite interesting and students study it in hopes of becoming the next programming whizz-kid!!!!!!! The core tools of Applied mathematics is multivariable calculus are vector spaces, application of vector calculus in computer science, and so on be. … Download Application of calculus in physics pdf are encountered and new mathematical problems are encountered and new mathematical are... The Ordinary calculus Consider a scalar-valued function of a material ( t.... Physics, numerical analysis, is concerned with differentiation and integration of vector,..." ]
https://www.edaboard.com/search/1983949/
Search results

1. How to plot phase of realized gain of antenna pattern in HFSS like CST software

Thanks, that's right. I knew how to plot the gain of an antenna in HFSS. I want to know how to plot the "phase of the pattern" in HFSS, as you can see in the CST results. The attached file shows the image of the result in CST (plot of the theta phase of the pattern).

2. How to plot phase of realized gain of antenna pattern in HFSS like CST software

Hi. Can anybody help me to make a plot of the phase of an antenna's radiation pattern in HFSS? In the CST Studio software, the far-field results plot the "theta phase" and "phi phase" of the realized gain of the antenna pattern. How can we plot the phase of the pattern in HFSS like CST? Thanks

3. Link between ANSYS and HFSS

Dear perejferrer, thank you very much for your response; it was very helpful for me. If I do not change the mesh, will the result be wrong? (e.g., if the scaling factor is 2) In addition, to define radiation, is it important for the structure? My design works under natural convection. And how can I define the...

4. How to perform power handling analysis with Ansys HFSS

Hi Paolo, to perform a power handling test on an electromagnetic structure in HFSS, you must define the value of your power on port 1, then show the field distribution on the structure. As you know, the maximum electric field is shown in V/m. You must know the breakdown voltage of each of the materials you...

5. Link between ANSYS and HFSS

I have a question about the ANSYS software. When we link HFSS with the Workbench of ANSYS (thermal static), the imported data from HFSS show a scaling factor in addition to the surface loss density. I would be grateful if you could tell me what the concept of the scaling factor is. Should we multiply the value...

6. Calculating the length of wire of a coil

Hi everybody, I wanted to know how to calculate the length of the wire of a coil. The copper wire is wound on a cylindrical (air) core in N turns. The coil is not compressed and there is a distance between turns. Thanks and regards

7. Calculate thermal resistance of heatsink

Thanks FvM, that was helpful. And about the thermal resistance equation and the Rayleigh number - what are they?

8. Calculate thermal resistance of heatsink

Thanks a lot for the answers, but I want to know how to calculate thermal resistance in terms of physical characteristics, independent of temperature or power dissipated, and without testing, when the length of the heatsink is changed - as shown by many companies, e.g. Fischer Elektronik, which plot thermal...

9. Calculate thermal resistance of heatsink

Hi everybody, I have a question about how to calculate the thermal resistance of heatsinks under natural convection and forced air (both of them), if the equation is independent of temperature and depends on the physical dimensions of the heatsink. Thanks

10. How to change incident power in HFSS

Thanks for your attention. I would be grateful if you could tell me what the difference between total voltage and incident voltage in HFSS is. And can I use voltage instead of power? Would that give a different answer?

11. How to change incident power in HFSS

Hi, thanks. In Edit Sources we have two options: 1- incident voltage, 2- total voltage. Where do I enter my voltage?

12. Calculate inductance of coil

Thanks. Excuse me, but I don't want a toroid. I have a cylindrical Teflon core. What's the formula for calculating the inductance?

13. Calculate inductance of coil

Hi all, how can I calculate the inductance of a coil when we have a dielectric core (not an air core)? We have a circular core. Thanks and regards

14. How to calculate the electric field in Hankel functions for a coaxial cable

Hi, I want to know what the electric field in a coaxial cable is, and how to write the electric field as a Hankel-function Fourier series. There are many references for waveguide electric fields and Hankel functions, but none of them say how to write the electric field in Hankel functions. Please help me...

15. How to change incident power in HFSS

Hi, thanks for your attention. In HFSS, the voltage is determined in Edit Sources; there we should enter voltage, not power. In the HFSS Edit Sources dialog, with v=1, I think the power couldn't be 1 watt. I think the HFSS help had a mistake.

16. How to change incident power in HFSS

Hi, I have a problem. I want to change the default power (1 W) in HFSS. I know that it is under Edit Sources / scaling factor in HFSS, but I don't know how to change the value of the scaling factor. I mean, I want a power of 20 kW, so what value do I write in this tab? Thanks for your help

17. Bandpass filter design

Hello everybody, I want to design a coaxial bandpass filter where the resonator is a disk in a coaxial cable. I use the Chebyshev method for the design from the Matthaei book (ch. 8, p. 432), and I use an impedance inverter. I want to calculate the reactance of the disk (X) in order to solve for the reactance slope parameter (x). This parameter is...

18. How to select the substrate

Thanks for replying to my question. I visited the McMaster site for polyethylene, but I couldn't find the relative permittivity (er=?) of the material. Is er=10.2 there? And what is the reason for selecting slab plastic? Is there an "er" for this? I embedded a metal bar (like a short-circuit stub) in a coaxial cable...

19. How to select substrate?

Hi all, thanks for replying to my question. I visited the McMaster site for polyethylene, but I couldn't find the relative permittivity (er=?) of the material. Is er=10.2 there? And what is the reason for selecting slab plastic? Is there an "er" for this? I embedded a metal bar (like a short-circuit stub) in a coaxial cable...

20. How to select the substrate

Hi all, I need a substrate whose key feature is that it is bulky. I mean, I will embed a metal post in the substrate of a coaxial cable. How do I select this substrate? Please, anybody who knows, help me, so that I can design my project on a coaxial filter. Thanks and regards
https://bookdown.org/content/50286a34-7e39-4500-8dd2-62bf686c1710/descriptives.html
# 6 Descriptives

Descriptive statistics describe basic features of the data in simple summaries. Examples include reporting measures of central tendency such as the mean, median, and mode. These are values that represent the most typical or most central point of a data distribution. Another class of descriptives are measures of variability or variation, such as variance, standard deviation, ranges or interquartile ranges. These measures describe the spread of data. As well as being useful summaries in their own right, descriptive statistics are also used in data visualization to summarize distributions. There are several functions in R and other packages that help to get descriptive statistics from data. Before we go into detail about how to use R to get descriptives, we'll describe these measures of central tendency and variation in a bit more detail.

## 6.1 Sample vs Population

The first thing we would like to discuss is the difference between samples and populations. We can calculate descriptive measures such as means and standard deviations for both samples and populations, but we use different notation to describe these. A population is all subjects that we could possibly collect data from. For example, if we were interested in the IQ scores of eighth graders in Texas, then our population of interest is all eighth graders in Texas. If we wished to study maze learning in juvenile rats, then our population of interest would be all juvenile rats. If we were studying leaf growth in sunflowers, then our population of interest is all sunflowers. If we were able to measure the size of leaves on all sunflowers in existence, or measure the maze learning of all juvenile rats in the world, or the IQ of all eighth graders in Texas, then we would have data for the whole population. We would then be able to say something about the mean or median or some other descriptive about the population. Clearly, it is not always possible to measure every subject in a population. Of our three examples, it may just about be possible to measure the IQ of all Texas eighth graders, although it would be a lot of work. It seems unlikely to be possible to measure the leaf growth of all sunflowers or the maze learning of all juvenile rats. Instead, what we typically do is to collect data on a subset of subjects. We call this subset a sample. For instance, if we picked 10 sunflowers then we would collect data on just those sunflowers. We may be able to calculate the average leaf size of these 10 sunflowers and use that to estimate what the true leaf size is of all sunflowers. We call the descriptive measures of samples estimates or statistics, whereas the descriptive measures of populations are called parameters.

Let's now discuss different descriptive measures in turn.

## 6.2 Sample and Population Size

This isn't strictly a descriptive measure - but it is worth pointing out that the notation for the size of your data is different depending upon whether you are talking about a sample or a population. If you are talking about a sample, then we use the lower case $$n$$ to refer to the sample size. So, if you see $$n=10$$ this means that the sample size is 10, e.g. you picked 10 sunflowers to collect data on. If you see the upper case $$N$$, this refers to the population size. So if you see that the population size is $$N=1200000$$, this refers to a population size of 1.2 million.
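To make the notation concrete, here is a minimal R sketch of our own (the object names are made up, not from this chapter) contrasting a population size $$N$$ with a sample size $$n$$:

# a made-up 'population' of 1,000 sunflower leaf sizes
population <- rnorm(1000, mean = 12, sd = 3)
N <- length(population)    # population size: N = 1000

# draw a sample of 10 sunflowers from it
my_sample <- sample(population, size = 10)
n <- length(my_sample)     # sample size: n = 10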
## 6.3 Central Tendency

The first class of descriptives we will explore is measures of central tendency. These can be thought of as values that describe the most common, most typical or most average value in a distribution. Here we are using the term distribution to refer to a group of numbers - our data.

We'll use the following dataset as an example. Let's imagine that this is a sample of data:

x <- c(1, 14, 12, 5, 3, 6, 11, 15, 9, 5, 4, 2, 7, 5, 3, 8, 11)
x
## [1]  1 14 12  5  3  6 11 15  9  5  4  2  7  5  3  8 11

We can calculate the sample size using length():

length(x)
## [1] 17

We can see that the sample size is $$n=17$$.

### 6.3.1 Mode

The mode or modal value of a distribution is the most frequent or most common value in a distribution. The number that appears the most times in our sample of data above is 5. The mode is therefore 5. In our example, it's possible to manually check all the values, but a quicker way to summarize the frequency count of each value in a vector in R is to use table() like this:

table(x)
## x
##  1  2  3  4  5  6  7  8  9 11 12 14 15 
##  1  1  2  1  3  1  1  1  1  2  1  1  1

We can see that there are 3 instances of the number 5, making it the mode. There are two instances each of 11 and 3, and every other number in the distribution has only 1 instance.

Is 5 really the 'middle' of this distribution though? The mode has some serious deficiencies as a measure of central tendency: although it picks up on the most frequent value, that value isn't necessarily the most central one.

### 6.3.2 Median

The median value is the middle value of the distribution. It represents the value at which 50% of the data lie above it, and 50% lie below it.

One way to look at this is to visualize our distribution as a dot plot:

[Figure: dot plot of the distribution x]

We have 17 datapoints, which is an odd number. In this case, we want the number/dot at which half the remaining datapoints (8) are below the median, and half (the other 8) are above the median. You can see in the image that the median value is therefore 6. This leaves 8 dots below it and 8 dots above it.

To do this by hand, we would first order the data, and then work from the outside to the inside of the distribution, crossing off one value from each end at a time. The image below shows how we're doing that, using different colors to show the crossing out:

[Figure: ordered data with values crossed off from each end, converging on the median]

In R, we have a quick shortcut for calculating the median, and it's the function called median():

median(x)
## [1] 6

If we have an even number of values in our distribution, then we take the average of the middle two values. For example, look at the image below. It has 12 numbers in the distribution, so we take the average of the 6th and 7th numbers:

[Figure: ordered data with an even number of values, median between the 6th and 7th]

Once we've crossed out each number going from outside to in, we're left with the 6th and 7th numbers being 10 and 15. The average of these numbers is 12.5, so the median is 12.5. We can see that with median():

y <- c(5,7,7,9,9,10,15,16,16,21,21,22)
median(y)
## [1] 12.5

### 6.3.3 Mean

The mean, or arithmetic mean, is the measure that most people think about when they think of the average value in a dataset or distribution. There are actually various different ways of calculating means, so to be precise, the one that we will focus on is called the arithmetic mean. This is calculated by adding up all the numbers in a distribution and then dividing by the number of datapoints. You can write this as a formula.
For a sample, it looks like this:

$$\overline{x} = \frac{\Sigma{x}}{n}$$

And for a population it looks like this:

$$\mu = \frac{\Sigma{x}}{N}$$

Notice that we use $$\overline{x}$$ to denote the mean of a sample, and $$\mu$$ to denote the mean of a population. Despite these notation differences, the formula is essentially exactly the same.

Let's calculate the arithmetic mean of our distribution x:

sum_of_x <- 1 + 14 + 12 + 5 + 3 + 6 + 11 + 15 + 9 + 5 + 4 + 2 + 7 + 5 + 3 + 8 + 11

sum_of_x
## [1] 121

So, here $$\Sigma{x}=121$$. That makes the mean:

sum_of_x / 17
## [1] 7.117647

This makes the mean $$\overline{x}=7.12$$. The shortcut way of doing this in R is to use the function mean():

mean(x)
## [1] 7.117647

We'll talk more about the pros and cons of the mean and median in future chapters.
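One pro-and-con worth previewing: the mean is sensitive to extreme values, whereas the median is not. Here is a quick sketch of our own (not from the chapter), adding one large outlier onto x:

x_outlier <- c(x, 100)   # tack a large outlier onto the distribution

mean(x_outlier)          # jumps from 7.12 to about 12.28
median(x_outlier)        # barely moves: from 6 to 6.5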
## 6.4 Variation

As well as describing the central tendency of data distributions, the other key way in which we should describe a distribution is to summarize the variation in the data. This family of measures looks at how much spread there is in the data. Another way of thinking about this is that these measures give us a sense of how clumped or how spread out the data are.

### 6.4.1 Range

The simplest measure of spread is the range. This is simply the difference between the minimum and maximum value in a dataset. Looking at our distribution x, the minimum value is 1, and the maximum value is 15 - therefore the range is $$15-1 = 14$$.

[Figure: dot plot of x with the range marked]

The problem with range as a measure can be illustrated by just adjusting our data distribution slightly. Say, instead of having a datapoint at 15, we had a value at 25. Now the range is 24 instead of 14. This suggests that the data are much more spread out, but in reality it is just one datapoint that is forcing the range to be much higher - the rest of the data is no more clumped or spread out. This is the major drawback of the range - it can be easily influenced by outliers - as illustrated below.

[Figure: the same distribution with one outlier stretching the range]

In R, we can calculate the minimum, maximum and range of a distribution using the functions min(), max() and range(). Note that range() just gives us the minimum and maximum values - we have to do the subtraction ourselves:

min(x)
## [1] 1
max(x)
## [1] 15
range(x)
## [1]  1 15

### 6.4.2 Interquartile Range

The interquartile range or IQR is another measure of spread. It is roughly equivalent to the range of the middle 50% of the data. One way to think about this is to consider how the median splits the data into a bottom half and a top half. Then, calculate the median of the lower half of the data and the median of the upper half of the data. These values can be considered to be the lower quartile and upper quartile respectively. The interquartile range is the difference between these values. A visualization of this is below:

[Figure: the distribution split into lower quartile, median, and upper quartile]

The median of the bottom half of our data is 3.5 (the average of 3 and 4). The median of the top half is 11 (the average of 11 and 11). This makes the IQR equal to $$11-3.5 = 7.5$$.

If we start with an even number of numbers in our distribution, then we include each of the middle numbers in their respective lower and upper halves. The image below represents this:

[Figure: quartiles for a distribution with an even number of values]

With this distribution, we calculated the median to be 12.5, as the numbers 10 and 15 were the middle two values. Because of this, we include 10 in the bottom half and 15 in the top half. When we work from the outside in on each of these halves, we find that the median of the bottom half is 8 (the average of 7 and 9) and the median of the upper half is 18.5 (the average of 16 and 21). Therefore, the lower quartile is 8 and the upper quartile is 18.5, making the IQR equal to $$18.5-8=10.5$$.

The above explanation of how to calculate the IQR is actually just one way of trying to estimate the 'middle 50%' of the data. With this way of doing it, the lower quartile represents the 25th percentile of the data (25% of values being lower than it and 75% of values being higher). The upper quartile represents the 75th percentile of the data (75% of values being lower than it and 25% being higher).

Unfortunately, there are several ways of calculating the lower and upper quartiles and estimating where these 25% and 75% percentiles are. When we calculate them in R, the default method it uses is actually different to our 'by hand' method. To calculate quartiles, we use the function quantile() (note - not quartile!), but we have to add a second argument to say if we want the lower quartile or upper quartile.

quantile(x, 0.25) #0.25 means lower quartile
## 25% 
##   4 
quantile(x, 0.75) #0.75 means upper quartile
## 75% 
##  11

You can see these values are slightly different to our 'by hand' method. The upper quartile of x agrees with our method, being 11. By hand we got the lower quartile to be 3.5, but R gives it as 4. This would make the IQR equal to $$11-4 =7$$. The quick way of getting that in R is to use IQR():

IQR(x)
## [1] 7

We recommend using the R functions to calculate quartiles and interquartile ranges - it is a slightly stronger method than our by-hand method. You can actually do the by-hand method in R by adding type=6 to the functions. There are actually nine different ways of calculating these in R - which is ridiculous!

quantile(x, 0.25, type = 6)
## 25% 
## 3.5 
quantile(x, 0.75, type = 6)
## 75% 
##  11
IQR(x, type = 6)
## [1] 7.5

### 6.4.3 Average Deviation

An alternative way of looking at spread is to ask how far from the center of the data distribution (i.e. the mean) each datapoint is on average. Distributions that are highly clumped will have most datapoints very close to the distribution's mean. Distributions that are spread out will have several datapoints that are far away from the mean.

Look at the two distributions below. Both of them have means of 10. The top distribution (A) however is much more clumped than the bottom distribution (B), which is more spread out.

[Figure: dot plots of distributions A and B, both with mean 10]

Let's look at these in more detail. We'll start with distribution A. We can calculate the difference of each datapoint from the mean (10) like this:

A <- c(5,8,8,9,9,10,10,11,12,12,12,14)
A - mean(A)
##  [1] -5 -2 -2 -1 -1  0  0  1  2  2  2  4

If we add up all of those differences from the mean, then they will equal 0. We can show this like this:

sum(A - mean(A))
## [1] 0

A way to count up all the differences from the mean and to make sure that they count is to make each number positive regardless of its sign. We can do this using abs() in R:

abs(A - mean(A))
##  [1] 5 2 2 1 1 0 0 1 2 2 2 4

When we sum all of these values up, we get the total of all the differences from the mean of each datapoint:

sum(abs(A - mean(A)))
## [1] 22

We see that this total is 22. In formula notation, this is $$\Sigma|x - \mu|$$. Here $$x$$ represents each datapoint, and $$\mu$$ represents the population mean.
$$| |$$ represents 'take the absolute value of', and $$\Sigma$$ means "sum up".

To get the "average deviation" we simply divide our sum of difference scores by the number of datapoints, which is 12 in this case. The formula for average deviation is:

$$AD = \frac{\Sigma|x - \mu|}{N}$$

22/12
## [1] 1.833333

Our average deviation is therefore 1.83. This can be interpreted as each datapoint being on average 1.83 units away from the mean of 10.

Another way to have got the $$N$$ would have been to use length(), which counts the number of datapoints:

sum(abs(A - mean(A))) / length(A)
## [1] 1.833333

We could do the same for distribution B. Calculate the sum of all the difference scores, and then divide by the $$N$$:

B <- c(2,3,4,6,8,10,11,12,13,15,17,19)

#difference scores
B - mean(B)
##  [1] -8 -7 -6 -4 -2  0  1  2  3  5  7  9
# absolute difference scores
abs(B - mean(B))
##  [1] 8 7 6 4 2 0 1 2 3 5 7 9
# sum of absolute difference scores
sum(abs(B - mean(B)))
## [1] 54
# average deviation
sum(abs(B - mean(B))) / length(B)
## [1] 4.5

Here, the total sum of differences from the mean is 54. The average deviation is 4.5. This value being higher than 1.83 shows that distribution B is more spread out than distribution A, which makes sense just looking at the dotplots of the data.
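As an aside, because dividing a sum by the number of values is exactly what mean() does, the whole average-deviation calculation can be collapsed into one line - this shorthand is ours, not the chapter's:

mean(abs(A - mean(A)))   # average deviation of A: 1.833333
mean(abs(B - mean(B)))   # average deviation of B: 4.5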
### 6.4.4 Standard Deviation

An alternative, and much more common, method of calculating the 'deviation' from the mean of the average datapoint is the standard deviation. This is very similar to the average deviation, but the method of making the difference scores positive is different. In average deviation, we just ignore the sign of the difference scores and make everything positive (this is called taking the absolute value). In standard deviation, the method used to make these difference scores positive is to square them.

Let's look at how this works for our two distributions. We'll start with distribution A again.

The first step, again, is to get the difference scores, by taking each datapoint away from the mean of the distribution:

A - mean(A)
##  [1] -5 -2 -2 -1 -1  0  0  1  2  2  2  4

Next, we square these difference scores to get positive values:

(A - mean(A))^2
##  [1] 25  4  4  1  1  0  0  1  4  4  4 16

Notice that the datapoints that are furthest from the mean get proportionally larger than values that are close to the mean. Squaring has this effect.

We need to sum these "squared differences" to get a measure of how much deviation there is in total - this figure can also be called the Sum of Squares or Sum of Squared Differences:

sum((A - mean(A))^2)
## [1] 64

The total of the squared differences is 64. The notation for this is:

$$\Sigma(x-\mu)^2$$

To get a sense of the average squared difference, we then divide the total of the squared differences by our $$N$$:

sum((A - mean(A))^2) / 12
## [1] 5.333333

The average squared difference is 5.33. The notation for this is $$\frac{\Sigma(x-\mu)^2}{N}$$.

This is a useful measure of deviation, but unfortunately it is still in the units of "squared differences". To get it back to the original units of the distribution we just square root it, and we call this the "standard deviation":

sqrt(5.333333) 
## [1] 2.309401

The standard deviation $$\sigma=2.309$$. The notation for this is:

$$\sigma = \sqrt{\frac{\Sigma(x-\mu)^2}{N}}$$

We are using $$\sigma$$ to represent the population standard deviation.

We can do the same thing for our population B. Let's calculate the difference scores, then square them, then add them up, then divide by $$N$$, and finally square root:

# difference scores
B - mean(B)
##  [1] -8 -7 -6 -4 -2  0  1  2  3  5  7  9
# squared difference scores
(B - mean(B))^2
##  [1] 64 49 36 16  4  0  1  4  9 25 49 81
# Sum of squared differences
sum((B - mean(B))^2)
## [1] 338
# Average squared difference
sum((B - mean(B))^2) / 12
## [1] 28.16667
# Standard Deviation
sqrt(sum((B - mean(B))^2) / 12)
## [1] 5.307228

The population standard deviation $$\sigma$$ for population B is 5.31. Again, as this value is higher than 2.31, this suggests that population B is more spread out than population A, because its datapoints are on average further from the mean.

### 6.4.5 Variance

Variance is related to standard deviation. In fact, it is just the standard deviation squared. It can be calculated by the formula:

$$\sigma^2 = \sigma^2$$

which is a bit ridiculous. Variance is denoted by $$\sigma^2$$ and is calculated by squaring the standard deviation $$\sigma$$.

It's actually the value you get before you do the square root step when calculating standard deviation. Therefore, we can actually say:

$$\sigma^2 = \frac{\Sigma(x-\mu)^2}{N}$$
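R also has a built-in var() function. Be aware that, like sd(), it uses the sample formula (dividing by $$n-1$$, introduced below) rather than the population formula above - but the squaring relationship is easy to verify. A quick check of ours:

sd(A)^2    # 5.818182 - squaring the standard deviation...
var(A)     # 5.818182 - ...gives exactly the variance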
### 6.4.6 Average versus Standard Deviation

So why do we have two methods for calculating the deviation from the mean - the "average deviation" and the "standard deviation"? One thing you should notice is that the standard deviation is larger than the average deviation. Distribution A had an average deviation of 1.83 and a standard deviation of 2.31. Distribution B had an average deviation of 4.5 and a standard deviation of 5.31. The reason for this is that squaring difference scores leads to larger values than just taking absolute values. So why do we do the squaring thing? The main reason is that it emphasizes datapoints that are further away from the mean, and this can be an important aspect of spread that we need to take account of. Because of that, the 'standard deviation' is favored over the 'average deviation'.

### 6.4.7 Sample Standard Deviation

Something that is often confusing in introductory statistics is that there are two different formulas for calculating the standard deviation. The one we have already introduced above is called the population standard deviation and its formula is:

$$\sigma = \sqrt{\frac{\Sigma(x-\mu)^2}{N}}$$

But we use a different formula when we are calculating the standard deviation for a sample. This is called the sample standard deviation:

$$s = \sqrt{\frac{\Sigma(x-\overline{x})^2}{n-1}}$$

Notice two things. First, we use the notation $$s$$ to indicate a sample standard deviation, and the sample mean $$\overline{x}$$ takes the place of $$\mu$$. Second, instead of dividing by $$N$$ in the formula, we divide by $$n-1$$.

So, for our example data distribution A, this is how we would calculate $$s$$:

First, we get the difference scores, by subtracting the mean of the distribution from each score:

#difference scores
A - mean(A)
##  [1] -5 -2 -2 -1 -1  0  0  1  2  2  2  4

Second, we square these difference scores to make them positive and to emphasize larger difference scores:

#square the difference scores
(A - mean(A))^2
##  [1] 25  4  4  1  1  0  0  1  4  4  4 16

Third, we sum up all the squared difference scores:

#sum the squared difference scores
sum((A - mean(A))^2)
## [1] 64

Fourth, we divide this sum by $$n-1$$, which technically gives us the sample variance:

#divide by n-1 to get the variance
# the 'n' is 12 here
(sum((A - mean(A))^2))/(12-1)
## [1] 5.818182

Finally, we square root this value to get the sample standard deviation - a measure of the typical deviation of each datapoint from the mean:

#square root to get the SD
sqrt((sum((A - mean(A))^2))/(12-1))
## [1] 2.412091

Here we have manually calculated the sample standard deviation $$s=2.412$$. Earlier in this chapter we calculated the population standard deviation of this same distribution to be $$\sigma=2.309$$. Notice that the sample standard deviation $$s$$ is larger than the population standard deviation $$\sigma$$. This is because we divide by $$n-1$$, which is always smaller than $$n$$, inflating the final result.

So far, we haven't shown you the shortcut for calculating the standard deviation in R. It's actually just the function sd():

sd(A)
## [1] 2.412091

Hopefully you notice that the output of sd() is the sample standard deviation and not the population standard deviation.

There is actually no built-in function for calculating the population standard deviation $$\sigma$$ in R. The below code is a custom function that we made to calculate it. It's called pop.sd().

# this is a custom function to calculate
# the population standard deviation
pop.sd <- function(s) {
sqrt(sum((s - mean(s))^2)/length(s))
} 

When we look at the population standard deviation of A, we can see that it matches what we worked out by hand earlier:

pop.sd(A) 
## [1] 2.309401

Let's look at distribution B for its sample and population standard deviations:

sd(B)
## [1] 5.543219
pop.sd(B)
## [1] 5.307228

Again you can see that $$s=5.543$$ is greater than $$\sigma=5.307$$. Both these values are higher than the standard deviations for distribution A, indicating that distribution B is more spread out and less clumped than distribution A.
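Because both formulas share the same sum of squared differences, you can also convert one into the other with a scaling factor rather than recomputing from scratch. A small check of ours:

sd(A) * sqrt((length(A) - 1) / length(A))   # 2.309401 - rescaling s recovers pop.sd(A)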
Because $$\\overline{x}$$ will be usually slightly different from $$\\mu$$ we will be usually underestimating the true deviation from the mean in the population.\n\nThe bottom line is this: using the population SD formula for a sample generally gives an underestimate of the true population standard deviation $$\\sigma$$. The solution is to use a fudge-factor of dividing by $$n-1$$ which bumps up the standard deviation. This is what we do in the sample standard deviation formula.\n\nIn the sections below, we are going to visually demonstrate this. Hopefully this helps to show you that dividing by $$n-1$$ works. Don’t worry too much about any code here, the aim isn’t for you to learn how to run simulations such as these, but we want you to be able to visually see what’s going on.\n\n#### 6.4.8.1 Comparing population and sample means\n\nBefore we get to why we need a separate formula for the sample standard deviation, let’s show you why we don’t need a separate formula for the sample mean compared to the population mean. Both of these formulas are essentially the same:\n\nSample mean: $$\\Large\\overline{x} = \\frac{\\Sigma{x}}{n}$$\n\nPopulation mean:\n$$\\Large \\mu = \\frac{\\Sigma{x}}{N}$$\n\nLet’s assume the following data distribution is our population, we’ll call it pop. The following code creates a population of 10000 numbers drawn from a random normal distribution (see section 7.0.3) with a population mean of 8 and population standard deviation of 2. Because we’re randomly drawing numbers to make our population, the final population won’t have a mean and standard deviation that are precisely 8 and 2, but we can calculate what they turn out to be:\n\nset.seed(1) # just so we all get the same results\n\npop <- rnorm(10000, mean = 8, sd = 2) #100 random numbers with mean of 8, popSD of 2.\n\nWe now have our population of size $$N=10000$$. We can precisely calculate the population mean $$\\mu$$ and population standard deviation $$\\sigma$$ of our 10000 numbers using mean() and pop.sd():\n\nmean(pop) \n## 7.986926\npop.sd(pop) \n## 2.024612\n\nSo our population has a mean $$\\mu=7.99$$ and population standard deviation $$\\sigma=2.02$$.\n\nLet’s now start taking samples. We’ll just choose samples of size $$n=10$$. We can get samples using sample() in R. Let’s look at the sample mean of each sample:\n\n#first sample\nsamp1 <- sample(pop, size = 10, replace = T)\nsamp1\n## 3.722781 8.785220 6.045566 11.705993 7.297500 7.399121 11.976971\n## 8.401657 6.306361 7.152555\nmean(samp1)\n## 7.879373\n\nHere our sample mean $$\\overline{x}=8.62$$ which is close-ish, but a fair bit above $$\\mu=7.99$$.\n\nLet’s do it again:\n\n#second sample\nsamp2 <- sample(pop, size = 10, replace = T)\nsamp2\n## 6.142884 7.614448 9.575279 9.072075 4.108463 10.599279 8.224608\n## 6.735289 7.004465 7.791237\nmean(samp2)\n## 7.686803\n\nAgain our value of $$\\overline{x}=8.10$$ is above $$\\mu=7.99$$, but this time much closer.\n\nLet’s do a third sample:\n\n#third sample\nsamp3 <- sample(pop, size = 10, replace = T)\nsamp3\n## 8.352818 12.802444 10.304643 6.061141 8.633719 8.028558 7.350359\n## 5.904344 6.875784 11.512439\nmean(samp3)\n## 8.582625\n\nThis time our value of $$\\overline{x}=7.86$$ is a bit below $$\\mu=7.99$$.\n\nWhat if we did this thousands and thousands of times? 
Would our sample mean be more often lower or higher than the population mean $$\mu=7.99$$?

This is what the code below is doing - it's effectively grabbing a sample of size 10 and then calculating the sample mean, but it's doing this 20,000 times. It's storing all the sample means in an object called results.means.

Note: you don't need to know how this code works, though do reach out if you are interested!

results.means <- vector('list',20000)

for(i in 1:20000){
samp <- sample(pop, size = 10, replace = T)
results.means[[i]] <- mean(samp)
}

Let's look at 10 of these sample means we just collected:

unlist(results.means)[1:10]
##  [1] 7.457507 7.804544 8.099342 7.886644 7.642569 8.228794 8.581173 7.417380
##  [9] 7.098994 7.458659

Some are above and some are below $$\mu=7.99$$.

Let's calculate the mean of all the 20,000 sample means:

mean(unlist(results.means))
## [1] 7.990952

It turns out that the average sample mean that we collect using the formula $$\Large \overline{x} = \frac{\Sigma{x}}{n}$$ is 7.99, which is the same as the population mean $$\mu$$. What this means is that this formula is perfectly fine to use to estimate the population mean. It is what we call an unbiased estimator. Over the long run, it gives us a very good estimate of the population mean $$\mu$$. Here is a histogram of our sample means from our 20,000 samples:

[Figure: histogram of the 20,000 sample means]

The vertical solid black line represents $$\mu=7.99$$. This histogram is centered on this value, showing that our sample mean formula is unbiased in estimating the population mean - overall, it isn't under- or over-estimating the population mean.

As a side note - what we just did in the exercise above was to calculate a sampling distribution of sample means - something we'll discuss much more in section 7.2.
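For what it's worth, the same simulation can be written without an explicit loop; R's replicate() repeats an expression and collects the results into a vector. This is just an alternative idiom we're noting, not the chapter's code:

# 20,000 sample means of samples of size 10, no for-loop needed
sample_means <- replicate(20000, mean(sample(pop, size = 10, replace = TRUE)))
mean(sample_means)   # again lands very close to the population mean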
#### 6.4.8.2 Sample standard deviation as an unbiased estimator

Let's do something similar with our two formulas for calculating standard deviation. We'll take samples of size $$n=10$$ and use the sample standard deviation $$s$$ and the population standard deviation $$\sigma$$ formulas to estimate the true $$\sigma=2.02$$.

#first sample
sd(samp1)
## [1] 2.510835
pop.sd(samp1)
## [1] 2.381987

With the first sample, both estimates happen to be higher than $$\sigma=2.02$$; here it is actually the population formula that lands a little closer.

#second sample
sd(samp2)
## [1] 1.850897
pop.sd(samp2)
## [1] 1.755915

With the second sample of 10, both estimates are lower than $$\sigma=2.02$$. This time, the sample standard deviation formula produces a result that is closer to 2.02 than does the population formula.

What if we did this for 20,000 samples of size 10? We'll save the estimates using the sample SD formula in the object results.samp.sd and the estimates using the population SD formula in results.pop.sd. Again, don't worry about the code here - just focus on the output:

results.samp.sd <- vector('list',20000)
results.pop.sd <- vector('list',20000)

for(i in 1:20000){
samp <- sample(pop, size = 10, replace = T)
results.samp.sd[[i]] <- sd(samp)
results.pop.sd[[i]] <- pop.sd(samp)
}

We can work out the average estimate of the standard deviation across all 20,000 samples:

mean(unlist(results.samp.sd))
## [1] 1.967842
mean(unlist(results.pop.sd))
## [1] 1.866859

So, over 20,000 samples both formulas actually underestimate the true population standard deviation of $$\sigma=2.02$$ overall; however, the sample standard deviation formula is closer, with its average being 1.97 compared to the population standard deviation formula's average of 1.87.

We can graph this like this:

[Figure: histograms of the 20,000 standard deviation estimates from each formula]

This visualization shows us a few things. First, over all 20,000 samples, some of our estimates of the true standard deviation $$\sigma$$ are higher and some are lower, regardless of which formula we use. However, when we use the population formula (dividing by $$N$$), we have far more samples with estimates of the standard deviation $$\sigma$$ which are too low. The distribution is clearly not symmetrical. If we consider the right histogram, when we use the sample SD formula (dividing by $$n-1$$), we largely correct this. This histogram is closer to symmetrical, and we are not underestimating the true population standard deviation nearly as much. In this way, we call the sample standard deviation $$s$$ an unbiased estimator.

If we were to take larger sample sizes, then our estimates of the population standard deviation $$\sigma$$ would get better and better when using the sample standard deviation formula.

## 6.5 Descriptive Statistics in R

The above sections wove some theory together with how to get descriptive information using R. In this section we'll summarize how to get descriptive summaries from real data in R.

The dataset that we'll use is a year's worth of temperature data from Austin, TX.

atx <- read_csv("data/austin_weather.csv")
head(atx) # first 6 rows
## # A tibble: 6 x 4
##   month   day  year  temp
##   <dbl> <dbl> <dbl> <dbl>
## 1     1     1  2019  43.3
## 2     1     2  2019  39.4
## 3     1     3  2019  41.2
## 4     1     4  2019  44.1
## 5     1     5  2019  48.6
## 6     1     6  2019  48.8

The temp column shows the average temperature for each day of the year in 2019. Here is a histogram showing the distribution. It is often hot in Texas.

ggplot(atx, aes(x= temp)) +
geom_histogram(color="black", fill="lightseagreen", binwidth = 2)+
theme_classic()+
xlab("Average temperature")

[Figure: histogram of daily average temperatures in Austin, 2019]

Basic Descriptives

Here is a list of some of the basic descriptive commands, such as calculating the $$n$$, the minimum, maximum and range. We apply each function to the whole column of data atx$temp, i.e. all the numbers of the distribution:

length(atx$temp) # length: this tells you the 'n'
## [1] 365
range(atx$temp) # range
## [1] 34.5 89.2
min(atx$temp) # minimum
## [1] 34.5
max(atx$temp) # maximum
## [1] 89.2

Mean, Median, and Mode

The mean and median are straightforward in R:

mean(atx$temp) # mean
## [1] 68.78767
median(atx$temp) # median
## [1] 70.8

For some descriptives, like the mode, there is not a function already built into R. One option is to use table() to get frequencies - but this isn't useful when you have relatively large datasets; the output is too large. Another option is to use tidyverse methods. Here, we use group_by() to get each temperature, then we use count() to count how many of each temperature we have, and then arrange() to determine which is most frequent:

atx %>%
group_by(temp) %>%
count() %>%
arrange(-n)
## # A tibble: 248 x 2
## # Groups:   temp [248]
##     temp     n
##    <dbl> <int>
##  1  84.8     5
##  2  51.6     4
##  3  75.5     4
##  4  83.7     4
##  5  84.2     4
##  6  84.3     4
##  7  84.9     4
##  8  86.1     4
##  9  42.3     3
## 10  52       3
## # ... with 238 more rows

This shows us that the modal value is 84.8F. In reality however, the mode is never something that you will calculate outside of an introductory stats class.
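If you find yourself wanting a reusable shortcut anyway, a one-off helper is easy to write. This is our own sketch (the name get_mode is made up), wrapping the table() idea from earlier:

# returns the most frequent value in a numeric vector
get_mode <- function(v) {
counts <- table(v)
as.numeric(names(counts)[which.max(counts)])
}

get_mode(atx$temp)   # 84.8, matching the tidyverse approach above

Note that which.max() returns the first maximum, so if there are ties this helper reports only one of the tied modes.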
Variation

The default standard deviation function in R is the sample standard deviation sd(), and it is the one you should pretty much always use:

sd(atx$temp) # sample standard deviation
## [1] 14.90662

Variance can also be calculated, using var() - remember this is the standard deviation squared. When you calculate this using the sample standard deviation $$s$$, the formula notation for the variance is $$s^2$$:

var(atx$temp) # variance
## [1] 222.2072

The lower quartile, upper quartile and inter-quartile range can be calculated like this:

quantile(atx$temp, .25) # this is the lower quartile
## 25% 
## 56.5 
quantile(atx$temp, .75) # this is the upper quartile
## 75% 
## 83.3 
IQR(atx$temp) # this is the inter-quartile range.
## [1] 26.8

Remember there are several ways of calculating the quartiles (see above).

### 6.5.1 Dealing with Missing Data

Often in datasets we have missing data. In R, missing data in our dataframes or vectors is represented by NA or sometimes NaN. A slightly annoying feature of many of the descriptive summary functions is that they do not work if there is missing data.

Here's an illustration. We've created a vector of data called q that has some numbers but also a 'missing' piece of data:

q <- c(5, 10, 8, 3, NA, 7, 1, 2)
q
## [1]  5 10  8  3 NA  7  1  2

If we try to calculate some descriptives, R will not like it:

mean(q)
## [1] NA
sd(q)
## [1] NA
range(q)
## [1] NA NA
median(q)
## [1] NA

What we have to do in these situations is to override the missing data. We need to tell R that we really do want to get these values and that it should remove the missing data before doing so. We do that by adding the argument na.rm=T to the end of each function:

mean(q, na.rm=T)
## [1] 5.142857
sd(q, na.rm=T)
## [1] 3.338092
range(q, na.rm=T)
## [1]  1 10
median(q, na.rm=T)
## [1] 5

Now R is happy to do what we want.

The only 'gotcha' that you need to watch out for is length(), which we sometimes use to calculate the $$n$$ of a vector. If we do this for q, we'll get 8, which includes our missing value:

length(q)
## [1] 8

This is a way of getting around that - it looks odd, so we've just put it here for reference. It's not necessary for you to remember this. It's essentially asking what the length of q is when you don't include the NA:

length(q[!is.na(q)])
## [1] 7
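An equivalent trick of ours that some find easier to read: because TRUE counts as 1 when summed, you can count the non-missing values directly:

sum(!is.na(q))   # 7 - counts the non-NA entries of q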
## 6.6 Descriptives for Datasets

Often in studies, we are interested in many different outcome variables at once. We are also interested in how groups differ in various descriptive statistics. The following code will show you how to get descriptive statistics for several columns. In the next section we'll discuss getting descriptives for different groups in the data.

First read in these data, which look at sales of different video games.

vg <- read_csv("data/videogames.csv")

head(vg)
## # A tibble: 6 x 12
##   name    platform  year genre publisher NA_sales EU_sales JP_sales global_sales
##   <chr>   <chr>    <dbl> <chr> <chr>        <dbl>    <dbl>    <dbl>        <dbl>
## 1 Wii Sp~ Wii       2006 Spor~ Nintendo     41.4     29.0      3.77         82.5
## 2 Mario ~ Wii       2008 Raci~ Nintendo     15.7     12.8      3.79         35.6
## 3 Wii Sp~ Wii       2009 Spor~ Nintendo     15.6     11.0      3.28         32.8
## 4 Wii Fit Wii       2007 Spor~ Nintendo      8.92     8.03     3.6          22.7
## 5 Wii Fi~ Wii       2009 Spor~ Nintendo      9.01     8.49     2.53         21.8
## 6 Grand ~ PS3       2013 Acti~ Take-Two~     7.02     9.14     0.98         21.1
## # ... with 3 more variables: critic <dbl>, user <dbl>, rating <chr>

One way to get quick summary information is to use the R function summary() like this:

summary(vg)
##      name             platform              year         genre          
##  Length:2502        Length:2502        Min.   :1992   Length:2502       
##  Class :character   Class :character   1st Qu.:2005   Class :character  
##  Mode  :character   Mode  :character   Median :2008   Mode  :character  
##                                        Mean   :2008                     
##                                        3rd Qu.:2011                     
##                                        Max.   :2016                     
##   publisher            NA_sales          EU_sales          JP_sales      
##  Length:2502        Min.   : 0.0000   Min.   : 0.0000   Min.   :0.00000  
##  Class :character   1st Qu.: 0.0700   1st Qu.: 0.0200   1st Qu.:0.00000  
##  Mode  :character   Median : 0.1800   Median : 0.1000   Median :0.00000  
##                     Mean   : 0.4852   Mean   : 0.3012   Mean   :0.04023  
##                     3rd Qu.: 0.4675   3rd Qu.: 0.2800   3rd Qu.:0.01000  
##                     Max.   :41.3600   Max.   :28.9600   Max.   :3.79000  
##   global_sales         critic           user          rating         
##  Min.   : 0.0100   Min.   :13.00   Min.   :0.700   Length:2502       
##  1st Qu.: 0.1500   1st Qu.:61.00   1st Qu.:6.200   Class :character  
##  Median : 0.3800   Median :72.00   Median :7.400   Mode  :character  
##  Mean   : 0.9463   Mean   :69.98   Mean   :7.027                     
##  3rd Qu.: 0.9275   3rd Qu.:81.00   3rd Qu.:8.100                     
##  Max.   :82.5400   Max.   :98.00   Max.   :9.300                     

You'll notice here that it just gives you some summary information for the different columns, even those that have no numerical data in them. It's also not broken down by groups. However, summary() can be a quick way to get some summary information.

A slightly better function is describe() in the psych package. Remember to install the psych package before using it. Also, here we are telling it only to provide summaries of the relevant numeric columns (which are the 6th through 11th columns):

library(psych)
describe(vg[c(6:11)])
##              vars    n  mean    sd median trimmed   mad   min   max range  skew
## NA_sales        1 2502  0.49  1.29   0.18    0.27  0.21  0.00 41.36 41.36 15.79
## EU_sales        2 2502  0.30  0.90   0.10    0.15  0.13  0.00 28.96 28.96 16.63
## JP_sales        3 2502  0.04  0.20   0.00    0.01  0.00  0.00  3.79  3.79 12.05
## global_sales    4 2502  0.95  2.56   0.38    0.53  0.42  0.01 82.54 82.53 16.52
## critic          5 2502 69.98 14.34  72.00   71.03 14.83 13.00 98.00 85.00 -0.67
## user            6 2502  7.03  1.44   7.40    7.19  1.33  0.70  9.30  8.60 -1.06
##              kurtosis   se
## NA_sales       422.23 0.03
## EU_sales       444.50 0.02
## JP_sales       191.56 0.00
## global_sales   441.74 0.05
## critic           0.07 0.29
## user             1.06 0.03

This function also includes some descriptives that we don't necessarily need to worry about right now, but it does contain most of the ones we are concerned with.
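A base-R alternative we sometimes use for a quick look (no extra package needed) is to apply a function across the numeric columns with sapply():

sapply(vg[c(6:11)], mean)   # column means for the six numeric columns
sapply(vg[c(6:11)], sd)     # and their sample standard deviations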
### 6.6.1 Descriptives for Groups

There are a few ways of getting descriptives for different groups. In our videogame dataset vg, we have a column called genre. We can use the function table() to get the $$n$$ for all groups.

table(vg$genre)
## 
##  Action  Racing Shooter  Sports 
##     997     349     583     573

We have four different groups of genres, and we might want to get descriptives for each. We can use the function describeBy() from the psych package to get a very quick and easy, if somewhat cluttered, look at group summaries. It also ignores missing data, which is helpful. We dictate which variable to group the summaries by using the group = "genre" argument:

describeBy(vg[c(4,6:11)], group = "genre")
## 
##  Descriptive statistics by group 
## genre: Action
##              vars   n  mean    sd median trimmed   mad  min   max range  skew
## genre*          1 997  1.00  0.00   1.00    1.00  0.00 1.00  1.00  0.00   NaN
## NA_sales        2 997  0.41  0.82   0.17    0.24  0.19 0.00  9.66  9.66  6.19
## EU_sales        3 997  0.27  0.58   0.10    0.15  0.13 0.00  9.14  9.14  7.14
## JP_sales        4 997  0.05  0.14   0.00    0.01  0.00 0.00  1.13  1.13  4.44
## global_sales    5 997  0.83  1.67   0.36    0.50  0.39 0.01 21.12 21.11  6.66
## critic          6 997 68.02 14.21  70.00   68.63 14.83 20.00 98.00 78.00 -0.38
## user            7 997  7.12  1.34   7.40    7.26  1.19  1.70  9.30  7.60 -1.04
##              kurtosis   se
## genre*            NaN 0.00
## NA_sales        52.60 0.03
## EU_sales        77.32 0.02
## JP_sales        22.38 0.00
## global_sales    61.52 0.05
## critic          -0.31 0.45
## user             1.05 0.04
## ------------------------------------------------------------ 
## genre: Racing
##              vars   n  mean    sd median trimmed   mad  min   max range  skew
## genre*          1 349  1.00  0.00   1.00    1.00  0.00 1.00  1.00  0.00   NaN
## NA_sales        2 349  0.40  1.03   0.13    0.22  0.18 0.00 15.68 15.68 10.36
## EU_sales        3 349  0.31  0.85   0.10    0.17  0.13 0.00 12.80 12.80 10.23
## JP_sales        4 349  0.03  0.24   0.00    0.00  0.00 0.00  3.79  3.79 12.65
## global_sales    5 349  0.86  2.36   0.30    0.48  0.36 0.01 35.57 35.56 10.34
## critic          6 349 69.84 14.01  72.00   71.18 14.83 13.00 95.00 82.00 -0.92
## user            7 349  6.99  1.50   7.30    7.16  1.48  1.00  9.20  8.20 -1.05
##              kurtosis   se
## genre*            NaN 0.00
## NA_sales       140.24 0.06
## EU_sales       135.03 0.05
## JP_sales       180.01 0.01
## global_sales   136.42 0.13
## critic           0.87 0.75
## user             1.06 0.08
## ------------------------------------------------------------ 
## genre: Shooter
##              vars   n  mean    sd median trimmed   mad  min   max range  skew
## genre*          1 583  1.00  0.00   1.00    1.00  0.00 1.00  1.00  0.00   NaN
## NA_sales        2 583  0.56  1.22   0.16    0.27  0.21 0.00  9.73  9.73  4.41
## EU_sales        3 583  0.33  0.67   0.10    0.18  0.13 0.00  5.73  5.73  4.46
## JP_sales        4 583  0.02  0.07   0.00    0.01  0.00 0.00  0.88  0.88  6.43
## global_sales    5 583  1.02  2.09   0.35    0.54  0.43 0.01 14.77 14.76  4.20
## critic          6 583 70.49 15.12  73.00   71.71 14.83 27.00 96.00 69.00 -0.68
## user            7 583  6.95  1.54   7.30    7.14  1.33  1.20  9.30  8.10 -1.14
##              kurtosis   se
## genre*            NaN 0.00
## NA_sales        22.41 0.05
## EU_sales        24.55 0.03
## JP_sales        52.31 0.00
## global_sales    19.81 0.09
## critic          -0.18 0.63
## user             1.08 0.06
## ------------------------------------------------------------ 
## genre: Sports
##              vars   n  mean    sd median trimmed   mad  min   max range  skew
## genre*          1 573  1.00  0.00   1.00    1.00  0.00 1.00  1.00  0.00   NaN
## NA_sales        2 573  0.60  1.99   0.27    0.35  0.28 0.00 41.36 41.36 16.15
## EU_sales        3 573  0.33  1.44   0.08    0.13  0.12 0.00 28.96 28.96 15.26
## JP_sales        4 573  0.05  0.30   0.00    0.00  0.00 0.00  3.77  3.77  9.54
## global_sales    5 573  1.11  4.00   0.50    0.64  0.50 0.01 82.54 82.53 16.17
## critic          6 573 72.95 13.44  76.00   74.42 10.38 19.00 97.00 78.00 -1.10
## user            7 573  6.97  1.46   7.30    7.12  1.33  0.70  9.20  8.50 -0.92
##              kurtosis   se
## genre*            NaN 0.00
## NA_sales       312.92 0.08
## EU_sales       282.22 0.06
## JP_sales       100.35 0.01
## global_sales   307.46 0.17
## critic           1.26 0.56
## user             0.68 0.06
The above is a quick and dirty way of getting summary information by group, but it is messy. We suggest an alternative method, which is to write code using the tidyverse package. This can give us descriptive statistics in a more organized way. For instance, if we wanted to get the mean of the column NA_sales by genre, we would use group_by() and summarise() in this way:

vg %>%
group_by(genre) %>%
summarise(meanNA = mean(NA_sales))
## # A tibble: 4 x 2
##   genre   meanNA
##   <chr>    <dbl>
## 1 Action   0.407
## 2 Racing   0.397
## 3 Shooter  0.555
## 4 Sports   0.603

The above code can be read as taking the dataset vg, then grouping it by the column genre, and then summarizing the data to get the mean of the NA_sales column by group/genre. Please note the British spelling of summarise(). The tidyverse was originally written using British spelling, and although R is usually fine with British or US spelling, this is one situation in which it is usually helpful to stick with the British spelling, for boring reasons.

If you had missing data, you'd do it like this:

vg %>%
group_by(genre) %>%
summarise(meanNA = mean(NA_sales, na.rm = T))
## # A tibble: 4 x 2
##   genre   meanNA
##   <chr>    <dbl>
## 1 Action   0.407
## 2 Racing   0.397
## 3 Shooter  0.555
## 4 Sports   0.603

You can do several summaries at once, like this. Here we are getting the means and sample standard deviations of the NA_sales and EU_sales columns by genre:

vg %>%
group_by(genre) %>%
summarise(meanNA = mean(NA_sales),
sd_NA = sd(NA_sales),
meanEU = mean(EU_sales),
sd_EU = sd(EU_sales))
## # A tibble: 4 x 5
##   genre   meanNA sd_NA meanEU sd_EU
##   <chr>    <dbl> <dbl>  <dbl> <dbl>
## 1 Action   0.407 0.819  0.267 0.577
## 2 Racing   0.397 1.03   0.309 0.852
## 3 Shooter  0.555 1.22   0.331 0.667
## 4 Sports   0.603 1.99   0.326 1.44

To save time, you can tell it to just get the summary of all numeric columns by using summarise_if():

vg$year <- as.factor(vg$year) # just need to make year non-numeric first so it doesn't get included in the numeric columns

vg %>%
group_by(genre) %>%
summarise_if(is.numeric, mean, na.rm = T) %>%
as.data.frame()
##     genre  NA_sales  EU_sales   JP_sales global_sales   critic     user
## 1  Action 0.4071013 0.2668004 0.04857573    0.8349649 68.01605 7.117954
## 2  Racing 0.3967908 0.3085100 0.03246418    0.8630946 69.84241 6.991691
## 3 Shooter 0.5554717 0.3310292 0.02288165    1.0245969 70.48714 6.948885
## 4  Sports 0.6034031 0.3260733 0.04808028    1.1108028 72.94764 6.970157

vg %>%
group_by(genre) %>%
summarise_if(is.numeric, sd, na.rm = TRUE) %>%
as.data.frame()
##     genre  NA_sales  EU_sales   JP_sales global_sales   critic     user
## 1  Action 0.8189867 0.5772930 0.14425469     1.670552 14.20560 1.339989
## 2  Racing 1.0315188 0.8522769 0.24017403     2.363348 14.00640 1.497225
## 3 Shooter 1.2165479 0.6674269 0.07343275     2.087551 15.11614 1.540249
## 4  Sports 1.9868151 1.4352370 0.30421425     3.996394 13.44039 1.463734
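As a side note from us: in more recent versions of dplyr (1.0 and later), summarise_if() has been superseded by across(). If your installation is new enough, the same summary can be written like this:

vg %>%
group_by(genre) %>%
summarise(across(where(is.numeric), ~ mean(.x, na.rm = TRUE)))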
### 6.6.2 Counts by Group

Another common use of group_by() is to get counts of how many we have of each categorical variable. For instance, let's look more at the videogames dataset vg. We have previously seen that we can use table() to count simple frequencies. For instance, the following:

table(vg$genre)
##
##  Action  Racing Shooter  Sports
##     997     349     583     573

shows us how many observations of each genre we have. We have 997 Action games, 349 Racing games, 583 Shooter games and 573 Sports games.

We can look at how these break down by platform by adding one more argument into our table() function, which relates to our second column of interest:

table(vg$genre, vg$platform)
##
##            PC PS2 PS3 Wii X360
##   Action  144 247 237 127  242
##   Racing   48 135  63  30   73
##   Shooter 144 128 123  30  158
##   Sports   32 203 116  79  143

We can see here that we have 32 Sports games on the PC, 135 Racing games on the PS2, and so on.

This is a nice and straightforward way of doing this. It's also possible to do it using the tidyverse, which can come in handy in some circumstances. To do it this way, we make use of group_by() and count(). We tell it the two columns we wish to group our data by (in this case the genre and platform columns), and then tell it to count how many observations we have:

vg %>%
group_by(genre, platform) %>%
count()
## # A tibble: 20 x 3
## # Groups:   genre, platform [20]
##    genre   platform     n
##    <chr>   <chr>    <int>
##  1 Action  PC         144
##  2 Action  PS2        247
##  3 Action  PS3        237
##  4 Action  Wii        127
##  5 Action  X360       242
##  6 Racing  PC          48
##  7 Racing  PS2        135
##  8 Racing  PS3         63
##  9 Racing  Wii         30
## 10 Racing  X360        73
## 11 Shooter PC         144
## 12 Shooter PS2        128
## 13 Shooter PS3        123
## 14 Shooter Wii         30
## 15 Shooter X360       158
## 16 Sports  PC          32
## 17 Sports  PS2        203
## 18 Sports  PS3        116
## 19 Sports  Wii         79
## 20 Sports  X360       143

These data are presented in a slightly different way. The count of each combination is shown in the new n column. The nice thing about this tidy approach is that we can further manipulate the data. This is better illustrated with an even busier dataset, the catcolor.csv dataset:

cats <- read_csv("data/catcolor.csv")
head(cats)
## # A tibble: 6 x 7
##   animal_id monthyear           name       color1 color2 sex    breed
##   <chr>     <dttm>              <chr>      <chr>  <chr>  <chr>  <chr>
## 1 A685067   2014-08-14 18:45:00 Lucy       blue   white  Female domestic shorth~
## 2 A678580   2014-06-29 17:45:00 Frida      white  black  Female domestic shorth~
## 3 A675405   2014-03-28 14:55:00 Stella Lu~ black  white  Female domestic medium~
## 4 A684460   2014-08-13 15:04:00 Elsa       brown  <NA>   Female domestic shorth~
## 5 A686497   2014-08-31 15:45:00 Chester    black  <NA>   Male   domestic shorth~
## 6 A687965   2014-10-31 18:29:00 Oliver     orange <NA>   Male   domestic shorth~

Say we want to know how many male and female cats of each breed we have. With the tidyverse, we would do it like this:

cats %>%
group_by(breed,sex) %>%
count()
## # A tibble: 87 x 3
## # Groups:   breed, sex [87]
##    breed                   sex        n
##    <chr>                   <chr>  <int>
##  1 abyssinian              Female     2
##  2 abyssinian              Male       1
##  3 american curl shorthair Female     4
##  4 american curl shorthair Male       1
##  5 american shorthair      Female    28
##  6 american shorthair      Male      44
##  7 angora                  Female     4
##  8 angora                  Male       2
##  9 angora/persian          Male       1
## 10 balinese                Female     3
## # ... with 77 more rows

This gives us a lot of information. In fact, we have 87 rows of data. However, we could next sort by the newly created n column to see which sex/breed combination we have the highest count of.
We can use arrange() to do this:

cats %>%
  group_by(breed, sex) %>%
  count() %>%
  arrange(-n)

## # A tibble: 87 x 3
## # Groups:   breed, sex [87]
##    breed               sex        n
##    <chr>               <chr>  <int>
##  1 domestic shorthair  Male    6303
##  2 domestic shorthair  Female  4757
##  3 domestic mediumhair Male     702
##  4 domestic mediumhair Female   512
##  5 domestic longhair   Male     381
##  6 domestic longhair   Female   328
##  7 siamese             Male     214
##  8 siamese             Female   135
##  9 maine coon          Male      54
## 10 snowshoe            Male      45
## # ... with 77 more rows

Another thing we can do is to count how many there are of a given category or categories that satisfy certain conditions. For example, let's say we wanted to know the most popular name of each breed for orange cats. We could first filter the data by color1 to only keep orange cats, then group by name and breed, and then use count() and arrange():

cats %>%
  filter(color1 == "orange") %>%
  group_by(name, breed) %>%
  count() %>%
  arrange(-n)

## # A tibble: 1,304 x 3
## # Groups:   name, breed [1,304]
##    name     breed                  n
##    <chr>    <chr>              <int>
##  1 Oliver   domestic shorthair    16
##  2 Oscar    domestic shorthair    12
##  3 Ginger   domestic shorthair    11
##  4 Sam      domestic shorthair    11
##  5 Garfield domestic shorthair    10
##  6 Simba    domestic shorthair    10
##  7 Tiger    domestic shorthair    10
##  8 Toby     domestic shorthair    10
##  9 Charlie  domestic shorthair     9
## 10 Milo     domestic shorthair     9
## # ... with 1,294 more rows

It turns out that the most popular names overall for orange cats belong to domestic shorthairs: Oliver, then Oscar, Ginger, Sam, Garfield, and so on.

To get exactly what we asked for above (the most popular name for each breed), we can do something a bit different. After we have done all the above, we can tell the chain that we only want to group by breed this time, and that we want to keep the highest value with top_n(1). This returns the following:

cats %>%
  filter(color1 == "orange") %>%
  group_by(name, breed) %>%
  count() %>%
  arrange(-n) %>%
  group_by(breed) %>%
  top_n(1)

## # A tibble: 23 x 3
## # Groups:   breed [23]
##    name    breed                   n
##    <chr>   <chr>               <int>
##  1 Oliver  domestic shorthair     16
##  2 Boris   domestic mediumhair     5
##  3 Pumpkin domestic mediumhair     5
##  4 Charlie domestic longhair       3
##  5 Gilbert domestic longhair       3
##  6 Pumpkin domestic longhair       3
##  7 Amos    american shorthair      2
##  8 Roxy    cymric                  2
##  9 Alise   manx                    1
## 10 Ami     british shorthair       1
## # ... with 13 more rows

Charlie, Gilbert, and Pumpkin are all tied as the most common names for orange domestic longhairs!
[ null, "https://bookdown.org/content/50286a34-7e39-4500-8dd2-62bf686c1710/img/des1.png", null, "https://bookdown.org/content/50286a34-7e39-4500-8dd2-62bf686c1710/img/des2.png", null, "https://bookdown.org/content/50286a34-7e39-4500-8dd2-62bf686c1710/img/des3.png", null, "https://bookdown.org/content/50286a34-7e39-4500-8dd2-62bf686c1710/img/des4.png", null, "https://bookdown.org/content/50286a34-7e39-4500-8dd2-62bf686c1710/img/des5.png", null, "https://bookdown.org/content/50286a34-7e39-4500-8dd2-62bf686c1710/img/des6.png", null, "https://bookdown.org/content/50286a34-7e39-4500-8dd2-62bf686c1710/img/des7.png", null, "https://bookdown.org/content/50286a34-7e39-4500-8dd2-62bf686c1710/img/des8.png", null, "https://bookdown.org/content/50286a34-7e39-4500-8dd2-62bf686c1710/_main_files/figure-html/unnamed-chunk-327-1.png", null, "https://bookdown.org/content/50286a34-7e39-4500-8dd2-62bf686c1710/_main_files/figure-html/unnamed-chunk-332-1.png", null, "https://bookdown.org/content/50286a34-7e39-4500-8dd2-62bf686c1710/_main_files/figure-html/unnamed-chunk-334-1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82878697,"math_prob":0.9828887,"size":51117,"snap":"2022-05-2022-21","text_gpt3_token_len":15589,"char_repetition_ratio":0.15706377,"word_repetition_ratio":0.06743567,"special_character_ratio":0.35477436,"punctuation_ratio":0.1429961,"nsfw_num_words":6,"has_unicode_error":false,"math_prob_llama3":0.9957067,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-22T11:14:23Z\",\"WARC-Record-ID\":\"<urn:uuid:fc2b9894-06cd-4fa6-8b9c-2f1b57d570a0>\",\"Content-Length\":\"164135\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fbe3147f-583a-428f-bf3c-b0058106ed94>\",\"WARC-Concurrent-To\":\"<urn:uuid:f041f0eb-d633-43c8-bed3-84114189559a>\",\"WARC-IP-Address\":\"54.210.120.159\",\"WARC-Target-URI\":\"https://bookdown.org/content/50286a34-7e39-4500-8dd2-62bf686c1710/descriptives.html\",\"WARC-Payload-Digest\":\"sha1:K265FU3CQ4UJ47FIYYYXW77534RBTYTX\",\"WARC-Block-Digest\":\"sha1:DZ5JVG4JBDEMJSC3ZXA3EITAUYFXAIMF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662545326.51_warc_CC-MAIN-20220522094818-20220522124818-00747.warc.gz\"}"}
https://de.mathworks.com/matlabcentral/cody/problems/2560-expand-intervals-vol-2/solutions/2737720
[ "Cody\n\n# Problem 2560. expand intervals vol.2\n\nSolution 2737720\n\nSubmitted on 24 Jul 2020 by Rafael S.T. Vieira\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nbounds = [1 5 3 9 24 32]; elements = [1 2 3 4 5 6 7 8 9 24 25 26 27 28 29 30 31 32]; assert(isequal(ExpandIntervals(bounds),elements))\n\n2   Pass\nbounds = [11 11 9 9]; elements = [9 11]; assert(isequal(ExpandIntervals(bounds),elements))\n\n3   Pass\nbounds = [200 400 100 300]; elements = [100:400]; assert(isequal(ExpandIntervals(bounds),elements))\n\n4   Pass\ntemp = [-10:9; -9:10]; bounds = temp(:)'; elements = -10:10; assert(isequal(ExpandIntervals(bounds),elements))\n\n5   Pass\nbounds = [-10 10]; elements = -10:10; assert(isequal(ExpandIntervals(bounds),elements))" ]
https://www.frontiersin.org/articles/10.3389/fmech.2017.00016/full
[ "World-class research. Ultimate impact.\nMore on impact ›\n\n# Frontiers in Mechanical Engineering", null, "## Original Research ARTICLE\n\nFront. Mech. Eng., 21 November 2017 | https://doi.org/10.3389/fmech.2017.00016\n\n# Heat Transfer Characteristics during Boiling of Immiscible Liquids Flowing in Narrow Rectangular Heated Channels", null, "Yasuhisa Shinmoto,", null, "Daijiro Yamamoto,", null, "Daisuke Fujii and", null, "Haruhiko Ohta*\n• Department of Aeronautics and Astronautics, Kyushu University, Nishi-ku, Fukuoka, Japan\n\nThe use of immiscible liquids for cooling of surfaces with high heat generation density is proposed based on the experimental verification of its superior cooling characteristics in fundamental systems of pool boiling and flow boiling in a tube. For the purpose of practical applications, however, heat transfer characteristics due to flow boiling in narrow rectangular channels with different small gap sizes need to be investigated. The immiscible liquids employed here are FC72 and water, and the gap size is varied as 2, 1, and 0.5 mm between parallel rectangular plates of 30 mm × 175 mm, where one plate is heated. To evaluate the effect of gap size, the heat transfer characteristics are compared at the same inlet velocity. The generation of large flattened bubbles in a narrow gap results in two opposite trends of the heat transfer enhancement due to thin liquid film evaporation and of the deterioration due to the extension of dry patch in the liquid film. The situation is the same as that observed for pure liquids. The latter negative effect is emphasized for extremely small gap sizes if the flow rate ratio of more-volatile liquid to the total is not reduced. The addition of small flow rate of less-volatile liquid can increase the critical heat flux (CHF) of pure more-volatile liquid, while the surface temperature increases at the same time and assume the values between those for more-volatile and less-volatile liquids. By the selection of small flow rate ratio of more-volatile liquid, the surface temperature of pure less-volatile liquid can be decreased without reducing high CHF inherent in the less-volatile liquid employed. The trend of heat transfer characteristics for flow boiling of immiscible mixtures in narrow channels is more sensitive to the composition compared to the flow boiling in a round tube.\n\n## Introduction\n\nNon-azeotropic miscible mixtures have been widely used after the regulation in the production of Freon for the reason of ozone layer destruction. The mixing of working fluids realizes the desired vapor-pressure curve similar to that of a discontinued coolant by the adjustment of mixture concentration. However, from the viewpoint of boiling heat transfer, there is no advantage at all except the increase of critical heat flux (CHF) under limited conditions. It is well known that the heat transfer deterioration occurs due to the existence of mass transfer resistance inherent in the non-azeotropic mixtures. Because of the preferential evaporation of more-volatile liquid, the evaporative interfacial temperature increases from the equilibrium temperature at the bulk concentration. 
The resulting reduction of the effective temperature difference between the heating surface and the increased substantial saturation temperature reduces the heat transfer rate at a given nominal temperature difference between the heating surface and the liquid at the bulk concentration.

On the other hand, experiments on heat pipes using non-azeotropic miscible mixtures were performed concentrating on the Marangoni effect, which was expected for dilute aqueous solutions of alcohol (Abe, 2005). The surface tension is a function of concentration and temperature. In general, the effect of the concentration gradient on the Marangoni force is much larger than that of the temperature gradient. The value of surface tension decreases with increasing temperature in most cases. However, for alcohols with a high carbon number, the surface tension increases with increasing temperature (Vochten and Petre, 2005). In such a case, both the concentration and temperature gradients increase the surface tension toward the three-phase interline along the surface of the thin liquid film underneath bubbles, provided that the mixture is positive, i.e., the more-volatile liquid has the smaller surface tension. By such a characteristic of the mixture in a heat pipe, the limitation of the heat transfer rate due to dryout at the evaporating section was raised (Abe, 2005). They refer to the observed Marangoni effect as "self-rewetting."

The Marangoni effect on boiling heat transfer had already been investigated in existing experiments using a heated wire. A significant increase of CHF was confirmed in the presence of the Marangoni effect (Van Stralen, 1956). The increase of CHF seemed to be due to the decrease of the dry patch areas extended underneath bubbles by the enhanced liquid supply toward the three-phase interline by the Marangoni force. If this is true, the heat transfer coefficient should be increased at the same time. Nucleate boiling heat transfer, in general, shows the conflicting trends of heat transfer enhancement by the extension of the thin liquid film, i.e., the microlayer, and heat transfer deterioration due to the extension of a dry patch in the center of the liquid film. The trend is obvious in nucleate boiling in microgravity (Ohta, 2003) or in heated narrow gaps between flat plates, where the bubble base areas on the heating surface are enlarged and both effects are emphasized compared to pool boiling. However, using a flat heating surface, experimental results for alcohol aqueous solutions showed no noticeable increase of CHF, though slight heat transfer enhancement was observed at very low concentrations of alcohols (Sakai et al., 2010). The existence of heat transfer enhancement in addition to the well-known heat transfer deterioration inherent in non-azeotropic miscible mixtures was confirmed. However, the former positive effect is very small compared to the latter negative effect. From these results, it was concluded that the use of miscible liquids in practical boiling systems has no advantage at all.

By the way, if the liquid temperature is adjusted between the maximum thermo-stable temperature of conventional Si semiconductors and the maximum temperature of ambient air available as the final heat sink for the dissipated heat, the operating pressure becomes lower than atmospheric when using, e.g., water as the cooling medium. In such a case, undesired mixing of air into the cooling loop is expected during operation.
The decrease of the partial pressure of water in the vicinity of the condensing interface then locally reduces the saturation temperature of water, which results in a substantial decrease of the temperature difference between the water vapor and the ambient air as the driving force of condensation. The deteriorated condensation heat transfer, in turn, increases the system temperature and reduces the temperature difference between the surface of heat generation and the saturation temperature of the liquid.

To avoid such a situation, the use of immiscible mixtures was proposed by a part of the present authors, in which one liquid is compressed excessively by the vapor pressure of another liquid without increasing the liquid temperature. Through the experiments, many other advantageous characteristics inherent in immiscible mixtures, essentially different from non-azeotropic miscible mixtures, were clarified. In past research, however, experiments on boiling of immiscible mixtures were performed mainly to simulate chemical processes, such as fractional distillation, while almost no attempt at applying them to cooling systems has been made so far.

Among the experiments concerning pool boiling heat transfer of immiscible mixtures, many reports on nucleate boiling of water/oil mixtures related to the petroleum industries can be found. Filipczak et al. (2011) investigated heat transfer to emulsions of oil and water. At different levels of heat flux, the distribution of the immiscible liquids and the vapor was clarified. They found that for high oil concentrations the heat transfer coefficients were far smaller than those for pure water. The result was attributed to the increased contribution of free convection of oil, because a higher surface temperature is needed for free convection of oil than for boiling of water due to the difference in heat transfer coefficients. At the initiation of nucleate boiling, foaming was observed prior to the transition of the immiscible liquid mixture to an emulsion. Roesle and Kulacki (2012) studied the liquid–liquid distribution and corresponding heat transfer in nucleate boiling of FC72/water and pentane/water on a horizontal thin heated wire. The more-volatile component, FC72 or pentane, was dispersed as a discontinuous phase in a continuous phase of water. Experiments were performed at concentrations of 0.2–1.0% and 0.5–2.0% for FC72 and pentane, respectively. Depending on the heat flux level, either nucleate boiling of the dispersed component alone or of both the dispersed and continuous components was observed. Enhanced heat transfer was observed due to nucleate boiling of the dispersed liquid when its volumetric fraction was larger than 1%. Bulanov and Gasanov (2006) investigated boiling heat transfer of four different emulsions: n-pentane/glycerin, diethyl ether/water, R113/water, and water/oil. For all immiscible mixtures, the more-volatile liquid was dispersed in the continuous phase of the less-volatile liquid. The addition of the more-volatile component to the less-volatile one reduced the surface superheat at boiling initiation.

On the other hand, investigations of nucleate boiling of immiscible mixtures that stratify into liquid layers under unheated conditions are very limited. There are very old studies by Bonilla and Eisenbuerg (1948), Bragg and Westwater (1970), and Sump and Westwater (1979). Bragg and Westwater investigated the classification of heat transfer modes for stratified layers of immiscible liquids.
Limited knowledge can be derived from these experimental results because they reported the experimental data without detailed consideration of the phenomena. A detailed study of boiling of immiscible mixtures was performed by Gorenflo et al. (2001), who conducted an experiment on nucleate boiling of water/1-butanol on a horizontal heated tube. Depending on the mixture concentration, temperature, and pressure, the mixture became soluble or partially soluble. They reported the insensitivity of nucleate boiling heat transfer to the solubility based on the experimental data for different combinations of concentration and pressure.

As regards flow boiling of immiscible mixtures, the flow characteristics of immiscible mixtures using oil as one component were widely investigated in connection with the petroleum industries, e.g., Brauner (2003) and Abubakar et al. (2015). The enhancement of heat transfer due to forced convection of the less-volatile liquid by the generated bubbles of the more-volatile component was investigated by Hijikata et al. (1985), where fine droplets of R113 were almost uniformly dispersed in water flowing in a vertically oriented rectangular duct with a cross section of 30 mm × 6 mm. Under the same heat flux conditions, the surface temperature was reduced compared to pure water with increasing flow rate of R113. They explained the observed heat transfer enhancement by the increase of the liquid–vapor mixture flow velocity. Shiina and Sakaguchi (1997) proposed correlations for the heat transfer coefficients in flow boiling of the R113/water immiscible mixture.

In recent years, the development of semiconductor technology has demanded cooling technologies for ever more severe conditions. At the same time, the popularization of electric automobiles, including hybrid vehicles, requires smaller and lighter cooling systems for the reduction of energy loss and, in turn, for global environmental conservation. The present authors proposed the employment of immiscible mixtures as working fluids for flow boiling cooling systems in these applications. From the experimental results in pool boiling (Kobayashi et al., 2012; Ohnishi et al., 2013; Kita et al., 2014), the advantages of immiscible mixtures are obvious and are summarized as follows.

(i) Under the coexistence of vapor and immiscible liquid mixtures at the equilibrium state, each component liquid is compressed above its saturation pressure, i.e., the partial vapor pressure corresponding to the equilibrium temperature, by the addition of the partial vapor pressure of the other component. In other words, the equilibrium temperature is lower than either of the saturation temperatures of the components corresponding to the total pressure. As a consequence, a self-sustaining subcooling is given to both liquids. For pure liquids and miscible liquids, subcooled boiling becomes possible only when the liquid is compressed mechanically by the aid of an accumulator or is cooled by an additional cooling loop. For immiscible liquid mixtures, on the other hand, self-sustaining subcooled boiling is always possible. The value of CHF is increased by the imposed subcooling because the suppression of dry patch extension underneath bubbles is possible through the restriction of bubble growth.
In fact, a CHF of 300 W/cm^2 was realized on a horizontal flat heating surface using the FC72/water immiscible mixture, as shown in Figure 1A, where the heights of the liquid layers of the more- and less-volatile liquids on the horizontal flat surface before heating were varied as an important parameter. A height of 0 mm of the FC72 liquid layer implies that a small amount of FC72 liquid is carried onto the heating surface from the outer side of the heating block by the disturbance of the liquid flow in the vessel (Ohta et al., 2015).

(ii) Because boiling is initiated from the more-volatile liquid, the addition of a small amount of it to the less-volatile liquid decreases the surface temperature at boiling incipience without changing the heat transfer characteristics of the less-volatile component, e.g., its large value of CHF. The large hysteresis of surface temperature in the low heat flux region for boiling of the less-volatile component can thus be avoided. This characteristic is required for cooling under a large fluctuation of thermal load, as encountered in the cooling systems of, e.g., automobile inverters.

(iii) The generation of bubbles from the more-volatile liquid enhances the heat transfer to the less-volatile liquid due to natural convection at moderate heat flux and due to nucleate boiling at high heat flux. A reduction of surface temperature is observed if the enclosed quantity of more-volatile liquid is adjusted appropriately. An example of such a situation is shown in Figure 1A for an FC72 liquid height of 5 mm.

(iv) Under a fixed system pressure, the equilibrium temperature is lower than either of the saturation temperatures of the pure components. In other words, the liquid temperature can easily be adjusted between the thermo-stable maximum temperature of semiconductors and the heat sink temperature, usually that of ambient air, even under a system pressure larger than atmospheric. This prevents the undesired mixing of air as a non-condensable gas, and in turn the degradation of condensation heat transfer, which is required for the reliable long-term operation of the cooling system. If pure water were employed, the system pressure would have to be lower than atmospheric for the cooling of Si semiconductors with an allowable maximum temperature of 80–100°C. For example, mixing FC72 into water keeps the total pressure at 0.1 MPa and the equilibrium temperature, i.e., the bulk liquid temperature, low at 52°C.

FIGURE 1

Figure 1. Typical results already obtained by the present authors: (A) significant increase of critical heat flux by using an immiscible mixture, from 1.4 × 10^6 W/m^2 for pure water to 3.0 × 10^6 W/m^2 by the small addition of FC72, observed in a pool boiling experiment (Ohta et al., 2015), (B) enhancement of heat transfer due to forced convection of water by nucleate boiling of FC72, observed in flow boiling experiments using a round tube (Yamasaki et al., 2015), (C) dominating force regime map by Bond, Weber, and Froude numbers, obtained from flow boiling experiments using mini-tubes with different orientations (Baba et al., 2012).

Immiscible mixtures as cooling fluids have superior heat transfer characteristics when they are used under nucleate boiling conditions in a pool or an enclosed vessel. For practical applications, however, a flow boiling system is more general and useful, because it separates the location of final heat dissipation, i.e., the condenser section, from the location of heat generation, i.e., the cold plate.
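As a side note on point (iv) above, the equilibrium temperature follows from the condition that the partial vapor pressures of the two components sum to the total pressure, p_sat,FC72(T_eq) + p_sat,water(T_eq) = P_total. The short sketch below (ours, not the authors') solves this condition by bisection; the Antoine constants for water and the Clausius–Clapeyron extrapolation for FC72, anchored at its normal boiling point of about 56°C with h_fg ≈ 88 kJ/kg and M ≈ 0.338 kg/mol, are illustrative property assumptions, not values taken from the paper:

```python
import math

P_TOTAL = 101325.0  # system pressure [Pa]

def p_sat_water(t_c):
    """Antoine correlation for water, valid roughly 1-100 degC (assumed constants, mmHg form)."""
    p_mmhg = 10 ** (8.07131 - 1730.63 / (233.426 + t_c))
    return p_mmhg * 133.322  # mmHg -> Pa

def p_sat_fc72(t_c):
    """Clausius-Clapeyron sketch for FC72 anchored at Tb ~ 56 degC at 1 atm.
    h_fg ~ 88 kJ/kg and M ~ 0.338 kg/mol are rough literature values (assumptions)."""
    l_over_r = 88e3 * 0.338 / 8.314  # M * h_fg / R [K]
    return 101325.0 * math.exp(l_over_r * (1.0 / (56.0 + 273.15) - 1.0 / (t_c + 273.15)))

def equilibrium_temperature(p_total=P_TOTAL):
    """Bisection on p_sat_fc72(T) + p_sat_water(T) = p_total."""
    lo, hi = 20.0, 56.0  # T_eq must lie below the lower pure-component boiling point
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if p_sat_fc72(mid) + p_sat_water(mid) > p_total:
            hi = mid  # partial pressures sum too high: equilibrium lies below mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

t_eq = equilibrium_temperature()
print(f"T_eq = {t_eq:.1f} degC")                       # lands near 52 degC
print(f"subcooling of water = {100.0 - t_eq:.1f} K")   # vs. Tsat(water) = 100 degC at 0.1 MPa
print(f"subcooling of FC72  = {56.0 - t_eq:.1f} K")    # vs. Tsat(FC72) ~ 56 degC at 0.1 MPa
```

With these rough property values the solver lands near 52°C, consistent with the equilibrium temperature quoted above for FC72/water at 0.1 MPa, and it makes the self-sustaining subcooling of each component explicit.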
Flow boiling of an immiscible mixture in a round tube was investigated by a part of the present authors (Yamasaki et al., 2015; Ohta et al., 2016). The experiments were performed using a horizontal heated tube with an inner diameter of 7 mm and a heated length of 310 mm. The local heat transfer coefficients along the tube axis were measured using an immiscible mixture of FC72/water under the conditions of pressure 0.1 MPa, inlet temperature 47°C, and total flow rate 0.5 L/min. The liquid–liquid and vapor flow pattern of the components was clarified first under the unheated condition. The flow pattern was not largely changed by the generation of FC72 bubbles because the generated bubbles did not penetrate into the liquid water. For a small ratio of FC72 flow rate to the total, an emulsion-like flow was observed under the unheated condition, where an almost uniform distribution of fine FC72 droplets over the cross section of the tube was exceptionally realized. Measurements of heat transfer coefficients were performed under different flow rate combinations. For all ratios of FC72 flow rate to the total, varied as 0.2–0.8, the values of CHF were increased above that of pure FC72, and the heat transfer coefficients were enhanced above those of water in the entire heat flux range tested. The increment of the heat transfer coefficients became maximum at the FC72 flow rate ratio of 0.2, where the emulsion-like flow was observed under the unheated condition. The results were attributed to the distribution of FC72 liquid along the entire circumferential area of the tube wall.

Because the component liquids with different densities are not uniformly distributed across the cross section of a horizontal tube due to the existence of gravity, the heating of both components is not uniform in the circumferential direction of the tube even under uniform heating from the tube wall. Furthermore, an additional thermal resistance is unavoidable when the dissipated heat is transferred to the tube wall from flat surfaces with heat generation, such as the surfaces of semiconductors or heat spreaders. Therefore, the use of a rectangular channel is desired and is tested here. From the experimental results of pool boiling, the role of the more-volatile liquid is very important for the initiation of boiling at low surface temperature and for the enhancement of heat transfer compared with the pure less-volatile component, as explained in (ii) and (iii), respectively, in the preceding paragraphs. If the gap size of a horizontally oriented rectangular channel is small, both component liquids contact the heating surface located at the bottom of the channel even if the density of the more-volatile liquid is lower than that of the less-volatile liquid. The use of narrow channels has another application. Because the representative length of the channel is reduced to approximately twice the gap size when the channel width is large compared with the gap size, the behavior of the liquid–vapor interface is dominated by surface tension rather than by the body force. As a consequence, the phenomena become independent of gravity, which is an advantageous feature for the use of the cooling system in space after verification of its performance on the ground, or in automobiles, which are accompanied by frequent changes of the gravity vector during operation.

There are many studies concerning boiling in narrow spaces. Fujita et al. (1989) conducted experiments in narrow gaps between flat plates immersed in a pool of water for various orientations, including the vertical one.
They clarified heat transfer enhancement with the reduction of the gap size between the plates, which turned into heat transfer deterioration with further decrease of the gap size due to the extension of large dry patches underneath large flattened bubbles. Willingham and Mudawar (1992) clarified the relation between CHF and gap size in narrow channels. Under a constant inlet velocity of liquid FC72, the CHF values take a maximum at an intermediate gap size between 2 and 10 mm. They explained this trend by the coexistence of the positive effect due to the increase of the liquid–vapor mixture velocity and the negative effect due to the extension of dry patches with decreasing gap size. Lee and Lee (2001) investigated flow boiling heat transfer in narrow rectangular channels. They clarified the effect of gap size under various combinations of mass velocity, vapor quality, and heat flux. Their data showed that, in the region of two-phase forced convection, the heat transfer was enhanced and the effect of mass velocity was decreased with the reduction of gap size.

Kandlikar (2006) summarized the results of flow boiling in microchannels and minichannels with reference to the existing papers. According to his explanation, an expanding bubble which occupies the entire cross section of the channel forms a liquid film between the bubble and the heated wall, and the behavior of this liquid film is similar to that in nucleate boiling and quite different from the annular liquid film flow observed in normal channels. The heat transfer mechanism during flow boiling in microchannels and minichannels was regarded as being similar to nucleate boiling. He indicated that the behavior of the three-phase interline is directly related to the CHF mechanism, and the importance of the "vapor-cutback" phenomenon, in which the liquid film is separated from the heated wall by the momentum of vapor due to evaporation, was proposed. Kandlikar et al. (2013) summarized five different instabilities, or origins of instabilities, possible in flow boiling in microchannels, among which the rapid bubble growth toward the upstream, the upstream compressible volume due to the existence of incondensable gas, and the CHF condition restricting the liquid flow are related to the experiments performed here by the present authors. A part of the present authors already checked the effect of bubble growth toward the upstream in a mini-tube of 0.51 mm in diameter using FC72, in which the results with and without an upstream compressible volume were compared (Ohta et al., 2009). The compressible volume was realized by the installation of a buffer tank with a built-in bellows whose back was exposed to the atmosphere. With the compressible volume, heat transfer deterioration in the region of two-phase forced convection occurred due to the periodic flow fluctuation, while without the compressible volume the heat transfer characteristics were qualitatively similar to those of normal tubes.

The application to the cooling of semiconductor chips is targeted in most recent research on flow boiling in microchannels, where the width of the rectangular channels is comparable with their depth, and the behavior of bubbles is regarded as one-dimensional once they occupy the entire cross section of the channel.
However, in the narrow channel discussed here, the ratio of width to depth is quite large, and bubbles continue to grow in the transverse direction perpendicular to the flow even after they contact the surface located opposite to the heating surface.

Before the experiments, the gap sizes adopted here are examined on the regime map of the dominant forces, i.e., body force, surface tension, and inertia, shown in Figure 1C, where the Bond, Weber, and Froude numbers are selected as parameters. The inertia force varies with mass velocity and vapor quality and is evaluated by the mean density of the liquid–vapor mixture under the assumption of no slip between the phases. The boundaries of the dominant regimes were decided from the results of flow boiling experiments by a part of the present authors, in which the Bond number was reduced by the employment of mini-tubes and their orientation was varied to examine the influence of gravity on the heat transfer (Baba et al., 2012). The keys represent the ranges of experimental conditions for the pure components, FC72 and water, described later.

To clarify the heat transfer performance in boiling of immiscible liquids for application to practical cooling systems, experiments on flow boiling in narrow rectangular channels with different gap sizes are conducted.

## Experimental Apparatus and Procedure

The test loop, shown in Figure 2A, is composed of one pump circulating both component liquids, flow meters, a preheater, the test section, a condenser, and a separation tank. The pressure inside the test loop is adjusted by the flow rate of the cooling water in the condenser with reference to the heat inputs from the preheater and the test section. The magnet gear pump (IWAKI, MDG-M4T6A100) has a maximum discharge rate of 4.6 L/min. Its revolution is controlled by an inverter with reference to the pulse signal from the main flow meter located downstream of the pump. The test loop has three flow meters, i.e., one (OVAL, LFS45) downstream and two (OVAL, LFS40) upstream of the pump. All flow meters have oval gears whose pulse signals are converted to voltages used to control the pump revolution. The error of the flow rate in this system is ±1.6%, evaluated from the specifications of the flow meters. The preheater was made in the laboratory of the authors; sheath heaters (SAKAGUCHI E.H., ELECTRIC, A-16, O.D. 8 mm) are wound around a copper tube. The preheater is wrapped with sheets of glass wool to reduce the heat loss as much as possible for the accurate evaluation of the inlet condition of the heated test section by the heat balance. The condenser is composed of a commercial-grade heat exchanger (HISAKA WORKS, BXN-024-NU-30), where cold water is supplied from a chiller unit (ORION, RKE1500B-V-G2-P, cooling capacity: 5.3 kW at a water temperature of 20°C, minimum flow rate: 21 L/min) at the desired temperature and flow rate. The separation tank is an important component for the experiment using the immiscible liquids. The vertical Pyrex glass cylinder has two aluminum flanges at its top and bottom. To prevent leakage of liquid, O-rings are inserted in them. From the bottom flange, a tube connected to the downstream extends upward into the thick layer of the liquid with the smaller density to suck it in, while the liquid of larger density is introduced to the downstream separately through a hole located in the bottom flange. By this structure, each component liquid is introduced independently to the circulating pump.
The flow rates of both liquids are controlled by manual valves with reference to the flow meters. The pressure of the test loop is monitored by a Bourdon pressure gauge (NAGANO KEIKI, AC10-173-3000, 0–1.1 MPa) for the safety of the experiment. The inlet pressure of the test section and the pressure drop across it are measured by pressure transducers (VALIDYNE, P55D 1-E-2-48-W-4-A, 0–350 kPa and P55D 1-E-2-46-W-4-A, 0–550 kPa, respectively), for which errors of ±1.4 and ±0.9% are expected. The temperatures inside the loop are measured by K-type thermocouples (SANKO, T-35, sheath diameter: 1.6 mm), and their cold junctions are kept at 0°C by a reference temperature controller (COPER ELECTRONICS, Zero-con ZC-114). The error of the temperature measurement is ±0.3 K, taking into account also the accuracy of the data acquisition system described later.

FIGURE 2

Figure 2. Test loop and test section: (A) test loop with one pump circulating both component liquids of the immiscible mixture, (B) test section with a heating surface for the measurement of local heat transfer characteristics, (C) external view of the heated test section with a horizontal heated channel.

The structure of the test section and its photograph are shown in Figures 2B,C, respectively. The test section is composed of a narrow channel between two flat plates, where one is the heating surface and the other is a glass plate for observation. The channel has a rectangular cross section of 30 mm in width, and its gap size is varied as 2, 1, and 0.5 mm. The test section is assembled by using stainless steel flanges and O-rings. The heating surface assembly, which has a heated area with a width of 30 mm and a length in the flow direction of 175 mm located at the bottom of the horizontal channel, consists of seven segmental aluminum blocks with a length of 25 mm each, cut out of one unit body to prevent preferential nucleation at the boundaries of the blocks. If neighboring segment surfaces were soldered, bubble nucleation would occur preferentially at the boundaries of the segments because many defects, such as fine holes and crevices, would be activated as nucleation sites. Each block has thermocouples at different depths of 1.5, 8.5, and 15.5 mm to evaluate local heat transfer coefficients. Local surface temperatures and local heat fluxes are evaluated by using the measured temperature gradients. Because the heat flux is not evaluated from the power input to the cartridge heaters, the accuracy of the evaluated local heat fluxes is influenced also by the accuracy of the thermocouple locations.

The evaluated error of the heat flux is ±2.9%, independent of the heat flux level. The error of the surface temperature, obtained by extrapolation of the temperature gradient to the surface, is estimated as ±0.07 and ±0.75 K at 5 × 10^4 and 5 × 10^5 W/m^2, respectively. The error of the surface temperature, evaluated as the summation of the error due to the measurement, i.e., ±0.3 K, and that due to the uncertainty of the thermocouple locations mentioned above, becomes ±0.37 and ±1.05 K at 5 × 10^4 and 5 × 10^5 W/m^2, respectively. The largest heat transfer coefficients for these heat fluxes are 6.6 × 10^3 and 2.6 × 10^4 W/m^2 K from the experimental results shown later (cf. Figure 12), and the corresponding smallest temperature differences are 6.25 and 16.6 K, respectively.
As a consequence, the maximum errors of the heat transfer coefficients, evaluated from the error of the heat flux, i.e., ±2.9%, and the errors of the temperature difference, ±5.0 and ±5.4%, are ±8.3 and ±8.9% for the representative heat fluxes of 5 × 10^4 and 5 × 10^5 W/m^2, respectively, provided that the local fluid temperatures used for the definition of the local heat transfer coefficients are exactly estimated by the heat balance equation. There is an unavoidable ambiguity in the evaluation of fluid temperatures inherent in immiscible mixtures due to the complicated evaporation process accompanied by the non-thermal-equilibrium state, as described at the beginning of Section "Evaluation of Fluid Temperature Distribution in Flow Direction."

To prevent heat conduction in the flow direction, six slits are located between the segmental aluminum blocks, which are separated except for a thin part at the top, so that the individual blocks are almost thermally isolated from each other. The cartridge heaters are inserted in copper blocks attached to the bottom of each aluminum block. To avoid an excessive temperature increase in the copper blocks and to protect the heaters from damage, both blocks are connected tightly with the aid of screws. This structure allows the heated length to be changed in the case of dryout at the downstream, and experiments at higher heat fluxes become possible by switching off the power input to the segments located at the downstream. The test section is installed in the test loop so as to realize a horizontal channel flow. Before filling the test liquids, the inside of the test loop is evacuated by a vacuum pump. The evacuation is repeated under the unheated condition before the start of heating to ensure degassing. Dissolved air, especially in the FC72 test liquid, seriously scatters the acquired data. The temperatures and pressures are measured by a data logger (KEITHLEY, 3706). Images of the liquid–vapor behaviors are recorded through the glass plate located at the top of the duct by using a high-speed video camera (IDT, MotionXtra N3, max frame rate: 1,000 fps).

The test liquid is an immiscible mixture of FC72/water. Under the coexistence of both liquids and their vapor, the total pressure is the summation of the partial pressures of the components. The situation is represented by the vapor pressure curves in Figure 3. Each component liquid is subcooled under the compression by the vapor pressure of the other component. The liquid of the more-volatile component is compressed slightly by the low vapor pressure of the less-volatile component, and a small subcooling is imposed. On the other hand, the liquid of the less-volatile component is compressed largely by the high vapor pressure of the more-volatile component, and a high subcooling is given. The degrees of subcooling for both component liquids are summarized in Table 1. The pressure at the exit of the test section is adjusted to 0.1 MPa. The experimental conditions are listed in Table 2. The Bond numbers corresponding to the channel gap sizes and the ranges of Weber numbers are shown in Figure 1C for pure FC72 and water. It is clear that the phenomena will lie in the range between gravity-dominated and inertia-dominated, except for boiling of water flowing in the channel with a gap size of 0.5 mm, which is dominated by surface tension or inertia. The representative lengths in the dimensionless groups are evaluated by the equivalent diameters, i.e., the doubled gap sizes.
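A quick way to see where the tested gap sizes sit on the regime map is to evaluate the Bond number, Bo = g(ρ_l − ρ_v)D^2/σ, directly with D = 2H. The sketch below is ours, not the authors'; the saturation properties near 0.1 MPa are rough assumed values, not numbers from the paper:

```python
# Bond number check for the regime map (Figure 1C): Bo = g*(rho_l - rho_v)*D^2 / sigma,
# with the representative length D taken as the equivalent diameter, i.e., twice the gap size.
# Property values are rough saturation values near 0.1 MPa (assumptions, not from the paper).
G = 9.81  # gravitational acceleration [m/s^2]

fluids = {
    "water": {"rho_l": 958.0, "rho_v": 0.6, "sigma": 0.059},    # ~100 degC
    "FC72":  {"rho_l": 1580.0, "rho_v": 13.0, "sigma": 0.008},  # ~56 degC
}

for gap_mm in (2.0, 1.0, 0.5):
    d = 2.0 * gap_mm * 1e-3  # equivalent diameter [m]
    for name, p in fluids.items():
        bo = G * (p["rho_l"] - p["rho_v"]) * d**2 / p["sigma"]
        print(f"H = {gap_mm} mm, {name}: Bo = {bo:.2f}")
```

With these rough values, only water in the smallest gap gives a Bond number well below unity, in line with the statement above that this case is dominated by surface tension or inertia, while FC72, with its small surface tension and high liquid density, stays on the gravity/inertia side for all three gaps.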
FIGURE 3

TABLE 1

Table 1. Equilibrium temperature and values of liquid subcooling for both component liquids in immiscible mixtures of FC72/water at 0.1 MPa.

TABLE 2

## Evaluation of Fluid Temperature Distribution in Flow Direction

Although the devised structure of the heating surface assembly described in the preceding section makes the evaluation of local surface temperatures possible, this alone does not enable the evaluation of local heat transfer coefficients for immiscible mixtures. The fluid temperature distribution along the flow direction must be estimated from the temperatures measured at the inlet and the outlet of the heated test section to obtain a local value at each segment. There may be individual temperature distributions of FC72 and water along the flow direction because of the velocity slip between the components, which promotes a non-equilibrium state between liquid–liquid and liquid–vapor, and hence a non-uniform temperature, especially between the components, at each local cross section. We have no information about the details of the velocity and temperature fields in the channel, and knowledge about the interaction between the components is expected from further studies. However, it is at least true that the subcooled liquid of FC72 becomes saturated much earlier than water. To apply the conventional energy balance method, three different regions are taken into account depending on the outlet state of the mixture. Following the manner used for a round tube (Yamasaki et al., 2015), a uniform temperature across every cross section of the channel is assumed again. The three regions are classified by the different states of the more-volatile component, FC72, while the less-volatile component, water, is assumed to be subcooled liquid in all regions.

(Region A) FC72 (Fluid 1): subcooled liquid / water (Fluid 2): subcooled liquid

$\xi \, \Delta Q = \rho_{l,1} V_{l,1}^{in} c_{pl,1} \Delta T \qquad (1)$

$(1 - \xi) \, \Delta Q = \rho_{l,2} V_{l,2}^{in} c_{pl,2} \Delta T \qquad (2)$

where ΔQ is the heat supplied between neighboring local positions (W), ΔT is the temperature increment between neighboring local positions (K), $V_l$ is the liquid volumetric flow rate (m^3/s), $\rho_l$ is the liquid density (kg/m^3), and $c_{pl}$ is the liquid isobaric specific heat (J/kg K). The parameter ξ represents the ratio of the heat transferred to FC72 to the total (-). The suffixes 1 and 2 denote the more-volatile component (FC72) and the less-volatile component (water), respectively, and the superscript "in" denotes the inlet of the test section. Once the flow rates of both liquids at the inlet of the test section and the electric power supplied to the heated section are given, the value of ξ is uniquely determined in Region A under the assumption of a uniform temperature distribution across the cross section of the heated channel.

(Region B) FC72 (1): superheated liquid and superheated vapor / water (2): subcooled liquid

$\xi \, \Delta Q = \rho_{l,1} V_{l,1}^{in} \left[ x_1 \left( h_{fg,1} + c_{pv,1} \Delta T \right) + (1 - x_1) c_{pl,1} \Delta T \right] \qquad (3)$

$(1 - \xi) \, \Delta Q = \rho_{l,2} V_{l,2}^{in} c_{pl,2} \Delta T \qquad (4)$

where $h_{fg}$ is the latent heat of vaporization (J/kg) and $c_{pv}$ is the isobaric specific heat of vapor (J/kg K). The vapor quality $x_1$ (-) is defined by

$x_1 = \dfrac{\rho_{v,1} V_{v,1}}{\rho_{l,1} V_{l,1}^{in}} = \dfrac{\rho_{v,1} V_{v,1}}{\rho_{v,1} V_{v,1} + \rho_{l,1} V_{l,1}} \qquad (5)$

If water were also evaporating (not the case here), the vapor quality $x$ (-) for both components would be defined by

$x = \dfrac{\rho_{v,1} V_{v,1} + \rho_{v,2} V_{v,2}}{\rho_{l,1} V_{l,1}^{in} + \rho_{l,2} V_{l,2}^{in}} = \dfrac{\rho_{v,1} V_{v,1} + \rho_{v,2} V_{v,2}}{\left( \rho_{v,1} V_{v,1} + \rho_{l,1} V_{l,1} \right) + \left( \rho_{v,2} V_{v,2} + \rho_{l,2} V_{l,2} \right)} \qquad (6)$

The boundary between Region A and Region B is assumed to be given by the equilibrium temperature, slightly lower than the saturation temperature of FC72, because the initiation of nucleate boiling of FC72 is possible at low subcooling, as confirmed in Table 1.
In Region B, the state of FC72 is regarded as a mixture of superheated liquid and vapor. The temperature of the subcooled water increases monotonically along the flow direction even after it exceeds the saturation temperature of FC72, and the temperature of the FC72 liquid and vapor mixture is assumed to increase by the same increment as that of water under the assumption of a uniform temperature in the cross section of the channel. The degree of liquid superheat, estimated from the outlet fluid temperature shown later, takes possible values in the range of the quasi-stable state. In Region B, the parameter ξ is evaluated so that the calculated exit temperature coincides with the measured one.

(Region C) FC72 (1): superheated vapor / water (2): subcooled liquid

$\xi \, \Delta Q = \rho_{l,1} V_{l,1}^{in} c_{pv,1} \Delta T \qquad (7)$

$(1 - \xi) \, \Delta Q = \rho_{l,2} V_{l,2}^{in} c_{pl,2} \Delta T \qquad (8)$

In this region, the liquid of FC72 is completely evaporated. Similar to Region A, ξ is uniquely determined by using the properties of vapor and liquid for FC72 and water, respectively.

The calculated exit temperatures are compared with the measured values in Figure 4, where the measured exit temperatures are shown by symbols and the calculated temperatures are represented by lines. In the figure, if the exit condition of FC72 is subcooled (Region A), direct comparison between the experimental data and the values calculated by Eqs 1 and 2 is possible. When the exit condition is located in the quality region of FC72 (Region B), which is easily known from the heat balance Eqs 1 and 2 for Region A by checking whether the exit temperature is larger or smaller than the temperature of boiling initiation, the values of ξ are calculated by using Eqs 3 and 4. After the determination of ξ from the scattered values in Figure 5, the calculation is performed again to obtain the calculated exit temperatures for different combinations of flow rates, as shown in Figure 4. A further increase in heat flux results in the complete evaporation of FC72, and FC72 is superheated at the exit (Region C). The measured exit temperature increases with increasing heat flux. The calculated exit temperatures are assumed to be the same as the measured ones to eliminate the discrepancy between the values, which results from the initiation of subcooled boiling of water. This procedure is applied to all conditions in which the increment of the exit temperature becomes larger with increasing heat flux at high heat fluxes in Figure 4. For immiscible mixtures, except for the data with assumed agreement of the exit temperature, the maximum discrepancy between the data and the calculated exit temperature is within about ±1 K (referred to in a later section as Error #1) under the simplified assumption of a uniform temperature across the cross section of the channel. This discrepancy is acceptable for the calculation of the temperature distribution, which is needed for the definition of local heat transfer coefficients.

FIGURE 4

Figure 4. Examples of measured and calculated temperatures at the outlet of the test section for various combinations of component liquid flow rates at the inlet of the heated test section: (A) entire heat flux range, (B) magnified range at low heat flux.

FIGURE 5

Figure 5. Variation of the parameter ξ representing the ratio of the heat transferred to the more-volatile component FC72 to the total in Region B.
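To make the three-region procedure concrete, the following sketch marches the energy balance of Eqs 1-8 segment by segment along the heated length. It is our illustration, not the authors' code: the property values are rounded handbook numbers (assumptions), the sensible-heat terms of Eq 3 are neglected because the temperature increment in Region B is small, and ξ in Region B is treated as a given constant (the value of 0.99 adopted below):

```python
# Minimal sketch of the Region A -> B -> C marching energy balance (Eqs 1-8).
# All property values are rounded handbook numbers (assumptions, not from the paper).

RHO_L1, CP_L1, CP_V1, H_FG1 = 1620.0, 1100.0, 900.0, 88e3  # FC72: rho_l, cp_l, cp_v, h_fg
RHO_L2, CP_L2 = 988.0, 4180.0                              # water: rho_l and cp_l (~50 degC)
T_EQUIL = 52.0  # equilibrium (FC72 boiling-initiation) temperature at 0.1 MPa [degC]

def march(q_total_w, n_seg, v1_lmin, v2_lmin, t_in_c, xi_b=0.99):
    """Return (temperature, FC72 quality) after each of n_seg equally heated segments."""
    m1 = RHO_L1 * v1_lmin / 60e3  # inlet FC72 mass flow [kg/s] (L/min -> m^3/s -> kg/s)
    m2 = RHO_L2 * v2_lmin / 60e3  # inlet water mass flow [kg/s]
    dq = q_total_w / n_seg        # heat added per segment [W]
    t, x1, states = t_in_c, 0.0, []
    for _ in range(n_seg):
        if t < T_EQUIL:               # Region A: both liquids subcooled (Eqs 1, 2)
            t = min(t + dq / (m1 * CP_L1 + m2 * CP_L2), T_EQUIL)
        elif x1 < 1.0:                # Region B: FC72 evaporating (Eqs 3, 4, 5)
            x1 = min(x1 + xi_b * dq / (m1 * H_FG1), 1.0)  # sensible terms of Eq 3 neglected
            t += (1.0 - xi_b) * dq / (m2 * CP_L2)         # only 1% of the heat warms the water
        else:                         # Region C: FC72 fully vaporized (Eqs 7, 8)
            t += dq / (m1 * CP_V1 + m2 * CP_L2)
        states.append((t, x1))
    return states

# Example: 1 kW over 7 segments, VFC72 = 0.1 L/min, Vwater = 0.4 L/min, inlet at 47 degC.
for i, (t, x1) in enumerate(march(1000.0, 7, 0.1, 0.4, 47.0), start=1):
    print(f"segment {i}: T = {t:.1f} degC, x1 = {x1:.2f}")
```

The crossover segments are handled crudely here (temperature and quality are simply capped), which is acceptable for illustrating the region logic but not for actual data reduction.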
On the other hand, the reasons for the discrepancy between the calculated and measured temperatures for the pure components are deduced as follows. For both pure components, the calculated value of the exit temperature increases monotonically with increasing heat flux under the subcooled liquid condition at the exit, while it becomes the saturation temperature at higher heat flux. For FC72, the measured exit temperatures are far smaller than the calculated values. The occurrence of subcooled boiling consumes the supplied heat and reduces the sensible heat transferred to the liquid. The same is true for water. The discrepancy becomes smaller with the reduction of subcooling at higher heat flux. In addition, the measured liquid temperature in the bulk flow can become lower because of the sensible heat accumulated in the superheated layer when the mixing of the liquid is insufficient, especially at low heat flux. For these reasons, the measured temperatures become smaller than the prediction by the heat balance Eqs 1 and 2. To check the reproducibility of the experimental data for pure water, the experiment was repeated; however, the trend of the data was the same. Despite this discrepancy, the calculated temperature distribution along the flow direction is used for both pure components, ignoring the existence of the non-equilibrium state.

The value of ξ in Region B is of the most interest in the present analysis. To evaluate it, the data for which the exit condition is regarded as that of Region B are referred to. In Figure 5, the data points at different heat fluxes are plotted using the same key for each combination of flow rates. The values of ξ depend on the combination of flow rates to some extent, while they are almost independent of heat flux. The reason why ξ takes values near unity is clear from Figure 6. The distribution of liquid FC72 relative to the location of the surface for heat transfer is quite different between a round tube (Yamasaki et al., 2015) and a rectangular channel. Because of the higher density of liquid FC72 compared with water, most of the heat is transferred directly to FC72 from the heating surface located at the bottom of the channel, at least for a large gap size. Because the values of ξ are determined from the small temperature increment along the flow direction, as shown in Eqs 3 and 4, the sensitivity of ξ to the measured outlet temperature is high. Under the uncertainty caused by the assumption of a uniform temperature in the channel cross section, a value of ξ = 0.99 is applied here, independent of flow rate combination and heat flux, for the evaluation of local fluid temperatures at the different measurement points on the segmented heating surfaces. In the case of a higher ratio of FC72 flow rate to the total, the values of ξ calculated from the measured exit temperature exceed unity. This contradictory trend is caused by the measured exit temperature being lower than the exact value because of the flow fluctuation often observed under such flow rate conditions. For the data at H = 0.5 mm and Vtotal = 0.13 L/min, the most extreme case in Figure 5, the difference between ξ = 1.11 and ξ = 0.99 corresponds to an error in the measured temperature of −2.3 K (Error #2).

FIGURE 6

Figure 6. Difference in the distribution of the more-volatile liquid with higher density on the heating surface between a round tube and a rectangular channel with heating of the bottom surface.

An example of the evaluated fluid temperature distribution along the flow direction is shown in Figure 7A, which is used for the evaluation of local heat transfer coefficients at each segment.
The value of ξ = 0.99 is applied to all combinations of flow rates. After FC72 is completely evaporated (Region C) and subcooled boiling of water is expected, contrary to the assumption of Eqs 7 and 8, values linearly interpolated between the calculated temperature at the end of Region B and the measured fluid temperature at the exit of the test section are used to determine the fluid temperature distribution. It is clear that the increment of temperature in the quality region of mixed FC72 (Region B) is very small because only 1% of the heat is transferred to water as sensible heat. As the ratio of FC72 flow rate to the total is increased, the length of Region B along the flow direction increases, as expected. The difference in the temperature levels in the quality region of mixed FC72 between different combinations of flow rates is caused by small differences in the system pressure during the experiments. To examine the effect of heat flux, a trial to increase the heat flux from q = 2 × 10^5 to q = 4 × 10^5 W/m^2 was performed, keeping the other conditions of Figure 7A unchanged. The result is shown in Figure 7B, where the exit temperatures for VFC72 = 0.5 L/min, Vwater = 0 L/min and VFC72 = 0.4 L/min, Vwater = 0.1 L/min at q = 2 × 10^5 W/m^2 were applied again because no exit temperatures were measured at q = 4 × 10^5 W/m^2, which is larger than the CHF values for these fluids. It is clear that the length of Region B, in which the evaporation of mixed FC72 occurs, is reduced by the increase of heat flux.

FIGURE 7

Figure 7. Examples of the evaluated temperature distribution along the flow direction for various combinations of component liquid flow rates at the inlet of the heated test section: (A) q = 2 × 10^5 W/m^2, (B) q = 4 × 10^5 W/m^2.

The error of the pressure measurement at the inlet of the heated test section was already evaluated as ±1.4% in Section "Experimental Apparatus and Procedure," which corresponds to ±0.0014 MPa at the system pressure of 0.1 MPa. Because the sensitivity of the equilibrium temperature to the pressure is 1.98 × 10^2 K/MPa, independent of concentration for the immiscible mixture, the error of the measured pressure corresponds to an error of the calculated temperature of ±0.28 K (Error #3). For the pure liquids FC72 and water, the sensitivities of the saturation temperature to the pressure are 2.93 × 10^2 and 2.80 × 10^2 K/MPa, which result in errors of ±0.41 and ±0.39 K, respectively.

## Experimental Results and Discussion

Typical results for the gap size of H = 1 mm are represented in Figures 8–11, where (a) the surface temperature Tw versus heat flux q, (b) the heat transfer coefficient α versus heat flux q, and (c) the liquid–vapor behaviors at selected heat fluxes are shown. For the purpose of practical application, the use of the surface temperature instead of the temperature difference between the heating surface and the fluid is attempted, following the method of data reduction for pool boiling (cf. Figure 1). In pool boiling, because the equilibrium temperature is lower than either of the saturation temperatures of the components at a given system pressure, the performance of nucleate boiling heat transfer is underestimated if it is evaluated by using the temperature difference between the heating surface and the subcooled liquid. In flow boiling, however, heat transfer coefficients are defined by using the fluid temperature regardless of its state. Depending on the combination of flow rates and heat flux, a deviation of the inlet pressure from the adjusted outlet value of 0.1 MPa is unavoidable.
Such influence is reflected implicitly in the experimental data. In the series of data in Figures 8–11, the total volumetric flow rate is fixed at Vtotal = 0.5 L/min.

FIGURE 8

Figure 8. Distribution of local heat transfer coefficients along the flow direction versus heat flux, and corresponding flow behaviors (H = 1 mm, Vtotal = 0.5 L/min, pure FC72): (A) Tw versus q, (B) α versus q, (C) liquid–vapor behaviors at selected heat fluxes.

FIGURE 9

Figure 9. Distribution of local heat transfer coefficients along the flow direction versus heat flux, and corresponding flow behaviors (H = 1 mm, Vtotal = 0.5 L/min, VFC72 = 0.4 L/min, Vwater = 0.1 L/min): (A) Tw versus q, (B) α versus q, (C) liquid–vapor behaviors at selected heat fluxes.

FIGURE 10

Figure 10. Distribution of local heat transfer coefficients along the flow direction versus heat flux, and corresponding flow behaviors (H = 1 mm, Vtotal = 0.5 L/min, VFC72 = 0.1 L/min, Vwater = 0.4 L/min): (A) Tw versus q, (B) α versus q, (C) liquid–vapor behaviors at selected heat fluxes.

FIGURE 11

Figure 11. Distribution of local heat transfer coefficients along the flow direction versus heat flux, and corresponding flow behaviors (H = 1 mm, Vtotal = 0.5 L/min, pure water): (A) Tw versus q, (B) α versus q, (C) liquid–vapor behaviors at selected heat fluxes.

Figure 8 shows the results for pure FC72 (VFC72 = 0.5 L/min, Vwater = 0 L/min). The surface temperatures do not change largely with increasing heat flux because nucleate boiling dominates the heat transfer. This is confirmed again by the trend of the heat transfer coefficients depending strongly on the heat flux, as observed in pool boiling experiments (Figure 8B). The increasing level of the heat transfer coefficients defined by the fluid temperature results from the decrease in liquid subcooling along the flow direction. The CHF condition is observed at around 1.3 × 10^5 W/m^2 at the downstream locations. As shown in Figure 8C, distinct bubble generation due to nucleate boiling starts from both side edges of the channel at the upstream, because the flow velocity of the liquid is small near the side edges, resulting in the thicker thermal boundary layer required for the initiation of nucleate boiling. From the movie images, bubble nucleation is also confirmed along the center of the heating surface even in the upstream region. However, because of the subcooling of the FC72 liquid at the upstream of the test section, the small bubbles do not grow. The saturated condition extends from both side edges toward the center. These behaviors are usual for flow boiling in narrow channels.

In Figure 9, the results for VFC72 = 0.4 L/min and Vwater = 0.1 L/min are shown. The surface temperatures start to increase significantly at a heat flux of around 1.3 × 10^5 W/m^2 at all measurement locations on the heating surface. The heat transfer is dominated mainly by subcooled nucleate boiling of FC72 at heat fluxes below 1.3 × 10^5 W/m^2. At this boundary heat flux, a symptom of heat transfer deterioration due to the extension of dry patches under the flattened bubbles of FC72 is deduced from the change in the gradients of the characteristic curves in both Figures 9A,B. The phenomenon is already known as "intermediate heat flux burnout" by the present authors from the pool boiling experiments, where the accumulation of bubbles composed of the more-volatile component becomes a trigger of a small surface temperature excursion (Kobayashi et al., 2012; Ohnishi et al., 2013; Kita et al., 2014; Ohta et al., 2015).
Even in such a case, the serious heat transfer deterioration accompanied by a catastrophic surface temperature excursion can be avoided by the lateral penetration of the less-volatile liquid as an alternative cooling medium in place of the more-volatile liquid. The coexistence of the dried area underneath the FC72 bubbles and the area rewetted by the water flow makes stable heat transfer possible at higher heat fluxes, but the behavior results in a larger increment of surface temperature with increasing heat flux. As a result, the heat transfer coefficient temporarily tends to decrease as shown in Figure 9B. However, by the addition of water at a small flow rate to the flow of FC72, the CHF condition at 1.3 × 10⁵ W/m² observed for pure FC72 in Figure 8 can be avoided. As shown in Figure 9B, the heat transfer deterioration is increased because the area of dry patches is extended with increasing heat flux up to 2 × 10⁵ W/m². The heat transfer coefficient, dominated mainly by forced convection of water, starts to increase again with a further increase of heat flux, because the flow velocity of water is increased by the volume of generated FC72 bubbles. In Figure 9C, both the liquid flow of water and the generation of FC72 flattened bubbles are observed at 1 × 10⁵ W/m², and the heat transfer is dominated by nucleate boiling of FC72, as is confirmed also from the trend of the heat transfer data in Figures 9A,B. At a heat flux of 3 × 10⁵ W/m², a distinct flow of water is observed at the edge of the rectangular duct and along the opposite unheated plate, a phenomenon corresponding to annular flow in a tube at high vapor flow rate of evaporated FC72. In such a case, only a part of the water seems to flow along the heating surface. During the experiment, the liquid–vapor behavior is never steady but periodic at high heat fluxes because of the bubble expansion also toward the upstream direction in the narrow channel, as is usually observed in mini- and micro-channels. If the flow rate of water is further increased by decreasing the flow rate of FC72 under the same total volumetric flow rate, the temporal heat transfer deterioration due to the increased area of dry patches underneath the flattened bubbles of FC72 might be suppressed at the boundary heat flux at which the dominant heat transfer mechanism changes from nucleate boiling of FC72 to forced convection of water.

With increasing water flow rate, i.e., VFC72 = 0.4, 0.3, 0.2, 0.1 L/min and Vwater = 0.1, 0.2, 0.3, 0.4 L/min, respectively, the increase of surface temperature is suppressed and the heat transfer coefficients take higher values. In Figure 10, the results for VFC72 = 0.1 L/min and Vwater = 0.4 L/min are shown. It is noteworthy that the heat transfer coefficients at the downstream location are clearly higher than those for the upstream under high heat flux conditions. This is because only the positive effect of FC72 bubbles on the heat transfer is emphasized by the reduction of the FC72 flow rate. The increased agitation and the squeezing of the water film by the FC72 flattened bubbles enhance the transient heat conduction from the heating surface to the film, and this positive effect overcomes the negative effect of dry patch extension underneath the FC72 flattened bubbles at high heat flux. From Figure 10C, large flattened bubbles of FC72 are observed at all locations of the heating surface.
At the downstream, nucleate boiling of water occurs at 3 × 10⁵ W/m², which also enhances the heat transfer because the flattened bubbles of FC72 are not excessively large under the smaller flow rate of FC72.

For pure water (VFC72 = 0 L/min, Vwater = 0.5 L/min), as shown in Figures 11A,B, the surface temperatures increase with increasing heat flux before the initiation of boiling at high heat flux. The heat transfer is dominated by forced convection. The heat fluxes for the initiation of boiling are around 2 × 10⁵ W/m² at the midstream and downstream locations. The heat flux needed for boiling initiation at the upstream is higher because of the large subcooling. At low heat flux, the heat transfer is dominated by forced convection, and the heat transfer coefficient takes higher values at the upstream, as shown in Figure 11B, because of the thinner thermal boundary layer near the entrance. The heat transfer coefficients in nucleate boiling are higher in the downstream because of the positive effect of the flattened bubbles. However, a symptom of the CHF condition is observed in the heat transfer coefficient for Location 7 at the highest heat flux. In Figure 11C, nucleate boiling of water is clearly confirmed at the downstream location at 4 × 10⁵ W/m², while the heat transfer is dominated by forced convection at the upstream.

The effect of channel gap size is summarized in Figures 12–14 for H = 2, 1, and 0.5 mm, respectively. In these figures, the data for Location 6, cf. Figure 2B, are represented because the characteristics of flow boiling in narrow channels are emphasized in the downstream region. The inlet velocity is unified among the data shown here, and the total flow rates are Vtotal = 0.5, 0.25, and 0.13 L/min for H = 2, 1, and 0.5 mm, respectively. The errors of heat transfer coefficient, estimated in the manner described in Section “Experimental Apparatus and Procedure,” are shown in the figures by error bars of solid vertical lines, and maximum error values at selected heat fluxes are also given in the figure captions. The accuracy of the data depends on the errors of heat flux, surface temperature, and fluid temperature. The former two are reflected in this error estimation, while the error of fluid temperature could not be reflected because of the unknown discrepancy between the real temperature distribution in the flow direction and the one calculated under the assumption of uniform temperature across each cross section of the channel. However, as known from the discussion in Section “Evaluation of Fluid Temperature Distribution in Flow Direction,” the error of fluid temperature at the exit of the heated test section is around ±1 K (Error #1), and the error of temperature during the evaporation of FC72 caused by the error of the pressure-dependent equilibrium temperature is estimated as ±0.28 K (Error #3) for mixtures. Furthermore, the error of fluid temperature caused by the error of ξ in the most extreme case is around −2.3 K (Error #2). To reflect these errors in the evaluation of fluid temperature, the error bars are extended as a trial in two steps, taking account of additional errors of ±1 and ±3 K, respectively, by vertical dotted lines. The evaluation of an accurate fluid temperature distribution is required in further studies on flow boiling of immiscible mixtures by using an experimental setup optimized for the direct measurement of fluid temperature distributions in both the flow direction and the channel cross section.
Figure 12. Effect of gap size on the heat transfer coefficient in the downstream under constant liquid flow velocity at the inlet of the heated test section (H = 2 mm, Vtotal = 0.5 L/min): (A) q–Tw, (B) α–q. (Maximum errors caused by the uncertainty of heat flux and surface temperature measurement: heat flux ±2.9% independent of heat flux level, surface temperature ±5.0 and ±5.4%, heat transfer coefficient assuming accurate fluid temperature ±8.3 and ±8.9% at heat fluxes of 5 × 10⁴ and 5 × 10⁵ W/m², respectively. The error of heat transfer coefficient is further increased by taking account of the error in the estimation of fluid temperature, and the error bars by solid lines, reflecting only uncertainties in heat flux and surface temperature, are extended by dotted lines in two steps for assumed additional errors of fluid temperature of ±1 and ±3 K, respectively, following the discussion in Sections “Evaluation of Fluid Temperature Distribution in Flow Direction” and “Experimental Results and Discussion.”)

In Figure 12, for H = 2 mm, the CHF value for pure FC72 is around 1.4 × 10⁵ W/m². The CHF value can be increased by the mere addition of water flow, and it is increased up to 2.2 × 10⁵ W/m² for VFC72 = 0.4 L/min, Vwater = 0.1 L/min. The values for other flow rate conditions could not be measured because of flow fluctuation at high heat flux near CHF conditions. However, at least a value larger than 4.5 × 10⁵ W/m² is confirmed for water flow rates larger than Vwater = 0.2 L/min, or 40% of the total. Even at a small flow rate of FC72, e.g., VFC72 = 0.05 L/min, Vwater = 0.45 L/min, the generated bubbles of FC72 coalesce at the midstream and produce a vapor core flow that squeezes the liquid water, through the interfacial shear stress, to flow along the inner channel surfaces. The distribution of both phases is similar to annular flow in a tube. A large oscillation periodically pushes the flow toward the upstream, and there are instants when the flow is completely stopped. At high heat flux, a large extension of dry patches is observed in the midstream and downstream regions, leaving many liquid droplets on the heating surface. The evaporation of the droplets prevents the transition to the CHF condition before the quenching of dry patches by the restarting liquid flow. A smaller volumetric flow rate of FC72 mixed in water is expected to increase CHF above the pure water value also in flow boiling because of the self-sustained subcooling inherent in boiling of immiscible mixtures. At moderate heat flux, the surface temperatures for immiscible mixtures are clearly lower than for pure water but higher than for FC72, which indicates that the heat transfer is dominated by the simultaneous nucleate boiling of FC72 and forced convection of water. On the other hand, the surface temperature increases beyond that of water at high heat flux, and the trend is emphasized as the flow rate of FC72 becomes larger. In the upstream, surface temperatures for these immiscible mixtures larger than for pure water at high heat flux are not observed because the size of the FC72 bubbles is still smaller and the instantaneous extension of dry patches by the existing flow oscillation is small. For VFC72 = 0.4 L/min, Vwater = 0.1 L/min, the heat transfer deterioration due to the intermediate heat flux burnout, cf. Figure 1A, is observed at around 1.3 × 10⁵ W/m². However, it tends to disappear with increasing flow rate of water.
The reduction of surface temperature compared to water observed in immiscible mixtures at moderate heat fluxes is caused also by the increased velocity and agitation of the water flow due to the generation of FC72 flattened bubbles, in addition to the evaporation of FC72. In Figure 12B, at heat fluxes larger than 2 × 10⁵ W/m² for the immiscible mixtures of VFC72 = 0.2 L/min, Vwater = 0.3 L/min and VFC72 = 0.1 L/min, Vwater = 0.4 L/min, heat transfer deterioration compared to pure water is observed despite the smaller surface temperatures at heat fluxes less than 3 × 10⁵ W/m² shown in Figure 12A. This contradictory trend is caused by the lower fluid temperature, near the equilibrium temperature of the immiscible mixtures, due to the evaporation of FC72 as shown in Figure 7. As known from Figures 12A,B, no discontinuity in the trend of the data for immiscible mixtures along the change of compositions is observed in the range of heat flux larger than 1.5 × 10⁵ W/m².

In Figure 13 for H = 1 mm, the shortage of data points for pure FC72 is due to dryout at very low heat flux. The CHF values for FC72 are around 1.3 × 10⁵, 1.2 × 10⁵, and 6.6 × 10⁴ W/m² in the upstream (Location 2), midstream (Location 4), and downstream (Location 6), respectively, where the value for the downstream is confirmed in the figure. The devised segmented structure of the heating block shown in Figure 2B makes it possible to change the heated length during the experiment in the case of dryout, which propagates from the downstream as the heat flux increases. At high heat flux, the heat transfer deterioration, regarded as a symptom of dryout, is obvious for pure water as confirmed in Figure 13B. With the small addition of water to the flow of FC72 at VFC72 = 0.2 L/min, Vwater = 0.05 L/min and VFC72 = 0.15 L/min, Vwater = 0.1 L/min, a marked increase in CHF from the value for pure FC72 is observed; however, the surface temperature also increases seriously with the increase of heat flux. The large increase in the surface temperature is caused by the reduction of the gap size to 1 mm. At the midstream, the quick growth of FC72 flattened bubbles promotes the extension of dry patches on the heating surface and, at the same time, pushes the water around the FC72 bubbles to rewet the dried areas. The penetration of water deactivates the nucleation sites for boiling of FC72 at moderate heat flux, and also tends to suppress boiling of water even at high heat flux. On the other hand, the penetration of FC72 results in the instantaneous growth of FC72 bubbles, which pushes the surrounding FC72 liquid and bubbles and the liquid water. Due to the small gap size of 1 mm, the quenching of dry patches by water occurs frequently, in addition to the quenching by liquid FC72 through the increased secondary flow, and the quenching frequency is increased. As a consequence, the flow oscillation becomes smaller than in the case of the 2 mm gap size. At the downstream, the flow of liquid and vapor becomes rivulets oscillating in the transverse direction, which seems to accelerate the quenching of dry patches. When the flow rate of water is increased, i.e., VFC72 = 0.1 L/min, Vwater = 0.15 L/min and VFC72 = 0.05 L/min, Vwater = 0.2 L/min, the surface temperatures are kept lower than for pure water at moderate heat flux because the extension of dry patches due to the generation of FC72 bubbles is decreased. However, the heat transfer coefficients are superficially deteriorated from those of pure water at moderate heat flux because of the lower fluid temperature, as for the gap size of 2 mm.
When the heat transfer coefficients of H = 1 mm for VFC72 = 0.1 L/min, Vwater = 0.15 L/min and VFC72 = 0.05 L/min, Vwater = 0.2 L/min are compared with those for H = 2 mm at the same flow rate ratios and the same inlet liquid velocity, i.e., VFC72 = 0.2 L/min, Vwater = 0.3 L/min and VFC72 = 0.1 L/min, Vwater = 0.4 L/min in Figure 12, the surface temperatures are lower and the heat transfer coefficients are higher for H = 1 mm than for H = 2 mm at high heat flux. The reduction of gap size results in no serious heat transfer deterioration provided that the flow rate ratio of the more-volatile liquid to the total is kept small enough to suppress the excessive generation of its flattened bubbles. The decreased flow oscillation, owing to the increased frequency of quenching of dry patches and the enhanced penetration of liquid water, prevents a large extension of dry patches underneath the FC72 bubbles. As a result, the positive effect of flattened bubbles in promoting the heat transfer becomes clearer at this gap size.

Figure 13. Effect of gap size on the heat transfer coefficient in the downstream under constant liquid flow velocity at the inlet of the heated test section (H = 1 mm, Vtotal = 0.25 L/min): (A) q–Tw, (B) α–q. (Maximum errors caused by the uncertainty of heat flux and surface temperature measurement: heat flux ±2.9% independent of heat flux level, surface temperature ±5.0 and ±5.6%, heat transfer coefficient assuming accurate fluid temperature ±8.3 and ±9.1% at heat fluxes of 5 × 10⁴ and 5 × 10⁵ W/m², respectively. The error of heat transfer coefficient is further increased by taking account of the error in the estimation of fluid temperature, and the error bars by solid lines, reflecting only uncertainties in heat flux and surface temperature, are extended by dotted lines in two steps for assumed additional errors of fluid temperature of ±1 and ±3 K, respectively, following the discussion in Sections “Evaluation of Fluid Temperature Distribution in Flow Direction” and “Experimental Results and Discussion.”)

The heat transfer characteristics are quite different for the gap size H = 0.5 mm, as shown in Figure 14. The CHF values for pure FC72 are around 1.1 × 10⁵, 8.0 × 10⁴, and 6.2 × 10⁴ W/m² in the upstream (Location 2), midstream (Location 4), and downstream (Location 6), respectively, where the value for the downstream is confirmed in the figure. Compared to H = 1 mm, the reduction of CHF in the midstream is clear, while the CHF values are not seriously decreased in the downstream. This is because, in the downstream, the dry patches are quickly quenched by penetrating liquid as a result of the rapid exchange of bubbles and liquid on the heating surface. For pure water, the deterioration of the heat transfer coefficient is observed at high heat flux. For all immiscible mixtures tested here, the surface temperature becomes higher and the heat transfer coefficients take lower values than for water as the flow rate ratio of FC72 to the total increases. The deteriorated heat transfer is clear in Figure 14B at high heat fluxes for all immiscible mixtures compared to pure water, where dry patches are extended quickly under the flattened bubbles of FC72 and are rewetted at high frequency by the penetration of liquid, mostly water, from various directions. As a consequence, a large temperature fluctuation due to the repeated quenching process is observed without causing the temperature excursion of a CHF condition.
An extremely small flow rate of FC72, beyond the present experimental range, still has a possibility of enhancing the heat transfer relative to pure water if the heat flux is not high.

Figure 14. Effect of gap size on the heat transfer coefficient in the downstream under constant liquid flow velocity at the inlet of the heated test section (H = 0.5 mm, Vtotal = 0.13 L/min): (A) q–Tw, (B) α–q. (Maximum errors caused by the uncertainty of heat flux and surface temperature measurement: heat flux ±2.9% independent of heat flux level, surface temperature ±3.7 and ±3.8%, heat transfer coefficient assuming accurate fluid temperature ±6.9 and ±7.0% at heat fluxes of 5 × 10⁴ and 5 × 10⁵ W/m², respectively. The error of heat transfer coefficient is further increased by taking account of the error in the estimation of fluid temperature, and the error bars by solid lines, reflecting only uncertainties in heat flux and surface temperature, are extended by dotted lines in two steps for assumed additional errors of fluid temperature of ±1 and ±3 K, respectively, following the discussion in Sections “Evaluation of Fluid Temperature Distribution in Flow Direction” and “Experimental Results and Discussion.”)

For higher cooling performance, the flow rate of the more-volatile liquid should be decreased with the reduction of gap size. As confirmed in Figures 12–14, if the flow rate ratio of FC72 is small, i.e., 20 or 40% of the total volumetric flow rate, the heat transfer coefficients for immiscible mixtures also take a maximum with the reduction of gap size from 2 to 0.5 mm in the high heat flux range under the same inlet liquid velocity condition. It is clear that the selection of an optimum gap size is also needed to obtain larger heat transfer coefficients.

Figure 15 summarizes the effects of gap size, where the data for the mixture of 20 vol% FC72 at the same inlet velocity are shown in addition to the data for pure FC72 and pure water at Location 4 in the midstream. The total volumetric flow rates are Vtotal = VFC72 + Vwater = 0.5, 0.25, and 0.13 L/min for gap sizes of H = 2, 1, and 0.5 mm, respectively. In the data for pure FC72, the trend of heat transfer deterioration is clear for the gap size of H = 0.5 mm, and dryout occurs at around 8 × 10⁴ W/m², which is smaller than the values of 1.5 × 10⁵ and 1.2 × 10⁵ W/m² for H = 2 and 1 mm, respectively. For pure water, the surface temperature for H = 1 and 0.5 mm is smaller than for H = 2 mm at moderate heat flux, while a symptom of heat transfer deterioration is observed at high heat fluxes. For the immiscible mixture, for H = 2 and 1 mm, the reduction of surface temperature compared to water is clear at moderate heat flux, where the surface temperature becomes higher in the order of H = 0.5, 2, and 1 mm. However, the reduction of surface temperature compared to water tends to disappear at high heat flux. It is clear from the results that the generation of flattened bubbles has inherent positive and negative effects on the heat transfer, and these effects are pronounced in narrow heated channels by the enlarged base area due to bubble deformation. The use of immiscible mixtures further emphasizes these trends, because the flattened bubbles of the more-volatile component promote the enhancement of heat transfer to the less-volatile liquid in the low and moderate heat flux regions and promote heat transfer deterioration in the high heat flux region.
In most cases, however, the substantial enhancement of heat transfer to the less-volatile liquid by the generation of bubbles from the more-volatile liquid cannot be reflected in the values of heat transfer coefficients defined by using the fluid temperature, which is lower than either of the saturation temperatures of the pure components. Compared to the flow boiling of immiscible mixtures in a tube of normal size, cf. Figure 1B, the heat transfer characteristics of immiscible mixtures in narrow channels are more sensitive to the composition.

Figure 15. Typical heat transfer performance of immiscible mixtures in flow boiling for different gap sizes of the rectangular channel.

## Conclusion

Experiments on flow boiling of an immiscible mixture of FC72 and water, as the more-volatile and the less-volatile components, respectively, in horizontal narrow rectangular channels of gap sizes 2, 1, and 0.5 mm were conducted. The following conclusions were obtained.

1. Most of the supplied heat is transferred to the more-volatile liquid of higher density as the latent heat of vaporization, once the temperature of the mixture increases to near its saturation temperature and boiling is initiated.

2. The generated flattened bubbles of the more-volatile component have a positive effect on the heat transfer due to forced convection to the less-volatile liquid through the contribution of nucleate boiling heat transfer to the more-volatile liquid, in addition to the increased liquid velocity and the agitation of the less-volatile liquid flow by the generation of bubbles.

3. In nucleate boiling heat transfer to the less-volatile component, the bubble generation from the more-volatile component substantially enhances the heat transfer by the extension of large flattened bubbles that leave a thin liquid film on the heating surface; however, the enhancement tends toward deterioration with the increase of heat flux by the extension of dry patches underneath these flattened bubbles.

4. Such a situation indicates the positive and negative effects of the more-volatile component on the heat transfer to the less-volatile component, depending on the heat flux level and the gap size. The heat transfer deterioration can be reduced or eliminated by the reduction of the flow rate of the more-volatile liquid, especially for smaller gaps. To obtain high heat transfer performance, there is an optimum composition of immiscible mixtures for each gap size, or an optimum gap size for a given composition of mixtures.

5. The low value of CHF inherent in the pure more-volatile component is easily increased by an additional small flow rate of the less-volatile liquid, while an increase of surface temperature from its level for the pure more-volatile component is unavoidable.

6. The relation of heat flux versus surface temperature is changeable by using immiscible mixtures. Following the requirements of the surface to be cooled, both the level of CHF and the level of surface temperature can be varied by the selection of composition and/or the combination of immiscible mixtures.

7. For a heated channel with extremely small gap size, a very small concentration of the more-volatile component or the use of a more-volatile component with lower volatility is needed to prevent the heat transfer deterioration caused by the excessive growth of flattened bubbles.
8. The heat transfer characteristics for boiling of immiscible mixtures in narrow channels are more sensitive to the composition than those in a tube of normal size.

As regards the four advantageous heat transfer characteristics of immiscible mixtures in pool boiling described in the Introduction, the following comments are possible for flow boiling in narrow channels. (i) An increase of CHF beyond that of the pure less-volatile component is almost impossible because of the excessive generation of flattened bubbles from the more-volatile liquid. (ii) The reduction of surface temperature is possible because of the equilibrium temperature lower than either of the saturation temperatures of the components. (iii) The substantial enhancement of heat transfer to the less-volatile liquid is possible by the generation of bubbles from the more-volatile liquid, except in the case of high heat flux. However, as in the case of pool boiling, the enhanced heat transfer cannot easily be reflected in the values of the heat transfer coefficient defined by the fluid temperature, which is kept lower, near the equilibrium temperature. (iv) An increase of system pressure is possible while keeping the liquid temperature lower than the saturation temperature of the less-volatile component. The three advantages (ii)–(iv) observed in pool boiling are also true for the flow boiling in the narrow rectangular channels tested here.

## Nomenclature

Bo      Bond number (–)
cp      Liquid isobaric specific heat (J/kg K)
di      Inner tube diameter or equivalent diameter (m)
Fr      Froude number (–)
G       Mass velocity (kg/m² s)
g       Gravitational acceleration (m/s²)
hfg     Latent heat of vaporization (J/kg)
H       Gap size of channel between plates (m)
P       Pressure (Pa)
q       Heat flux (W/m²)
T       Temperature (°C)
Te      Equilibrium temperature of immiscible liquids (°C)
Tsat    Saturation temperature of component (°C)
Tw      Temperature of heating surface (°C)
um      Mean velocity of liquid and vapor mixture (m/s)
V       Volumetric flow rate (m³/s)
We      Weber number (–)
x1      Quality for more-volatile component vapor only (–)
x       Quality for both component vapors (–)
z       Distance along the flow direction (m)

Greek symbols

α       Heat transfer coefficient (W/m² K)
ΔQ      Heat supplied between neighboring local positions (W)
ΔT      Temperature increment between neighboring local positions (K)
ΔTsub   Degree of subcooling (K)
ρ       Density (kg/m³)
ξ       Ratio of heat transferred to more-volatile component to the total (–)
σ       Surface tension (N/m)

Suffixes

1       More-volatile component
2       Less-volatile component
ave     Average value of both components
in      Inlet of heated test section
l       Liquid
out     Outlet of heated test section
total   Summation of values for more-volatile and less-volatile components
v       Vapor

## Author Contributions

YS designed and assembled the experimental equipment and analyzed data. DY conducted experiments and analysis of data. DF assisted the experiments and contributed to the preparation of the manuscript. HO produced the concept of the present research and supervised the manuscript.

## Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## Acknowledgments

The present research was conducted under the research program of JSPS KAKENHI (Grant-in-Aid for Challenging Exploratory Research) Grant Number JP15K13887. The authors appreciate the support.

## References
Abe, Y. (2005). “Heat management with phase change of self-rewetting fluids,” in Proceedings of the ASME 2005 International Mechanical Engineering Congress and Exposition, Orlando, FL, USA, 391–398. doi:10.1115/IMECE2005-79174

Abubakar, A., Al-Wahaibi, Y., Al-Wahaibi, T., Al-Hashmi, A., Al-Ajmi, A., and Eshrati, M. (2015). Effect of low interfacial tension on flow patterns, pressure gradients and holdups of medium-viscosity oil/water flow in horizontal pipe. Exp. Therm. Fluid Sci. 68, 58–67. doi:10.1016/j.expthermflusci.2015.02.017

Baba, S., Ohtani, N., Kawanami, O., Inoue, K., and Ohta, H. (2012). Experiments on dominant force regimes in flow boiling using mini-tubes. Front. Heat Mass Transf. 3:043002. doi:10.5098/hmt.v3.4.3002

Bonilla, C. F., and Eisenbuerg, A. A. (1948). Heat transmission to boiling binary mixtures. Ind. Eng. Chem. 40, 1113–1122. doi:10.1021/ie50462a026

Bragg, J. R., and Westwater, J. W. (1970). “Film boiling of immiscible liquid mixture on a horizontal plate. Heat transfer 1970,” in Proceedings of the 4th International Heat Transfer Conference, Paris-Versailles, France, Vol. 6, B7.1. Available at: http://www.ihtcdigitallibrary.com/conferences/0c7302a61c102806,00982fb204779f3b,177ff1d00afeafea.html

Brauner, N. (2003). “Liquid-liquid two-phase flow systems,” in Modelling and Experimentation in Two-Phase Flow, ed. V. Bertola, International Centre for Mechanical Sciences (Courses and Lectures), Vol. 450 (Vienna: Springer), 221–279.

Bulanov, N. V., and Gasanov, B. M. (2006). Peculiarities of boiling of emulsions with a low-boiling disperse phase. High Temp. 44, 267–282. doi:10.1007/s10740-006-0033-z

Filipczak, G., Troniewski, L., and Witczak, S. (2011). Pool boiling of liquid-liquid multiphase systems, evaporation, condensation and heat transfer. InTech 6, 123–150. doi:10.5772/24046

Fujita, Y., Ohta, H., Uchida, S., and Nishikawa, K. (1988). Nucleate boiling heat transfer and critical heat flux in narrow space between rectangular surfaces. Int. J. Heat Mass Transf. 31, 229–239. doi:10.1016/0017-9310(88)90004-X

Gorenflo, D., Gremer, F., Danger, E., and Luke, A. (2001). Pool boiling heat transfer to binary mixtures with miscibility gap: experimental results for a horizontal copper tube with 4.35 mm O.D. Exp. Therm. Fluid Sci. 25, 243–254. doi:10.1016/S0894-1777(01)00072-3

Hijikata, K., Mori, Y., and Ito, H. (1985). Experimental study on convective boiling of immiscible two-component mixture. Trans. Japan Soc. Mech. Eng. Ser. B 51, 1277–1284. (in Japanese). doi:10.1299/kikaib.51.1277

Kandlikar, S. G. (2006). “Flow boiling in minichannels and microchannels,” in Heat Transfer and Fluid Flow in Minichannels and Microchannels, Chap. 5, eds S. G. Kandlikar, S. Garimella, D. Li, S. Colin, and M. R. King (Elsevier), 175–226. doi:10.1016/B978-008044527-4/50007-4

Kandlikar, S. G., Colin, S., Peles, Y., Garimella, S., Pease, R. F., Brandner, J. J., et al. (2013). Heat transfer in microchannels – 2012 status and research needs. J. Heat Transf. 135, 091001-1–18. doi:10.1115/1.4024354

Kita, S., Ohnishi, S., Fukuyama, Y., and Ohta, H. (2014). “Improvement of nucleate boiling heat transfer characteristics by using immiscible mixtures,” in Proceedings of the 15th International Heat Transfer Conference, Kyoto, Japan, IHTC15-8941, 6261–6275. doi:10.1615/IHTC15.pbl.008941
Kobayashi, H., Ohtani, N., and Ohta, H. (2012). “Boiling heat transfer characteristics of immiscible liquid mixtures,” in Proceedings of the 9th International Conference on Heat Transfer, Fluid Mechanics and Thermodynamics, Malta, HEFAT2012, 771–776. Available at: https://repository.up.ac.za/dspace/bitstream/handle/2263/42977/kobayashi_boiling_2014.pdf?sequence=1

Lee, H. J., and Lee, S. Y. (2001). Heat transfer correlation for boiling flows in small rectangular horizontal channels with low aspect ratios. Int. J. Multiphase Flow 27, 2043–2062. doi:10.1016/S0301-9322(01)00054-4

Ohnishi, S., Ohta, H., Ohtani, N., Fukuyama, Y., and Kobayashi, H. (2013). Boiling heat transfer by nucleate boiling of immiscible liquids. Interfacial Phenom. Heat Transf. 1, 63–83. doi:10.1615/InterfacPhenomHeatTransfer.2013007205

Ohta, H. (2003). Microgravity heat transfer in flow boiling. Adv. Heat Transf. 37, 1–76. doi:10.1016/S0065-2717(03)37001-7

Ohta, H., Inoue, K., Ando, M., and Watanabe, K. (2009). Experimental investigation on observed scattering in heat transfer characteristics for flow boiling in a small diameter tube. Heat Transf. Eng. 30, 19–27. doi:10.1080/01457630802290080

Ohta, H., Iwata, K., Yamamoto, D., and Shinmoto, Y. (2015). “Superior heat transfer characteristics in boiling of immiscible mixtures,” in Proceedings of the 26th International Symposium on Transport Phenomena, Leoben, Austria, Vol. 89.

Ohta, H., Shinmoto, Y., Yamamoto, D., and Iwata, K. (2016). “Boiling of immiscible mixtures for cooling of electronics,” in Electronics Cooling, Chap. 2, ed. S. M. Sohel Murshed (InTech), 11–29. doi:10.5772/62341

Roesle, M. L., and Kulacki, F. A. (2012). An experimental study of boiling in dilute emulsions. Part A: heat transfer. Int. J. Heat Mass Transf. 55, 2160–2165. doi:10.1016/j.ijheatmasstransfer.2011.12.020

Sakai, T., Yoshii, S., Kajimoto, K., Kobayashi, H., Shinmoto, Y., and Ohta, H. (2010). “Heat transfer enhancement observed in nucleate boiling of alcohol aqueous solutions at very low concentration,” in Proceedings of the 14th International Heat Transfer Conference, Washington, DC, Vol. 2010. IHTC14-22737. doi:10.1115/IHTC14-22737

Shiina, K., and Sakaguchi, S. (1997). Boiling heat transfer characteristics in liquid-liquid direct contact parallel flow of immiscible liquid: heat transfer flow pattern and empirical correlation. Trans. Japan Soc. Mech. Eng. Ser. B 63, 970–978. (in Japanese). doi:10.1002/(SICI)1520-6556(1997)26:8<493:AID-HTJ1>3.0.CO;2-R

Sump, G. D., and Westwater, J. W. (1971). Boiling heat transfer from a tube to immiscible liquid-liquid mixtures. Int. J. Heat Mass Transf. 14, 767–779. doi:10.1016/0017-9310(71)90106-2

Van Stralen, S. J. D. (1956). Heat transfer to boiling binary liquid mixtures at atmospheric and subatmospheric pressures. Chem. Eng. Sci. 5, 290–296. doi:10.1016/0009-2509(56)80004-3

Vochten, R., and Petre, G. (1973). Study of the heat of reversible adsorption at the air-solution interface. J. Colloid Interface Sci. 42, 320–327. doi:10.1016/0021-9797(73)90295-6

Willingham, T. C., and Mudawar, I. (1992). Channel height effects on forced-convection boiling and critical heat flux from a linear array of discrete heat sources. Int. J. Heat Mass Transf. 35, 1865–1880. doi:10.1016/0017-9310(92)90190-4

Yamasaki, Y., Kita, S., Iwata, K., Shinmoto, Y., and Ohta, H. (2015). Heat transfer in boiling of immiscible mixtures. Interfacial Phenom. Heat Transf. 3, 19–39. doi:10.1615/InterfacPhenomHeatTransfer.2015012699
Keywords: immiscible mixture, flow boiling, narrow channel, narrow gap, heat transfer enhancement, heat transfer deterioration

Citation: Shinmoto Y, Yamamoto D, Fujii D and Ohta H (2017) Heat Transfer Characteristics during Boiling of Immiscible Liquids Flowing in Narrow Rectangular Heated Channels. Front. Mech. Eng. 3:16. doi:10.3389/fmech.2017.00016

Received: 09 March 2017; Accepted: 23 October 2017;
Published: 21 November 2017

Edited by:

Satish Kumar, Georgia Institute of Technology, United States

Reviewed by:

Alexander S. Rattner, Pennsylvania State University, United States
Amy Rachel Betz, Kansas State University, United States

Copyright: © 2017 Shinmoto, Yamamoto, Fujii and Ohta. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Haruhiko Ohta, [email protected]
[ null, "https://crossmark-cdn.crossref.org/widget/v2.0/logos/CROSSMARK_Color_square.svg", null, "https://loop.frontiersin.org/images/profile/418353/24", null, "https://f96a1a95aaa960e01625-a34624e694c43cdf8b40aa048a644ca4.ssl.cf2.rackcdn.com/Design/Images/newprofile_default_profileimage_new.jpg", null, "https://f96a1a95aaa960e01625-a34624e694c43cdf8b40aa048a644ca4.ssl.cf2.rackcdn.com/Design/Images/newprofile_default_profileimage_new.jpg", null, "https://loop.frontiersin.org/images/profile/161970/24", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-g001.gif", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-g002.gif", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-t001.gif", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-g004.gif", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-g005.gif", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-g006.gif", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-g007.gif", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-g008.gif", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-g009.gif", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-g010.gif", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-g011.gif", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-g012.gif", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-g013.gif", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-g014.gif", null, "https://www.frontiersin.org/files/Articles/264752/fmech-03-00016-HTML/image_t/fmech-03-00016-g015.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91138464,"math_prob":0.9190754,"size":86060,"snap":"2020-10-2020-16","text_gpt3_token_len":19261,"char_repetition_ratio":0.20487823,"word_repetition_ratio":0.1066394,"special_character_ratio":0.22244945,"punctuation_ratio":0.12017035,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.9533801,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,null,null,3,null,null,null,null,null,3,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-28T22:39:44Z\",\"WARC-Record-ID\":\"<urn:uuid:b9da8312-4ab6-429f-b052-2d7e98e9f4f3>\",\"Content-Length\":\"242901\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3def7ac8-1cd5-447a-be09-9a9f1e4187bc>\",\"WARC-Concurrent-To\":\"<urn:uuid:cd4f7f71-3ea5-44fd-8918-f7149f8b5e00>\",\"WARC-IP-Address\":\"134.213.70.247\",\"WARC-Target-URI\":\"https://www.frontiersin.org/articles/10.3389/fmech.2017.00016/full\",\"WARC-Payload-Digest\":\"sha1:A7APNRPGRCQASCQF342BCP3OI75IBRPX\",\"WARC-Block-Digest\":\"sha1:WSBE6MMW5DBAXR6V3RFTL5COQQTB3N2Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875147647.2_warc_CC-MAIN-20200228200903-20200228230903-00140.warc.gz\"}"}
http://karooza.net/attitude-estimation-using-imu-sensors
[ "", null, "# Attitude estimation using IMU sensors\n\nIn this post I want to talk a bit about the use of accelerometer and gyroscope sensors to perform attitude (roll, pitch and yaw) estimations. Systems that perform these kind of tasks are often referred to as AHRS (Attitude and Heading Reference System) and is commonly used where it is important to know the attitude and heading of an object. Some of the best examples for me are probably in the use of autopilot or flight stabilization controllers. But the applications for such systems are not limited to flight and can also be used on land, like for example in balancing robots, head tracking in virtual reality headsets or special video game controllers.\n\nWe will start by quickly looking at the basics of gyroscope and accelerometer sensors and how each one can be used to estimate tilt. Then we will get more practical and use a sensor breakout board together with an Arduino Uno to put the theory to the test. The sensor that we will be using is the popular MPU6050 which has both a 3-axis gyroscope and a 3-axis accelerometer on the same chip. These kind of chips are often referred to as an IMU (Inertia Measurement Unit) sensor and many times also includes a magnetometer (compass), like for the MPU9250. The term Degrees of Freedom (DoF) is also widely used to indicate the number of directions an IMU sensor can measure. The MPU6050 has 6 DoF consisting of 3 accelerometer and 3 gyroscope axes, if we added a 3 axis magnetometer (compass) we would have 9 DoF (this is the case for the MPU9250).\n\nSo let’s start by looking at these two types of sensors and the theory behind how they could be used.\n\n## Gyroscopes\n\nA gyroscope or just gyro in short is a type of sensor that measures angular velocity, or in other words how fast the sensor is turning around a certain axis (X,Y or Z). It therefore also makes sense that the output from a gyro is measured in degrees per second (°/sec) or sometimes also abbreviated to “dps”. One of the key things to always keep in mind is that a gyro measures velocity (speed) so if the sensor is not moving (turning around some axis) it will output a low value. In a perfect world a gyro at rest should output 0°/sec. and a gyro turning at a constant speed would output a constant degrees per second value.\n\nSo how can we use angular velocity to determine tilt? Well if we rotate an object around an axis at 5 degrees per second for 2 seconds then that object would be 10° rotated or tilted from it’s initial position. Or another example, if you rotated it at the same rate for 2 seconds in one direction and then another 2 seconds in the opposite direction then it would be 0° rotated. As you might have noticed, what we are doing here is to integrate the angular velocity around an axis over time and this gives us the tilt around that axis.\nIn the plot below I have rotated the sensor about 45° and then kept it in that position:", null, "What we see is an increase in the velocity as I start to rotate it and then as it gets closer to the desired angle the change in velocity decrease again. The end result is this peak that we see, but remember that although the reading went back to almost 0 again the sensor was still rotated at about 45°.\n\nLike we mentioned, to estimate the tilt we need to integrate the output of the sensor. To do that we would need to break the output up into discrete samples over time. 
In the above plot the sensor values were read every 100 ms (giving a sample rate of 10 Hz), so this is the time we need to use for each sample. All that is left to do now is to sum the product of each sensor reading and the sample time. In other words, take the first sample, multiply it by the sample time, then add the second sample multiplied by the sample time, and continue like that. If you keep your time units in seconds and the angular units in degrees then the result is the tilt in degrees. So let's do this with the plot; the table below gives the readings from sample 17 to 28 (the rest of the samples have a negligible influence on the result):

[Table: gyroscope readings in °/sec for samples 17 to 28; multiplying each reading by the 0.1 s sample time and summing the products gives the total rotation]

Seems like my aim was a bit off and I actually rotated it just over 49°.

Something to note is that in this example we sampled at 10 Hz, which may or may not be enough for your project. By sampling faster we can improve the accuracy of the estimated tilt, since the discrete blocks would more closely represent the actual rotation.

## Accelerometers

As the name of these sensors suggests, they measure acceleration and provide an output in meters per second per second (m/s/s) in a given direction. When a car stands still it has an acceleration of 0 m/s/s, and when that same car drives at a constant speed of 60 km/h it still has an acceleration of 0 m/s/s. The acceleration only changes when the car starts moving, or starts slowing down if it was previously moving.

How can we then use accelerometers to determine tilt? The answer lies in a force that we are all familiar with: gravity, which as we know pulls everything on earth towards it at a known rate of about 9.8 m/s/s or 1g ("g" is used to denote the acceleration force of gravity). If we have a sensor which can tell us the acceleration that it is experiencing in a given direction and point it towards the earth, it would read 1g. Pointing this sensor perpendicular to the direction of gravity would give us 0g.

So as we can see, by monitoring the effect of gravity on our sensor we can estimate the tilt of the sensor.

In the graph below I have rotated an accelerometer around its x-axis and plotted the result:

[Plot: accelerometer output in g during a full rotation around the x-axis; the curve resembles a sine function]

The response we see here closely resembles that of a sine function, so this means that by taking the inverse sine of the accelerometer output we can estimate the tilt. The sensor started in a horizontal position and read close to 0g; the inverse sine of 0 matches our expected tilt of 0°. As the sensor started to rotate, the output decreased until it reached a minimum of -1g when positioned vertically; again, the inverse sine of -1 matches our expected tilt of -90°. Then, continuing the rotation, the output increased back to 0 as it reached the horizontal position again. The same happens for the second half of the rotation, with only the values inverted.

This is pretty handy but we still have a problem: if the sensor outputs 0.2g we can calculate the tilt, but we don't know in which direction (forward or backwards) it is tilted. A second problem is that if you look closely at the previous plot you will notice that the same change in the accelerometer values will not always provide the same change in tilt. When the sensor is close to alignment with gravity (points 9 and 21) the changes in acceleration provide smaller tilt changes than when the sensor is perpendicular to gravity (points 1 and 15). This means the response of the tilt output is not completely linear. To solve this we need to add another axis to our calculations.
If this axis is following or leading the direction of tilt by 90° then we would have a much more linear response. This can be even further improved by adding a third axis, which then provides us with a linear response. I will skip the math of how the final formula is derived for now, and just provide the formulas that we can use to obtain roll and pitch:

pitch = atan2(-AcX, sqrt(AcY² + AcZ²))
roll = atan2(AcY, sqrt(AcX² + AcZ²))

(The result is in radians and is converted to degrees by multiplying with 180/π ≈ 57.2958.)

## Using the MPU6050

So now that we have some background, let's move on to the practical side and apply this to real sensors. For this discussion we will use a breakout board for the MPU6050 chip. As mentioned in the beginning, this chip contains both a 3-axis gyroscope and a 3-axis accelerometer. We will use an Arduino Uno board and communicate with it using the I2C protocol. Below is a picture of the breakout board we will use:

[Photo: MPU6050 breakout board]

Since the MPU6050 is a 3.3V device, the board contains a small regulator so that we can power it with 5V from the Uno. But what about the signal lines: if the MPU6050 runs on 3.3V and the Uno on 5V, would we not destroy the MPU6050 when connecting the I2C lines? To answer this we have to remember that I2C hardware can only pull the signal lines low, and therefore a pull-up resistor (typically 4.7k Ohm) is always required in the I2C circuit. The trick here is that the breakout board already has these pull-up resistors and they are connected to 3.3V. This means we can safely connect the I2C lines of our 3.3V MPU6050 breakout board to our 5V Uno.

We will wire the Uno and breakout board together like this:

[Wiring diagram: breakout VCC and GND to the Uno's 5V and GND, SDA and SCL to the Uno's I2C pins]

## Writing the code

The MPU6050 chip can be configured by writing directly to its registers, and measurement readings are obtained by reading directly from its registers. These operations on the registers happen over the I2C bus, and therefore we use the Arduino Wire library. Let's take a look at some basic code to get us up and running:

```cpp
/*
 * Tilt estimation using the MPU6050 example code
 * Jan Swanepoel
 * 2018
 */
#include <Wire.h>

const uint8_t MPU_addr         = 0x68; // I2C address of the MPU-6050
const uint8_t REG_PWR_MGMT_1   = 0x6B; // Power Management Register 1
const uint8_t REG_GYRO_CONFIG  = 0x1B; // Gyro scale configuration
const uint8_t REG_ACCEL_CONFIG = 0x1C; // Accelerometer scale configuration
const uint8_t REG_ACCEL_XOUT_H = 0x3B; // First byte of the sensor readings

// Container for the IMU data
struct senseValues
{
  float AcX; // Accelerometer X-axis reading
  float AcY; // Accelerometer Y-axis reading
  float AcZ; // Accelerometer Z-axis reading
  float Tmp; // Temperature reading
  float GyX; // Gyroscope X-axis reading
  float GyY; // Gyroscope Y-axis reading
  float GyZ; // Gyroscope Z-axis reading
} MPU_Data;

// Reads the contents of a register to the value parameter
void ReadRegister(uint8_t address, uint8_t* value, uint8_t len = 1)
{
  Wire.beginTransmission(MPU_addr);
  Wire.write(address);                            // Address to start reading from
  Wire.endTransmission(false);                    // Transmit bytes and keep connection alive
  Wire.requestFrom(MPU_addr, len, (uint8_t)true); // Request a total of len registers to be read next

  for (uint8_t i = 0; i < len; i++)
  {
    *value = Wire.read();
    value++;
  }
}

// Writes a value to the specified register
void WriteRegister(uint8_t address, uint8_t value)
{
  Wire.beginTransmission(MPU_addr); // Begin to setup I2C transmission
  Wire.write(address);              // Select register to write to
  Wire.write(value);                // Write value to register
  Wire.endTransmission(true);       // Transmit bytes
}

// Read all sensor data
void ReadSensors()
{
  // Calculate gyro conversion values for deg/sec
  // and accelerometer for g (page 31 of the Register Map datasheet)
  float GyroConversion = 131.0;
  float AccelConversion = 16384.0;

  // Each axis is read out as two separate bytes, one upper byte and one
  // lower. The upper byte is shifted 8 bits to the left and combined with
  // the lower through an OR operation. Finally the raw value is converted
  // to an actual reading by dividing with the conversion values.
  Wire.beginTransmission(MPU_addr);
  Wire.write(REG_ACCEL_XOUT_H);
  Wire.endTransmission(false);
  Wire.requestFrom(MPU_addr, 14, true);

  MPU_Data.AcX = ((Wire.read() << 8 | Wire.read()) / AccelConversion);
  MPU_Data.AcY = ((Wire.read() << 8 | Wire.read()) / AccelConversion);
  MPU_Data.AcZ = ((Wire.read() << 8 | Wire.read()) / AccelConversion);
  MPU_Data.Tmp = (Wire.read() << 8 | Wire.read()) / 340.00 + 36.53;
  MPU_Data.GyX = (Wire.read() << 8 | Wire.read()) / GyroConversion;
  MPU_Data.GyY = (Wire.read() << 8 | Wire.read()) / GyroConversion;
  MPU_Data.GyZ = (Wire.read() << 8 | Wire.read()) / GyroConversion;
}

void setup()
{
  // Initialize the I2C library
  Wire.begin();

  // Initialize the serial port
  Serial.begin(57600);

  // When the device is powered up it stays in sleep mode.
  // Wake the device up and enable sensors
  WriteRegister(REG_PWR_MGMT_1, 0x00);

  // Configure the gyro full scale range
  WriteRegister(REG_GYRO_CONFIG, 0x00);  // Set to 250 dps scale

  // Configure the accelerometer full scale range
  WriteRegister(REG_ACCEL_CONFIG, 0x00); // Set to 2G scale
}

void loop()
{
  // Read sensor values
  ReadSensors();

  // Print out the measurements to the serial port
  Serial.print(MPU_Data.AcX); Serial.print(",");
  Serial.print(MPU_Data.AcY); Serial.print(",");
  Serial.print(MPU_Data.AcZ); Serial.print(",");
  Serial.print(MPU_Data.Tmp); Serial.print(",");
  Serial.print(MPU_Data.GyX); Serial.print(",");
  Serial.print(MPU_Data.GyY); Serial.print(",");
  Serial.print(MPU_Data.GyZ);
  Serial.println();

  // Delay before restarting the loop
  delay(100);
}
```

I think the code is pretty easy to follow and I have also added extra comments to help. We basically just configure the chip in the setup() function and then regularly poll the registers that contain the latest readings in the loop() function. To keep the readings together, a struct was created with an instance called MPU_Data, which is updated on each ReadSensors() call. The output from the sensors is then written out to the serial port for us to view. Using the Serial Monitor from the Arduino IDE we can view the output, which should look something like this:

[Screenshot: Serial Monitor showing comma separated sensor values]

The different readings are comma separated and start with the accelerometer X, Y and Z axes, followed by the temperature and then the X, Y and Z gyroscope readings.

## Calibration

Looking at the results we obtained from the previous code, it kind of looks as expected but not completely correct. For example, why is the Z accelerometer reading closer to 0.6g when it should be 1g, and if the board is not moving why do the gyroscopes give values other than 0? The problem is that the output is not properly calibrated for our environment. The manufacturer will try to get it as close as possible, but it is still up to us in the end to make sure the sensor output values are calibrated. Calibration is also something we would need to perform regularly, as the environment where the sensors are used might change.

A basic gyro calibration can be performed by keeping the sensor in a fixed position while recording a couple of readings.
The average of these readings can then be used as an offset value to correct the gyroscope output. For the accelerometers we can use gravity as our reference, which allows us to calculate a scaling and offset value for each axis. To do this we have to record readings for each axis pointing towards gravity and away from gravity. With these values we then first calculate the offset:

offset = (min_value + max_value) / 2

Since we know that when the axis is aligned with gravity it should provide a reading of 1g, we can calculate a scaling factor:

scale = 1 / (max_value - offset)

And that's it; now equipped with these values we should be able to correct the sensor outputs. The way to apply them is to first subtract the offset value and then multiply with the scaling value.

To add this functionality to our code we first create a container for the calibration values:

```cpp
// Container for calibration values
struct senseCalibration
{
  float offset_AcX = 0.02;
  float offset_AcY = 0;
  float offset_AcZ = 0.08;
  float offset_GyX = 0;
  float offset_GyY = 0;
  float offset_GyZ = 0;
  float scale_AcX  = 2.02;
  float scale_AcY  = 2.06;
  float scale_AcZ  = 1.95;
} MPU_Cal;
```

Typically one would start off by having all offsets set to 0 and the scale factors set to 1, and then only update them after calibration. The values shown here were the calibration values I used, and yours would most likely be different.

Then we add three new functions, two for the different calibrations and one for applying the calibration values:

```cpp
// Perform gyroscope calibration routine
void CalibrateGyro()
{
  const uint8_t sampleCount = 10;

  // Average a number of readings while the sensor is held still
  for (int i = 0; i < sampleCount; i++)
  {
    ReadSensors();
    MPU_Cal.offset_GyX += MPU_Data.GyX;
    MPU_Cal.offset_GyY += MPU_Data.GyY;
    MPU_Cal.offset_GyZ += MPU_Data.GyZ;
    delay(50);
  }

  MPU_Cal.offset_GyX /= sampleCount;
  MPU_Cal.offset_GyY /= sampleCount;
  MPU_Cal.offset_GyZ /= sampleCount;
}

// Perform accelerometer calibration routine
void CalibrateAccel()
{
  float X_Max, X_Min;
  float Y_Max, Y_Min;
  float Z_Max, Z_Min;

  Serial.println("Accelerometer calibration...");
  Serial.println("Press enter to start");
  while (Serial.read() == -1);

  Serial.println("Place sensor Z-axis up and press enter");
  while (Serial.read() == -1);
  ReadSensors();
  Z_Max = MPU_Data.AcZ;

  Serial.println("Place sensor Z-axis down and press enter");
  while (Serial.read() == -1);
  ReadSensors();
  Z_Min = MPU_Data.AcZ;

  Serial.println("Place sensor Y-axis up and press enter");
  while (Serial.read() == -1);
  ReadSensors();
  Y_Max = MPU_Data.AcY;

  Serial.println("Place sensor Y-axis down and press enter");
  while (Serial.read() == -1);
  ReadSensors();
  Y_Min = MPU_Data.AcY;

  Serial.println("Place sensor X-axis up and press enter");
  while (Serial.read() == -1);
  ReadSensors();
  X_Max = MPU_Data.AcX;

  Serial.println("Place sensor X-axis down and press enter");
  while (Serial.read() == -1);
  ReadSensors();
  X_Min = MPU_Data.AcX;

  // Calculate X-axis offset and scale
  MPU_Cal.offset_AcX = (X_Min + X_Max) / 2.0;
  MPU_Cal.scale_AcX  = 1 / (X_Max - MPU_Cal.offset_AcX);

  // Calculate Y-axis offset and scale
  MPU_Cal.offset_AcY = (Y_Min + Y_Max) / 2.0;
  MPU_Cal.scale_AcY  = 1 / (Y_Max - MPU_Cal.offset_AcY);

  // Calculate Z-axis offset and scale
  MPU_Cal.offset_AcZ = (Z_Min + Z_Max) / 2.0;
  MPU_Cal.scale_AcZ  = 1 / (Z_Max - MPU_Cal.offset_AcZ);

  Serial.println("Corrections:");
  Serial.print("Acc X Offset "); Serial.println(MPU_Cal.offset_AcX);
  Serial.print("Acc X Scale  "); Serial.println(MPU_Cal.scale_AcX);
\"); Serial.println(MPU_Cal.offset_AcY); Serial.print(\"Acc Y Scale \"); Serial.println(MPU_Cal.scale_AcY); Serial.print(\"Acc Z Offset \"); Serial.println(MPU_Cal.offset_AcZ); Serial.print(\"Acc Z Scale \"); Serial.println(MPU_Cal.scale_AcZ); Serial.println(\"Press enter to end calibration\"); while(Serial.read() == -1); } // Apply calibration values void ApplyCalibration() { MPU_Data.AcX = (MPU_Data.AcX - MPU_Cal.offset_AcX) * MPU_Cal.scale_AcX; MPU_Data.AcY = (MPU_Data.AcY - MPU_Cal.offset_AcY) * MPU_Cal.scale_AcY; MPU_Data.AcZ = (MPU_Data.AcZ - MPU_Cal.offset_AcZ) * MPU_Cal.scale_AcZ; MPU_Data.GyX -= MPU_Cal.offset_GyX; MPU_Data.GyY -= MPU_Cal.offset_GyY; MPU_Data.GyZ -= MPU_Cal.offset_GyZ; }```\n\nWe would also then need to add the calibration functions to setup() and in loop() the function that applies the calibration values.\n\nTo test it, we should upload the new code and open the Serial Monitor window again. It would take about 500ms to get gyro calibration values and then ask us to turn the sensor in different orientations to get the accelerometer calibration values. When it’s done the calibration values will be displayed and the calibrated sensor values would be streamed as before and this time they should look much better:", null, "Looking at the output above we see that the gyroscope readings are now much closer to 0 and the Z axis accelerometer also reads almost exactly 1g as it should. We have the option now to leave the accelerometer calibration call in the setup() function, requiring us to go through the process each time we power the Arduino, or to write the calibration values into the code (senseCalibration structure) and comment the call out until we need it again.\n\n## Tilt estimations\n\nNow that we can get some accurate gyroscope and accelerometer readings we will create functions to convert them to tilts. Starting with the gyroscope, we previously talked about using integration to determine the tilt and will use the following function to achieve it:\n\n ```// Integration function void Integrate(float *output, float sampleValue, float sampleTime) { *output += (sampleValue * sampleTime); }```\n\nIt’s a basic function and just keeps adding the product of the measured value and the sample time. Although the rate at which the sensor provides values are in the kilohertz range we only poll it at 10Hz due to the 100ms delay in the loop() function. Therefore we can use a sample time of 0.1s and call the Integrate() function every cycle.\n\nGetting the tilt from our accelerometers is also very easy and we just need to apply the previously mentioned formula. The function below shows how this can be done:\n\n ```// Calculate tilt from accelerometer readings void AccelToTilt(float *output, uint8_t axis) { // For pitch axis should be 0 else roll will be calculated const double RadToDeg = 57.2958; if(axis == 0) *output = atan2(-MPU_Data.AcX, sqrt(pow(MPU_Data.AcY, 2.0) + pow(MPU_Data.AcZ, 2.0))) * RadToDeg; else *output = atan2(MPU_Data.AcY, sqrt(pow(MPU_Data.AcX, 2.0) + pow(MPU_Data.AcZ, 2.0))) * RadToDeg; }```\n\nHere we can choose to either calculate the roll or pitch.\n\nI have also added some extra outputs to the serial port to monitor all the different values. When running the code now we can see the tilt estimations from both the gyroscopes and accelerometers. 
To illustrate this more graphically we can use the Serial Plotter function in the Arduino IDE, which should produce something like this when we tilt the sensor:", null, "In this plot the raw sensor outputs were removed and only the tilt estimations are shown. Blue = GyroTiltX, Red = GyroTiltY, Green = GyroTiltZ, Orange = AccTiltY, and Purple = AccTiltX\n\n## Fusing it all together\n\nIf you play a bit with what we have achieved up to this point you will notice a couple of things. Firstly, the tilt estimation provided by the accelerometers seems more accurate but also more noisy and sensitive. When you look at the output over a short period of time you see a lot of small changes, or what is called high frequency noise. This means we should only trust the accelerometer tilt estimation over the long term. On the other hand, looking at the gyro tilt estimation you don't see this high frequency noise. But if you look at the output over a longer period of time you will notice another problem: the tilt slowly drifts in some direction. Here we have low frequency noise and should thus only trust it over the short term.", null, "Above we can see the noisy accelerometer tilts in orange and purple and the drifting gyroscope tilts in blue, red and green.\n\nLuckily there are a couple of ways to combine the two tilt estimations and get the best of both worlds. One such way is to use what is called a complementary filter. This type of filter allows us to fuse the high and low frequency inputs together to provide an overall better output. Below is the formula for a basic complementary filter:\n\ntilt = lfw * (pTilt + (lfInput * dt)) + (hfw * hfInput)\n\nwhere:\n\nlfw = low frequency weighting factor\npTilt = previous tilt estimation\nlfInput = low frequency input (gyroscope)\nhfw = high frequency weighting factor\nhfInput = high frequency input (accelerometer tilt)\n\nFor the filter we only use the gyroscope tilt change over the last sample; therefore lfInput is not the previously calculated gyroscope tilt estimation but the calibrated gyro rate output. By multiplying it by the sample time we get only the gyroscope tilt change over the last sample, and therefore the (lfInput * dt) part of the formula. 
The two weighting factors can be used to adjust how much influence we want to give to each input, but they should always add up to 1.\n\nTurning this into a function in our code:\n\n ```// Basic complementary filter void ComplementaryFilter(float *output, float lfInput, float hfInput, float sampleTime) { float lfWeight = 0.98; float hfWeight = 1.0 - lfWeight; // Calculate integral over one time step float lfInt = lfInput * sampleTime; // Filter *output = (lfWeight * (*output + lfInt)) + (hfWeight * hfInput); }```\n\nand then combining all the code:\n\n ```/* * Tilt estimation using the MPU6050 example code * Jan Swanepoel * 2018 */ #include <Wire.h> const uint8_t MPU_addr = 0x68; // I2C address of the MPU-6050 const uint8_t REG_PWR_MGMT_1 = 0x6B; // Power Management Register 1 const uint8_t REG_GYRO_CONFIG = 0x1B; // Gyro scale configuration const uint8_t REG_ACCEL_CONFIG = 0x1C; // Accelerometer scale configuration const uint8_t REG_ACCEL_XOUT_H = 0x3B; // First byte of the sensor readings // Container for the IMU data struct senseValues { float AcX; // Accelerometer X-axis reading float AcY; // Accelerometer Y-axis reading float AcZ; // Accelerometer Z-axis reading float Tmp; // Temperature reading float GyX; // Gyroscope X-axis reading float GyY; // Gyroscope Y-axis reading float GyZ; // Gyroscope Z-axis reading } MPU_Data; // Container for calibration values struct senseCalibration { float offset_AcX = 0.02; float offset_AcY = 0; float offset_AcZ = 0.08; float offset_GyX = 0; float offset_GyY = 0; float offset_GyZ = 0; float scale_AcX = 2.02; float scale_AcY = 2.06; float scale_AcZ = 1.95; } MPU_Cal; // Container for tilt estimations struct tiltEstimations { float tiltGX = 0; // Gyro Roll float tiltGY = 0; // Gyro Pitch float tiltGZ = 0; // Gyro Yaw float tiltAP = 0; // Accelerometer Pitch float tiltAR = 0; // Accelerometer Roll float tiltFP = 0; // Filter Pitch float tiltFR = 0; // Filter Roll } Tilt; // Reads the contents of a register to the value parameter void ReadRegister(uint8_t address, uint8_t* value, uint8_t len = 1) { Wire.beginTransmission(MPU_addr); Wire.write(address); // Address to start reading from Wire.endTransmission(false); // Transmit bytes and keep connection alive Wire.requestFrom(MPU_addr, len, (uint8_t)true); // Request a total of len registers to be read next for (uint8_t i = 0; i < len; i++) { *value = Wire.read(); value++; } } // Writes a value to the specified register void WriteRegister(uint8_t address, uint8_t value) { Wire.beginTransmission(MPU_addr); // Begin to setup I2C transmission Wire.write(address); // Select register to write to Wire.write(value); // Write value to register Wire.endTransmission(true); // Transmit bytes } // Read all sensor data void ReadSensors() { // Calculate gyro conversion value for deg/sec. and accelerometer for G // Page 31 of Register Map Datasheet. float GyroConversion = 131.0; float AccelConversion = 16384.0; // Each axis is read out as two separate bytes, one upper byte and one // lower. The upper byte is shifted 8 bits to the left and combined with // the lower through an OR operation. Finally the raw value is converted // to an actual reading by dividing by the conversion values. 
Wire.beginTransmission(MPU_addr); Wire.write(REG_ACCEL_XOUT_H); Wire.endTransmission(false); Wire.requestFrom(MPU_addr, 14, true); MPU_Data.AcX = ((Wire.read()<<8 | Wire.read()) / AccelConversion); MPU_Data.AcY = ((Wire.read()<<8 | Wire.read()) / AccelConversion); MPU_Data.AcZ = ((Wire.read()<<8 | Wire.read()) / AccelConversion); MPU_Data.Tmp = (Wire.read()<<8 | Wire.read()) / 340.00 + 36.53; MPU_Data.GyX = (Wire.read()<<8 | Wire.read()) / GyroConversion; MPU_Data.GyY = (Wire.read()<<8 | Wire.read()) / GyroConversion; MPU_Data.GyZ = (Wire.read()<<8 | Wire.read()) / GyroConversion; } // Perform gyroscope calibration routine void CalibrateGyro() { const uint8_t sampleCount = 10; for (int i = 0; i < sampleCount; i++) { ReadSensors(); MPU_Cal.offset_GyX += MPU_Data.GyX; MPU_Cal.offset_GyY += MPU_Data.GyY; MPU_Cal.offset_GyZ += MPU_Data.GyZ; delay(50); } MPU_Cal.offset_GyX /= sampleCount; MPU_Cal.offset_GyY /= sampleCount; MPU_Cal.offset_GyZ /= sampleCount; } // Perform accelerometer calibration routine void CalibrateAccel() { float X_Max, X_Min; float Y_Max, Y_Min; float Z_Max, Z_Min; Serial.println(\"Accelerometer calibration...\"); Serial.println(\"Press enter to start\"); while(Serial.read() == -1); Serial.println(\"Place sensor Z-axis up and press enter\"); while(Serial.read() == -1); ReadSensors(); Z_Max = MPU_Data.AcZ; Serial.println(\"Place sensor Z-axis down and press enter\"); while(Serial.read() == -1); ReadSensors(); Z_Min = MPU_Data.AcZ; Serial.println(\"Place sensor Y-axis up and press enter\"); while(Serial.read() == -1); ReadSensors(); Y_Max = MPU_Data.AcY; Serial.println(\"Place sensor Y-axis down and press enter\"); while(Serial.read() == -1); ReadSensors(); Y_Min = MPU_Data.AcY; Serial.println(\"Place sensor X-axis up and press enter\"); while(Serial.read() == -1); ReadSensors(); X_Max = MPU_Data.AcX; Serial.println(\"Place sensor X-axis down and press enter\"); while(Serial.read() == -1); ReadSensors(); X_Min = MPU_Data.AcX; // Calculate X-axis offset and scale MPU_Cal.offset_AcX = (X_Min + X_Max) / 2.0; MPU_Cal.scale_AcX = 1 / (X_Max - MPU_Cal.offset_AcX); // Calculate Y-axis offset and scale MPU_Cal.offset_AcY = (Y_Min + Y_Max) / 2.0; MPU_Cal.scale_AcY = 1 / (Y_Max - MPU_Cal.offset_AcY); // Calculate Z-axis offset and scale MPU_Cal.offset_AcZ = (Z_Min + Z_Max) / 2.0; MPU_Cal.scale_AcZ = 1 / (Z_Max - MPU_Cal.offset_AcZ); Serial.println(\"Corrections:\"); Serial.print(\"Acc X Offset \"); Serial.println(MPU_Cal.offset_AcX); Serial.print(\"Acc X Scale \"); Serial.println(MPU_Cal.scale_AcX); Serial.print(\"Acc Y Offset \"); Serial.println(MPU_Cal.offset_AcY); Serial.print(\"Acc Y Scale \"); Serial.println(MPU_Cal.scale_AcY); Serial.print(\"Acc Z Offset \"); Serial.println(MPU_Cal.offset_AcZ); Serial.print(\"Acc Z Scale \"); Serial.println(MPU_Cal.scale_AcZ); Serial.println(\"Press enter to end calibration\"); while(Serial.read() == -1); } // Apply calibration values void ApplyCalibration() { MPU_Data.AcX = (MPU_Data.AcX - MPU_Cal.offset_AcX) * MPU_Cal.scale_AcX; MPU_Data.AcY = (MPU_Data.AcY - MPU_Cal.offset_AcY) * MPU_Cal.scale_AcY; MPU_Data.AcZ = (MPU_Data.AcZ - MPU_Cal.offset_AcZ) * MPU_Cal.scale_AcZ; MPU_Data.GyX -= MPU_Cal.offset_GyX; MPU_Data.GyY -= MPU_Cal.offset_GyY; MPU_Data.GyZ -= MPU_Cal.offset_GyZ; } // Integration function void Integrate(float *output, float sampleValue, float sampleTime) { *output += (sampleValue * sampleTime); } // Calculate tilt from accelerometer readings void AccelToTilt(float *output, uint8_t axis) { // For pitch axis should be 0 else roll 
will be calculated const double RadToDeg = 57.2958; if(axis == 0) *output = atan2(-MPU_Data.AcX, sqrt(pow(MPU_Data.AcY, 2.0) + pow(MPU_Data.AcZ, 2.0))) * RadToDeg; else *output = atan2(MPU_Data.AcY, sqrt(pow(MPU_Data.AcX, 2.0) + pow(MPU_Data.AcZ, 2.0))) * RadToDeg; } // Basic complementary filter void ComplementaryFilter(float *output, float lfInput, float hfInput, float sampleTime) { float lfWeight = 0.98; float hfWeight = 1.0 - lfWeight; // Calculate integral over one time step float lfInt = lfInput * sampleTime; // Filter *output = (lfWeight * (*output + lfInt)) + (hfWeight * hfInput); } void setup() { // Initialize the I2C library Wire.begin(); // Initialize the serial port Serial.begin(57600); // When the device is powered up it stays in sleep mode. // Wake the device up and enable sensors WriteRegister(REG_PWR_MGMT_1, 0x00); // Configure the gyro full scale range WriteRegister(REG_GYRO_CONFIG, 0x00); // Set to 250 dps scale // Configure the accelerometer full scale range WriteRegister(REG_ACCEL_CONFIG, 0x00); // Set to 2G scale // Calibrate the gyroscope and accelerometer CalibrateGyro(); //CalibrateAccel(); } void loop() { // Read sensor values ReadSensors(); ApplyCalibration(); Integrate(&Tilt.tiltGX, MPU_Data.GyX, 0.1); Integrate(&Tilt.tiltGY, MPU_Data.GyY, 0.1); Integrate(&Tilt.tiltGZ, MPU_Data.GyZ, 0.1); AccelToTilt(&Tilt.tiltAP,0); AccelToTilt(&Tilt.tiltAR,1); ComplementaryFilter(&Tilt.tiltFP, MPU_Data.GyY, Tilt.tiltAP, 0.1); ComplementaryFilter(&Tilt.tiltFR, MPU_Data.GyX, Tilt.tiltAR, 0.1); // Print out the measurements to the serial port // Sensor data /* Serial.print(MPU_Data.AcX); Serial.print(\",\"); Serial.print(MPU_Data.AcY); Serial.print(\",\"); Serial.print(MPU_Data.AcZ); Serial.print(\",\"); Serial.print(MPU_Data.Tmp); Serial.print(\",\"); Serial.print(MPU_Data.GyX); Serial.print(\",\"); Serial.print(MPU_Data.GyY); Serial.print(\",\"); Serial.print(MPU_Data.GyZ); */ // Gyro tilt estimations Serial.print(\" \"); Serial.print(Tilt.tiltGX); Serial.print(\",\"); Serial.print(Tilt.tiltGY); // Serial.print(\",\"); // Serial.print(Tilt.tiltGZ); // Accelerometer tilt estimations Serial.print(\" \"); Serial.print(Tilt.tiltAP); Serial.print(\",\"); Serial.print(Tilt.tiltAR); // Filtered tilt estimations Serial.print(\" \"); Serial.print(Tilt.tiltFP); Serial.print(\",\"); Serial.print(Tilt.tiltFR); Serial.println(); // Delay before restarting the loop delay(100); }```\n\nThe code above can be uploaded to the Uno and then analyzed further with the Serial Monitor or Plotter tools. Displaying all values at the same time can be overwhelming, and it makes sense to comment out the ones that are not of interest. From the plots one can easily compare the different tilt estimations and see the effect of the complementary filter. I also suggest adjusting the weighting factors to see the effect this has.\n\nIn the plot below I have the gyroscope (blue and red) and accelerometer (green and orange) tilts together with the filtered tilts (purple and gray):", null, "Initially the filtered tilts take some time to “catch up”, but from there on one can see they produce an output less noisy than the accelerometers alone and with less drift than the gyroscopes alone. Best of both worlds!\n\nSo this brings us to the end of this post. If you are new to this topic I hope you learned something and can find it useful in your next project that requires attitude awareness. The code and processes shown can be optimized and improved, but I tried to keep it as simple as possible. 
If you find any mistakes or have additional questions please let me know in the comments.\n\nA note regarding the MPU6050:\nThe above code provides a starting point for getting some readings from the sensor that can be used to estimate attitude. But one should really spend some time with the datasheet to get familiar with all the features of this chip. For example, in the code provided we continually polled the registers for new readings. Another option is to configure the chip's interrupt pin to trigger when new data is available. We have really just scratched the surface of what this chip can do, and did not even look at the onboard DMP (Digital Motion Processor) features.\n\n## One thought on “Attitude estimation using IMU sensors”\n\n1.", null, "Ed says:\n\nGood description. Thanks!" ]
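The complementary filter formula from the post is easy to experiment with away from the hardware. Below is a minimal Python simulation (my own sketch, not part of the original post, which uses Arduino C++): it applies tilt = lfw * (pTilt + lfInput * dt) + hfw * hfInput to synthetic data, where the gyro bias and accelerometer noise levels are assumed values chosen only for illustration.

```
import random

dt = 0.1                      # 10 Hz loop, matching the 100 ms delay in loop()
lf_weight = 0.98              # trust in the gyroscope (low-frequency input)
hf_weight = 1.0 - lf_weight   # trust in the accelerometer (high-frequency input)

true_tilt = 0.0    # degrees; the board is held still in this simulation
gyro_bias = 0.5    # deg/s of uncorrected drift (assumed value)
accel_noise = 2.0  # deg of high-frequency accelerometer noise (assumed value)

gyro_tilt = 0.0    # pure integration of the gyro rate (drifts over time)
fused_tilt = 0.0   # complementary-filter output

for step in range(101):
    gyro_rate = gyro_bias + random.gauss(0.0, 0.05)          # drifting rate signal
    accel_tilt = true_tilt + random.gauss(0.0, accel_noise)  # noisy tilt signal

    gyro_tilt += gyro_rate * dt  # plain integration, as in Integrate()
    # The post's formula: tilt = lfw * (pTilt + lfInput * dt) + hfw * hfInput
    fused_tilt = lf_weight * (fused_tilt + gyro_rate * dt) + hf_weight * accel_tilt

    if step % 25 == 0:
        print(f"t={step * dt:4.1f}s  gyro={gyro_tilt:6.2f}  "
              f"accel={accel_tilt:6.2f}  fused={fused_tilt:6.2f}")
```

Running it shows the pure gyro integration drifting away over time while the fused estimate stays near the true tilt, mirroring the plots described in the post.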
[ null, "http://karooza.net/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "https://i1.wp.com/karooza.net/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "https://i1.wp.com/karooza.net/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "https://i1.wp.com/karooza.net/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "https://i1.wp.com/karooza.net/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "https://i1.wp.com/karooza.net/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "https://i1.wp.com/karooza.net/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "https://i1.wp.com/karooza.net/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "https://i1.wp.com/karooza.net/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "https://i1.wp.com/karooza.net/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "https://i1.wp.com/karooza.net/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "https://i1.wp.com/karooza.net/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "http://karooza.net/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7508013,"math_prob":0.9514233,"size":33689,"snap":"2020-45-2020-50","text_gpt3_token_len":8351,"char_repetition_ratio":0.17316313,"word_repetition_ratio":0.35199076,"special_character_ratio":0.2559886,"punctuation_ratio":0.15624505,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9861126,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-28T20:57:37Z\",\"WARC-Record-ID\":\"<urn:uuid:903609d8-a1cd-4bab-a29d-8ac85e823890>\",\"Content-Length\":\"102875\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:22a7c527-7928-4b40-96f6-3dcd1956f90d>\",\"WARC-Concurrent-To\":\"<urn:uuid:8cfe9545-c196-461f-ab7b-f91a8259d748>\",\"WARC-IP-Address\":\"217.160.0.111\",\"WARC-Target-URI\":\"http://karooza.net/attitude-estimation-using-imu-sensors\",\"WARC-Payload-Digest\":\"sha1:OREXIW3AIEFRXJQ6XSCX6BAEBIK2YZIU\",\"WARC-Block-Digest\":\"sha1:4FYSXCPUJCQVQ7SCTPNZHWDPHVET52MR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141195745.90_warc_CC-MAIN-20201128184858-20201128214858-00327.warc.gz\"}"}
http://ixtrieve.fh-koeln.de/birds/litie/document/8556
[ "# Document (#8556)\n\nAuthor\nChan, L.M.\nTitle\nCataloging and classification : an introduction\nIssue\n2nd ed.\nImprint\nNew York : McGraw-Hill\nYear\n1994\nPages\nXXII,519 S\nIsbn\n0-07-010506-5\nFootnote\n1st ed. 1981\nTheme\nGrundlagen u. Einführungen: Allgemeine Literatur\nFormalerschließung\n\n## Similar documents (author)\n\n1. Chan, L.M.: Year's work in cataloging and classification : 1975 (1976) 4.56\n```4.5562925 = sum of:\n4.5562925 = weight(author_txt:chan in 307) [ClassicSimilarity], result of:\n4.5562925 = fieldWeight in 307, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.2900677 = idf(docFreq=78, maxDocs=42596)\n0.625 = fieldNorm(doc=307)\n```\n2. Chan, L.M.: 'American poetry' but 'Satire, American' : the direct and inverted forms of subject headings containing national adjectives (1973) 4.56\n```4.5562925 = sum of:\n4.5562925 = weight(author_txt:chan in 382) [ClassicSimilarity], result of:\n4.5562925 = fieldWeight in 382, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.2900677 = idf(docFreq=78, maxDocs=42596)\n0.625 = fieldNorm(doc=382)\n```\n3. Chan, L.M.: Library of Congress Classification as an online retrieval tool : potentials and limitations (1986) 4.56\n```4.5562925 = sum of:\n4.5562925 = weight(author_txt:chan in 1145) [ClassicSimilarity], result of:\n4.5562925 = fieldWeight in 1145, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.2900677 = idf(docFreq=78, maxDocs=42596)\n0.625 = fieldNorm(doc=1145)\n```\n4. Chan, L.M.: Library of Congress class numbers in online catalog searching (1989) 4.56\n```4.5562925 = sum of:\n4.5562925 = weight(author_txt:chan in 1146) [ClassicSimilarity], result of:\n4.5562925 = fieldWeight in 1146, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.2900677 = idf(docFreq=78, maxDocs=42596)\n0.625 = fieldNorm(doc=1146)\n```\n5. Chan, L.M.: Dewey 18: another step in an evolutionary step (1972) 4.56\n```4.5562925 = sum of:\n4.5562925 = weight(author_txt:chan in 1780) [ClassicSimilarity], result of:\n4.5562925 = fieldWeight in 1780, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.2900677 = idf(docFreq=78, maxDocs=42596)\n0.625 = fieldNorm(doc=1780)\n```\n\n## Similar documents (content)\n\n1. 
Soltani, P.: Historical aspects of cataloging and classification in Iran (2002) 1.15\n```1.1525826 = sum of:\n1.1525826 = sum of:\n0.19972138 = weight(abstract_txt:classification in 489) [ClassicSimilarity], result of:\n0.19972138 = score(doc=489,freq=3.0), product of:\n0.36904466 = queryWeight, product of:\n3.9994013 = idf(docFreq=2121, maxDocs=42596)\n0.09227498 = queryNorm\n0.54118484 = fieldWeight in 489, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n3.9994013 = idf(docFreq=2121, maxDocs=42596)\n0.078125 = fieldNorm(doc=489)\n0.49589637 = weight(abstract_txt:cataloging in 489) [ClassicSimilarity], result of:\n0.49589637 = score(doc=489,freq=5.0), product of:\n0.57074255 = queryWeight, product of:\n1.2435998 = boost\n4.9736547 = idf(docFreq=800, maxDocs=42596)\n0.09227498 = queryNorm\n0.86886173 = fieldWeight in 489, product of:\n2.236068 = tf(freq=5.0), with freq of:\n5.0 = termFreq=5.0\n4.9736547 = idf(docFreq=800, maxDocs=42596)\n0.078125 = fieldNorm(doc=489)\n0.45696497 = weight(abstract_txt:introduction in 489) [ClassicSimilarity], result of:\n0.45696497 = score(doc=489,freq=2.0), product of:\n0.733525 = queryWeight, product of:\n1.409834 = boost\n5.638492 = idf(docFreq=411, maxDocs=42596)\n0.09227498 = queryNorm\n0.62297124 = fieldWeight in 489, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.638492 = idf(docFreq=411, maxDocs=42596)\n0.078125 = fieldNorm(doc=489)\n```\n2. Wynar, B.S.; Taylor, A.G.; Miller, D.P.: Introduction to cataloging and classification (2006) 1.02\n```1.0158174 = sum of:\n1.0158174 = sum of:\n0.1153092 = weight(abstract_txt:classification in 3054) [ClassicSimilarity], result of:\n0.1153092 = score(doc=3054,freq=1.0), product of:\n0.36904466 = queryWeight, product of:\n3.9994013 = idf(docFreq=2121, maxDocs=42596)\n0.09227498 = queryNorm\n0.31245324 = fieldWeight in 3054, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.9994013 = idf(docFreq=2121, maxDocs=42596)\n0.078125 = fieldNorm(doc=3054)\n0.4435432 = weight(abstract_txt:cataloging in 3054) [ClassicSimilarity], result of:\n0.4435432 = score(doc=3054,freq=4.0), product of:\n0.57074255 = queryWeight, product of:\n1.2435998 = boost\n4.9736547 = idf(docFreq=800, maxDocs=42596)\n0.09227498 = queryNorm\n0.7771336 = fieldWeight in 3054, product of:\n2.0 = tf(freq=4.0), with freq of:\n4.0 = termFreq=4.0\n4.9736547 = idf(docFreq=800, maxDocs=42596)\n0.078125 = fieldNorm(doc=3054)\n0.45696497 = weight(abstract_txt:introduction in 3054) [ClassicSimilarity], result of:\n0.45696497 = score(doc=3054,freq=2.0), product of:\n0.733525 = queryWeight, product of:\n1.409834 = boost\n5.638492 = idf(docFreq=411, maxDocs=42596)\n0.09227498 = queryNorm\n0.62297124 = fieldWeight in 3054, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.638492 = idf(docFreq=411, maxDocs=42596)\n0.078125 = fieldNorm(doc=3054)\n```\n3. 
Taylor, A.G.: ¬The organization of information (1999) 0.85\n```0.84955966 = sum of:\n0.84955966 = sum of:\n0.19568618 = weight(abstract_txt:classification in 2633) [ClassicSimilarity], result of:\n0.19568618 = score(doc=2633,freq=2.0), product of:\n0.36904466 = queryWeight, product of:\n3.9994013 = idf(docFreq=2121, maxDocs=42596)\n0.09227498 = queryNorm\n0.53025067 = fieldWeight in 2633, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.9994013 = idf(docFreq=2121, maxDocs=42596)\n0.09375 = fieldNorm(doc=2633)\n0.26612592 = weight(abstract_txt:cataloging in 2633) [ClassicSimilarity], result of:\n0.26612592 = score(doc=2633,freq=1.0), product of:\n0.57074255 = queryWeight, product of:\n1.2435998 = boost\n4.9736547 = idf(docFreq=800, maxDocs=42596)\n0.09227498 = queryNorm\n0.46628013 = fieldWeight in 2633, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.9736547 = idf(docFreq=800, maxDocs=42596)\n0.09375 = fieldNorm(doc=2633)\n0.38774762 = weight(abstract_txt:introduction in 2633) [ClassicSimilarity], result of:\n0.38774762 = score(doc=2633,freq=1.0), product of:\n0.733525 = queryWeight, product of:\n1.409834 = boost\n5.638492 = idf(docFreq=411, maxDocs=42596)\n0.09227498 = queryNorm\n0.5286086 = fieldWeight in 2633, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.638492 = idf(docFreq=411, maxDocs=42596)\n0.09375 = fieldNorm(doc=2633)\n```\n4. Electronic cataloging : AACR2 and metadata for serials and monographs (2003) 0.79\n```0.7928041 = sum of:\n0.7928041 = sum of:\n0.09784309 = weight(abstract_txt:classification in 4083) [ClassicSimilarity], result of:\n0.09784309 = score(doc=4083,freq=2.0), product of:\n0.36904466 = queryWeight, product of:\n3.9994013 = idf(docFreq=2121, maxDocs=42596)\n0.09227498 = queryNorm\n0.26512533 = fieldWeight in 4083, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.9994013 = idf(docFreq=2121, maxDocs=42596)\n0.046875 = fieldNorm(doc=4083)\n0.42078203 = weight(abstract_txt:cataloging in 4083) [ClassicSimilarity], result of:\n0.42078203 = score(doc=4083,freq=10.0), product of:\n0.57074255 = queryWeight, product of:\n1.2435998 = boost\n4.9736547 = idf(docFreq=800, maxDocs=42596)\n0.09227498 = queryNorm\n0.73725367 = fieldWeight in 4083, product of:\n3.1622777 = tf(freq=10.0), with freq of:\n10.0 = termFreq=10.0\n4.9736547 = idf(docFreq=800, maxDocs=42596)\n0.046875 = fieldNorm(doc=4083)\n0.27417898 = weight(abstract_txt:introduction in 4083) [ClassicSimilarity], result of:\n0.27417898 = score(doc=4083,freq=2.0), product of:\n0.733525 = queryWeight, product of:\n1.409834 = boost\n5.638492 = idf(docFreq=411, maxDocs=42596)\n0.09227498 = queryNorm\n0.37378275 = fieldWeight in 4083, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.638492 = idf(docFreq=411, maxDocs=42596)\n0.046875 = fieldNorm(doc=4083)\n```\n5. 
Zhanghua, M.: ¬The education of cataloging and classification in China (2005) 0.74\n```0.74461603 = sum of:\n0.74461603 = sum of:\n0.19972138 = weight(abstract_txt:classification in 751) [ClassicSimilarity], result of:\n0.19972138 = score(doc=751,freq=3.0), product of:\n0.36904466 = queryWeight, product of:\n3.9994013 = idf(docFreq=2121, maxDocs=42596)\n0.09227498 = queryNorm\n0.54118484 = fieldWeight in 751, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n3.9994013 = idf(docFreq=2121, maxDocs=42596)\n0.078125 = fieldNorm(doc=751)\n0.2217716 = weight(abstract_txt:cataloging in 751) [ClassicSimilarity], result of:\n0.2217716 = score(doc=751,freq=1.0), product of:\n0.57074255 = queryWeight, product of:\n1.2435998 = boost\n4.9736547 = idf(docFreq=800, maxDocs=42596)\n0.09227498 = queryNorm\n0.3885668 = fieldWeight in 751, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.9736547 = idf(docFreq=800, maxDocs=42596)\n0.078125 = fieldNorm(doc=751)\n0.32312304 = weight(abstract_txt:introduction in 751) [ClassicSimilarity], result of:\n0.32312304 = score(doc=751,freq=1.0), product of:\n0.733525 = queryWeight, product of:\n1.409834 = boost\n5.638492 = idf(docFreq=411, maxDocs=42596)\n0.09227498 = queryNorm\n0.4405072 = fieldWeight in 751, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.638492 = idf(docFreq=411, maxDocs=42596)\n0.078125 = fieldNorm(doc=751)\n```" ]
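The score explanations in this record all follow Lucene's ClassicSimilarity. As an illustration (this sketch is mine, not part of the catalogue record), the repeated pattern fieldWeight = tf × idf × fieldNorm, with tf = sqrt(freq) and idf = ln(maxDocs / (docFreq + 1)) + 1, can be reproduced in a few lines of Python:

```
import math

def idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = ln(maxDocs / (docFreq + 1)) + 1
    return math.log(max_docs / (doc_freq + 1)) + 1.0

def field_weight(freq, doc_freq, max_docs, field_norm):
    # fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
    return math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm

# Reproduces 4.5562925 = 1.0 * 7.2900677 * 0.625 from the author-similarity
# dumps above (freq=1, docFreq=78, maxDocs=42596, fieldNorm=0.625):
print(field_weight(1.0, 78, 42596, 0.625))
```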
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.68931377,"math_prob":0.99692523,"size":8887,"snap":"2020-10-2020-16","text_gpt3_token_len":3325,"char_repetition_ratio":0.19272768,"word_repetition_ratio":0.47144154,"special_character_ratio":0.5146844,"punctuation_ratio":0.2836983,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999824,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-24T21:05:33Z\",\"WARC-Record-ID\":\"<urn:uuid:19129dc0-9eb9-40a3-aed4-35ecdf38b8f7>\",\"Content-Length\":\"18282\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e609df4a-c2dd-45ca-b849-72f4f489e059>\",\"WARC-Concurrent-To\":\"<urn:uuid:c25a8045-4326-4550-afdb-c66324479307>\",\"WARC-IP-Address\":\"139.6.160.6\",\"WARC-Target-URI\":\"http://ixtrieve.fh-koeln.de/birds/litie/document/8556\",\"WARC-Payload-Digest\":\"sha1:ZRMUCMVNXY7UFRZG5CDHCVZT5BY2VV7W\",\"WARC-Block-Digest\":\"sha1:4B6V2XD5OQHNPJ2PV56VLC7RWDGXVU4T\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145981.35_warc_CC-MAIN-20200224193815-20200224223815-00244.warc.gz\"}"}
http://forums.wolfram.com/mathgroup/archive/2009/Aug/msg00376.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Re: Create jpg image files of mathematical equations\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg102549] Re: Create jpg image files of mathematical equations\n• From: asdf qwerty <bradc355113 at yahoo.com>\n• Date: Thu, 13 Aug 2009 03:21:43 -0400 (EDT)\n• References: <h5okvi\\$sn\\[email protected]> <h5r8a5\\$l41\\[email protected]>\n\n```I think the following should be close to what you want. Note that png\nfiles will be both smaller and higher quality than jpg files -- it's\nan easy change (see comment in the code for what to change).\n\nmakePicture[\nop_, m_, n_, showAns : (True | False),\nfont_String, fontSize_, bkcolor_\n] :=\nExport[\nFileNameJoin[{ (* construct filename *)\nNotebookDirectory[],(* output in same dir as current notebook *)\nStringJoin @@\nFlatten[{\nToString[op],\nIntegerString[#, 10, 2] & /@ {m, n, op[m, n]},\nToString[showAns],\n\".jpg\" (* \".png\" *)\n}]\n}],\nStyle[ (* format equation *)\nHoldForm[op[m, n]] ==\nIf[showAns, op[m, n], Style[op[m, n], bkcolor]]\n],\nfontSize,\nFontFamily -> font,\nBackground -> bkcolor\n]\n]\n\nDo[makePicture[Plus, m, n, ans, \"Arial\", 72, LightGray],\n{m, 1, 3}, {n, 1, 3}, {ans, {True, False}}]\n\nDo[makePicture[Subtract, m, n, ans, \"Arial\", 72, LightGray],\n{m, 1, 4}, {n, 1, m}, {ans, {True, False}}]\n\nDo[makePicture[Times, m, n, ans, \"Arial\", 72, LightGray],\n{m, 1, 3}, {n, 1, 3}, {ans, {True, False}}]\n\nDo[makePicture[Divide, m, n, ans, \"Arial\", 72, LightGray],\n{m, 1, 4}, {n, Divisors[m]}, {ans, {True, False}}]\n\nOn Aug 12, 1:32 am, Diana <diana.me... at gmail.com> wrote:\n> David,\n>\n> Thank you for explaining how to do the export with ExpressionCell.\n>\n> I have a two part followup:\n>\n> 1) I want to do hundreds of these files, and would like to use a table\n> and variables which change. Is it possible to use \"m\" and \"n\" within\n> the ExpressionCell[Defer[1 + 91}, for example, so that I don't have to\n> rewrite each export statement?\n>\n> 2) I would like to make the file names dynamic. For example, if I am\n> adding 1 + 91 = 92, I would like to name the file 01019192.jpg, where\n> the leading 01 stands for addition, and the remaining string 019192\n> stand for 1, 91, and the sum, respectively.\n>\n> As an example, I would like to be able to write one statement to\n> create 14 jpg files for the \"14's\" times tables, 1 x 14 = 14, 2 x 14 =\n= 28, ..., 14 x 14 = 196.\n>\n> Thank you for your time,\n>\n> Diana M.\n>\n> On Aug 11, 12:57 am, David Reiss <dbre... at gmail.com> wrote:\n>\n> > Here are two examples to get you started (of course the paths to the\n> > files need to be changed for your system)....\n>\n> > Export[\"/Users/dreiss/Desktop/MyEquation.jpg\",\n> > ExpressionCell[Defer[1 + 91], \"Input\", FontSize -> 30,\n> > Background -> LightGray]]\n>\n> > Export[\"/Users/dreiss/Desktop/MyEquation.jpg\",\n> > ExpressionCell[Defer[1 + 91 == #] &[1 + 91], \"Input\", FontSize -=\n> 30,\n> > Background -> LightGray]]\n>\n> > Hope this helps,\n>\n> > Davidhttp://www.scientificarts.com/worklife/\n>\n> > On Aug 10, 4:15 am, Diana <diana.me... 
at gmail.com> wrote:\n>\n> > > Hi all,\n>\n> > > I want to quickly create many jpg image files of math facts through 24,\n> > > such as\n>\n> > > 12 + 12 = 24\n> > > 30 - 15 = 15\n> > > 3 x 4 = 12\n>\n> > > I would like to create the files with and without answers, with very\n> > > large font, and with an option to choose font and background colors.\n>\n> > > Can someone explain how to export equations, with special characters\n> > > for +, -, * and /?\n>\n> > > Thank you,\n>\n> > > Diana\n\n```" ]
[ null, "http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif", null, "http://forums.wolfram.com/mathgroup/images/head_archive.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/2.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/9.gif", null, "http://forums.wolfram.com/mathgroup/images/search_archive.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7734033,"math_prob":0.6536539,"size":3436,"snap":"2019-35-2019-39","text_gpt3_token_len":1118,"char_repetition_ratio":0.10780886,"word_repetition_ratio":0.116719246,"special_character_ratio":0.38824216,"punctuation_ratio":0.25609756,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9552677,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-18T08:46:45Z\",\"WARC-Record-ID\":\"<urn:uuid:93e49890-54d3-4ab4-8fac-b4e26e1a01e9>\",\"Content-Length\":\"45863\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:75d78346-d04f-4960-aa70-85123be31bbc>\",\"WARC-Concurrent-To\":\"<urn:uuid:d307695a-1adc-42ff-bc8a-767d50c49193>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2009/Aug/msg00376.html\",\"WARC-Payload-Digest\":\"sha1:TADOW77DTS235SNDS2K4PNPQMHYC6LLG\",\"WARC-Block-Digest\":\"sha1:DLZXHBNCHZ4NKFGIOANTWNN6U4BVCLF3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573258.74_warc_CC-MAIN-20190918065330-20190918091330-00084.warc.gz\"}"}
http://talks.cam.ac.uk/talk/index/31019
[ "# On symplectic hypersurfaces\n\n•", null, "Lehn, M (Mainz)\n•", null, "Thursday 28 April 2011, 15:30-16:30\n•", null, "Seminar Room 1, Newton Institute.\n\nModuli Spaces\n\nThe Grothendieck-Brieskorn-Slodowy theorem explains a relation between ADE -surface singularities \\$X\\$ and simply laced simple Lie algebras \\$g\\$ of the same Dynkin type: Let \\$S\\$ be a slice in \\$g\\$ to the subregular orbit in the nilpotent cone \\$N\\$. Then \\$X\\$ is isomorphic to \\$S\u0001p N\\$. Moreover, the restriction of the characteristic map \\$\bi:g o g//G\\$ to \\$S\\$ is the semiuniversal deformation of \\$X\\$. We (j.w. Namikawa and Sorger) show that the theorem remains true for all non-regular nilpotent orbits if one considers Poisson deformations only. The situation is more complicated for non-simply laced Lie algebras.\n\nIt is expected that holomorphic symplectic hypersurface singularities are rare. Besides the ubiquitous ADE -singularities we describe a four-dimensional series of examples and one six-dimensional example. They arise from slices to nilpotent orbits in Liealgebras of type \\$C_n\\$ and \\$G_2\\$.\n\nThis talk is part of the Isaac Newton Institute Seminar Series series." ]
[ null, "http://talks.cam.ac.uk/images/user.jpg", null, "http://talks.cam.ac.uk/images/clock.jpg", null, "http://talks.cam.ac.uk/images/house.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81147677,"math_prob":0.90953034,"size":1984,"snap":"2021-43-2021-49","text_gpt3_token_len":455,"char_repetition_ratio":0.07878788,"word_repetition_ratio":0.0069204154,"special_character_ratio":0.18548387,"punctuation_ratio":0.07098766,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9564961,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-25T01:51:51Z\",\"WARC-Record-ID\":\"<urn:uuid:56955086-d39d-4a2c-909b-e922ea93c7e4>\",\"Content-Length\":\"12871\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:36d51648-1eb1-4f59-969c-1197ac414838>\",\"WARC-Concurrent-To\":\"<urn:uuid:df6e940d-5f10-415b-9f12-7a7fabec69a5>\",\"WARC-IP-Address\":\"131.111.150.181\",\"WARC-Target-URI\":\"http://talks.cam.ac.uk/talk/index/31019\",\"WARC-Payload-Digest\":\"sha1:RD33ROMGDMGEG4ZXH7LO5WXP7YTVPULE\",\"WARC-Block-Digest\":\"sha1:6RUAILKI2WGDW5RDH5WASP5KBRJMQ5RI\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587608.86_warc_CC-MAIN-20211024235512-20211025025512-00126.warc.gz\"}"}
https://carsonfarmer.com/2009/10/community-structure-in-directed-weighted-networks/
[ "Community structure in directed, weighted networks\n\nTue 20 October 2009\n\nMany natural and human systems can be represented as networks, including the Internet, social interactions, food webs, and transportation and communication flows. One thing that these types of networks have in common, is that they can each be represented as a series of vertices (or nodes) and edges (or links). This blog entry presents a nice description of networks, highlighting the differences between various network types (directed, undirected, weighted, unweighted, etc.).\n\nAccording to this paper, many networks are found to display “community structure”, which basically refers to groupings of vertices where within-group edge connections are more dense than between-group edge connections. In order to detect and delineate these groupings, Leicht & Newman (2008) present a nice “modularity” optimisation algorithm which is designed to find a “good” division of a network by maximising\n\n$$Q = \\frac{1}{2m}s^TB_s,$$\n\nwhere $$s$$ is a vector whose elements define which group each node belongs to, and $$\\mathbf{B}$$ is the so-called modularity matrix, with elements\n\n$$B_{ij} = A_{ij} - \\frac{k_{i}^{in} k_{j}^{out}}{m},$$\n\nwhere $$A_{ij}$$ is an element in the adjacency matrix $$\\mathbf{A}$$, $$k_{i}^{in}$$ and $$k_{j}^{out}$$ are the in- and out-degrees of the vertices, and $$m$$ is the total sum of edges in the network. In practice, this can be extended to directed networks by considering the matrix $$\\mathbf{B} + \\mathbf{B}^T$$ (for an explanation of why this is the case, see Leicht & Newman).\n\nIt is relatively straight-forward to extend the above modularity optimisation algorithm to the case of a weighted network by computing the modularity matrix using the in- and out-strength(see link to blog post above) of the vertices instead of the degree. This is similar to the concept presented in Newman (2004), and indeed the theory of the modularity algorithm holds for this more general case (note that an unweighted network can simply be represented as a weighted network where the edge weights are all set to 1). As such, our new modularity matrix can be computed as\n\n$$B_{ij} = A_{ij} - \\frac{s_{i}^{in} s_{j}^{out}}{m},$$\n\nwhere $$m = \\sum_{i}s_{i}^{in} = \\sum_{j} s_j^{out}$$, and $$s$$ represents the vertex strength. As such, using the above new definition of $$\\mathbf{B}$$, the modularity of a directed, weighted network is computed as\n\n$$Q = \\frac{1}{4m}s^{T}(\\mathbf{B}-\\mathbf{B}^{T})s.$$\n\nMy current research uses a modified modularity optimisation algorithm to compute functional regions for Ireland based on a range of socio-economic variables. The goal is to provide a consistent framework for computing functional regions which are comparable across different countries and/or regions.\n\nC\n\nReferences\n\nLeicht, E. A. & Newman, M. E. J.(2008). Community structure in directed networks. Physical Review Letters, 100(11), 118703.\n\nNewman, M. E. J.(2004). Analysis of weighted networks. Physical Review E, 70(5), 056131.", null, "" ]
[ null, "https://carsonfarmer.com/uploads/map-thumb.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.898812,"math_prob":0.9972443,"size":2743,"snap":"2022-05-2022-21","text_gpt3_token_len":632,"char_repetition_ratio":0.120847024,"word_repetition_ratio":0.0,"special_character_ratio":0.23769595,"punctuation_ratio":0.11576846,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997187,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-19T19:55:30Z\",\"WARC-Record-ID\":\"<urn:uuid:5de9866a-2449-4d50-96ad-a39e779c6a73>\",\"Content-Length\":\"24013\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca0d979e-b4b0-498f-90a2-e0c6fe786732>\",\"WARC-Concurrent-To\":\"<urn:uuid:669363d6-4d2c-4d15-b364-4c53da1a8a07>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://carsonfarmer.com/2009/10/community-structure-in-directed-weighted-networks/\",\"WARC-Payload-Digest\":\"sha1:MCAD2QXVVTUAVQKP3T2S4S4ZCVC2G265\",\"WARC-Block-Digest\":\"sha1:XPP7J4KBSN6P6N7YWOH5BHYHHV7M7LWZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320301488.71_warc_CC-MAIN-20220119185232-20220119215232-00306.warc.gz\"}"}
https://extraextravagant.com/skin-problem/question-how-many-moles-of-molecules-are-there-in-16-gram-of-oxygen.html
[ "# Question: How many moles of molecules are there in 16 gram of oxygen?\n\nContents\n\n## How many molecules are in 16g of CO?\n\n16g of CO contains x 6.023 x 1023 = 0.57 x 6.023 x 1023 molecules.\n\n## How many moles are in a gram of oxygen?\n\nThe mass of oxygen equal to one mole of oxygen is 15.998 grams and the mass of one mole of hydrogen is 1.008 g.\n\n## What is the volume of 16 grams of oxygen?\n\n11.20 lts is the amount of volume occupied by 16 gm of Oxygen at STP condition.\n\n## What is the volume of oxygen occupied by 2 moles?\n\nDavid G. Assuming that the gas is at standard temperature and pressure (STP), one mole of any gas occupies 22.4 L . This means the number of moles of O2 is 222.4=0.089 mol .\n\n## How many grams is 5 liters?\n\nHow Many Grams are in a Liter?\n\nVolume in Liters: Weight in Grams of:\nWater Cooking Oil\n5 l 5,000 g 4,400 g\n6 l 6,000 g 5,280 g\n7 l 7,000 g 6,160 g\n\n## How many grams are in 88.1 moles?\n\nHow many grams are in 88.1 moles of magnesium? 88.1 molx 24.3059 = 21409.\n\n## What is the formula for moles to grams?\n\nIn order to convert the moles of a substance to grams, you will need to multiply the mole value of the substance by its molar mass.\n\n## How many molecules are there in CO2?\n\nUsing the formula number of moles = Mass/Mr 44/44=1 mole of CO2 present. (Mr of carbon dioxide is (2*16)+12=44 Now times by Abogadros constant: 1* 6.022*10^23=6.022*10^23 molecules of CO2 are present. Understanding the last step is critical.\n\n## How many molecules are there in 6cl2?\n\nmole is a unit of measurement, which equals 6.02 * 10^23. If you have a mole of cl2, then you have 6.02 * 10^23 molecules of cl2. If you want to calculate for atoms however, you can see that each molecule has two atoms of cl. Therefore you have 2* 6.02 * 10^23 chlorine atoms.\n\n## How many atoms are there in oxygen?\n\nOxygen is found naturally as a molecule. Two oxygen atoms strongly bind together with a covalent double bond to form dioxygen or O2. Oxygen is normally found as a molecule. It is called dioxygen." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9174002,"math_prob":0.9840324,"size":1895,"snap":"2021-43-2021-49","text_gpt3_token_len":572,"char_repetition_ratio":0.15282919,"word_repetition_ratio":0.027472528,"special_character_ratio":0.33298153,"punctuation_ratio":0.13636364,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9987964,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-28T20:04:44Z\",\"WARC-Record-ID\":\"<urn:uuid:fd62e648-0aa4-454b-8fc7-d6c2532991d0>\",\"Content-Length\":\"67793\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:998dd954-bbec-43b8-b360-d866adfae224>\",\"WARC-Concurrent-To\":\"<urn:uuid:fce8d684-5e3a-4596-8c45-ff1c52a8f073>\",\"WARC-IP-Address\":\"207.244.241.49\",\"WARC-Target-URI\":\"https://extraextravagant.com/skin-problem/question-how-many-moles-of-molecules-are-there-in-16-gram-of-oxygen.html\",\"WARC-Payload-Digest\":\"sha1:5WVN4GT2IF5BNZM347MFUXISOOCFS5I4\",\"WARC-Block-Digest\":\"sha1:V7XJPL4QI5UOEUTAXLPHBF5U7I4XKZHB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358591.95_warc_CC-MAIN-20211128194436-20211128224436-00287.warc.gz\"}"}
https://unix.stackexchange.com/questions/1527/bash-eval-array-variable-name/1528
[ "# Bash eval array variable name\n\nHere is my bash case:\n\nFirst case, this is what I want to do \"aliasing\" `var` with `myvarA`:\n\n``````myvarA=\"variableA\"\nvarname=\"A\"\neval varAlias=\\\\$\"myvar\"\\$varname\necho \\$varAlias\n``````\n\nSecond case for array variable and looping its members, which is trivial:\n\n``````myvarA=( \"variableA1\" \"variableA2\" )\nfor varItem in \\${myvarA[@]}\ndo\necho \\$varItem\ndone\n``````\n\nNow somehow I need to use \"aliasing\" technique like example 1, but this time for array variable:\n\n``````eval varAlias=\\\\$\"myvar\"\\$varname\nfor varItem in \\${varAlias[@]}\ndo\necho \\$varItem\ndone\n``````\n\nBut for last case, only first member of `myvarA` is printed, which is `eval` evaluate to value of the variable, how should I do var array variable so `eval` is evaluate to the name of array variable not the value of the variable.\n\n• I think what I meant by \"aliasing\" is should be \"indirection\" in bash\n– uray\nSep 2, 2010 at 22:17\n\nThe simplest form for parameter expansion is: `\\${parameter}`.\nUsing braces in confused case is better way.\n\nConsidering of possibilities of being included spaces in array of \"myvarA\", I think this would be the answer.\n\n``````#!/bin/bash -x\nmyvarA=( \"variable A1\" \"variable A2\" )\nvarname=\"A\"\n\neval varAlias=( '\"\\${myvar'\\${varname}'[@]}\"' )\neval varAlias=( \\\"\\\\${myvar\\${varname}[@]}\\\" ) # both works\nfor varItem in \"\\${varAlias[@]}\" # double quote and `@' is needed\ndo\necho \"\\$varItem\"\ndone\n``````\n\nIn your answer, `varAlias` isn't an array, so you can do `for varItem in \\$varAlias` which is just doing word splitting. Because of that if any of your original array elements include spaces, they will be treated as separate words.\n\nYou can do scalar indirection like this: `a=42; b=a; echo \\${!b}`.\n\nYou can do indirection to an array of scalars like this:\n\n``````\\$ j=42; k=55; m=99\n\\$ a=(j k m)\n\\$ echo \\${!a}\n55\n``````\n\nUnfortunately, there's no satisfactory way to do the type of array indirection you're trying to do. You should be able to rework your code so that no indirection is needed.\n\nI solved it; last example should be like this:\n\n``````eval varAlias=\\\\${\"myvar\"\\$varname[@]}\nfor varItem in \\${varAlias[@]}\ndo\necho \\$varItem\ndone\n``````\n\nIt looks like you're trying to indirectly reference an array from the index of another.\n\nYou might like to do something like:\n\n``````arr_one=arr_two[@]\n``````\n\nFrom there you can do:\n\n``````cmd \"\\${!arr_one}\"\n``````\n\n...to indirectly reference a full expansion of `\"\\${arr_two[@]}\"`. As near as I can tell, there is no direct method of indexing further. For example `\"\\${!arr_one}\"` doesn't work as I'd hope (at least, not in `bash`) but you can do `\"\\${!arr_one1:1}\"` and similar to slice the expansion as you could any other array. The end result is something like the 2-dimensional array structure that some other, more capable shells offer.\n\nJust to note that that above accepted answer is not complete. 
To do an actual array assignment you are missing parentheses around your original array; otherwise you will still get a new array of size 1.\n\nWhat I mean is that the following:\n\n``````eval varAlias=\\\\${\"myvar\"\\$varname[@]}\n``````\n\nshould be changed to:\n\n``````eval varAlias=(\\\\${\"myvar\"\\$varname[@]})\n``````\n\nYou can validate this by taking both cases and running:\n\n``````echo \\${#varAlias[@]}\n``````\n\nIn the original case you will get 1; with parentheses you will get the actual number of elements in the original array. In both cases we basically create a new array.\n\nThis is how you would create a dynamically named variable (bash version < 4.3).\n\n``````# Dynamically named array\nmy_variable_name=\"dyn_arr_names\"\neval \\$my_variable_name=\\(\\)\n\n# Adding by index to the array eg. dyn_arr_names=\"bob\"\neval \\$my_variable_name=\"bob\"\n\n# Adding by pushing onto the array eg. dyn_arr_names+=(robert)\neval \\$my_variable_name+=\\(robert\\)\n\n# Print value stored at index indirect\necho \\${!my_variable_name}\n\n# Print value stored at index\neval echo \\\\${\\$my_variable_name}\n\n# Get item count\neval echo \\\\${#\\$my_variable_name[@]}\n``````\n\nBelow is a group of functions that can be used to manage dynamically named arrays (bash version < 4.3).\n\n``````# Dynamically create an array by name\nfunction arr() {\n[[ ! \"\\$1\" =~ ^[a-zA-Z_]+[a-zA-Z0-9_]*\\$ ]] && { echo \"Invalid bash variable\" 1>&2 ; return 1 ; }\n# The following line can be replaced with 'declare -ag \\$1=\\(\\)'\n# Note: For some reason using 'declare -ag \\$1' without the parentheses will make 'declare -p' fail\neval \\$1=\\(\\)\n}\n\n# Insert by incrementing the index eg. array+=(data)\nfunction arr_insert() {\n[[ ! \"\\$1\" =~ ^[a-zA-Z_]+[a-zA-Z0-9_]*\\$ ]] && { echo \"Invalid bash variable\" 1>&2 ; return 1 ; }\ndeclare -p \"\\$1\" > /dev/null 2>&1\n[[ \\$? -eq 1 ]] && { echo \"Bash variable [\\${1}] doesn't exist\" 1>&2 ; return 1 ; }\neval \\$1[\\\\$\\(\\(\\\\${#\\${1}[@]}\\)\\)]=\\\\$2\n}\n\n# Update an index by position\nfunction arr_set() {\n[[ ! \"\\$1\" =~ ^[a-zA-Z_]+[a-zA-Z0-9_]*\\$ ]] && { echo \"Invalid bash variable\" 1>&2 ; return 1 ; }\ndeclare -p \"\\$1\" > /dev/null 2>&1\n[[ \\$? -eq 1 ]] && { echo \"Bash variable [\\${1}] doesn't exist\" 1>&2 ; return 1 ; }\neval \\${1}[\\${2}]=\\\\${3}\n}\n\n# Get the array content \\${array[@]}\nfunction arr_get() {\n[[ ! \"\\$1\" =~ ^[a-zA-Z_]+[a-zA-Z0-9_]*\\$ ]] && { echo \"Invalid bash variable\" 1>&2 ; return 1 ; }\ndeclare -p \"\\$1\" > /dev/null 2>&1\n[[ \\$? -eq 1 ]] && { echo \"Bash variable [\\${1}] doesn't exist\" 1>&2 ; return 1 ; }\neval echo \\\\${\\${1}[@]}\n}\n\n# Get the value stored at a specific index eg. \\${array[0]}\nfunction arr_at() {\n[[ ! \"\\$1\" =~ ^[a-zA-Z_]+[a-zA-Z0-9_]*\\$ ]] && { echo \"Invalid bash variable\" 1>&2 ; return 1 ; }\ndeclare -p \"\\$1\" > /dev/null 2>&1\n[[ \\$? -eq 1 ]] && { echo \"Bash variable [\\${1}] doesn't exist\" 1>&2 ; return 1 ; }\n[[ ! \"\\$2\" =~ ^(0|[-]?[1-9]+[0-9]*)\\$ ]] && { echo \"Array index must be a number\" 1>&2 ; return 1 ; }\nlocal v=\\$1\nlocal i=\\$2\nlocal max=\\$(eval echo \\\\${\\#\\${1}[@]})\n# Array has items and index is in range\nif [[ \\$max -gt 0 && \\$i -ge 0 && \\$i -lt \\$max ]]\nthen\neval echo \\\\${\\$v[\\$i]}\nfi\n}\n\n# Get the item count of the array eg. \\${#array[@]}\nfunction arr_count() {\n[[ ! 
\"\\$1\" =~ ^[a-zA-Z_]+[a-zA-Z0-9_]*\\$ ]] && { echo \"Invalid bash variable \" 1>&2 ; return 1 ; }\ndeclare -p \"\\$1\" > /dev/null 2>&1\n[[ \\$? -eq 1 ]] && { echo \"Bash variable [\\${1}] doesn't exist\" 1>&2 ; return 1 ; }\nlocal v=\\${1}\neval echo \\\\${\\#\\${1}[@]}\n}\n\narray_names=(bob jane dick)\n\nfor name in \"\\${array_names[@]}\"\ndo\narr dyn_\\$name\ndone\n\necho \"Arrays Created\"\ndeclare -a | grep \"a dyn_\"\n\n# Insert three items per array\nfor name in \"\\${array_names[@]}\"\ndo\necho \"Inserting dyn_\\$name abc\"\narr_insert dyn_\\$name \"abc\"\necho \"Inserting dyn_\\$name def\"\narr_insert dyn_\\$name \"def\"\necho \"Inserting dyn_\\$name ghi\"\narr_insert dyn_\\$name \"ghi\"\ndone\n\nfor name in \"\\${array_names[@]}\"\ndo\necho \"Setting dyn_\\$name=first\"\narr_set dyn_\\$name 0 \"first\"\necho \"Setting dyn_\\$name=third\"\narr_set dyn_\\$name 2 \"third\"\ndone\n\ndeclare -a | grep \"a dyn_\"\n\nfor name in \"\\${array_names[@]}\"\ndo\narr_get dyn_\\$name\ndone\n\nfor name in \"\\${array_names[@]}\"\ndo\necho \"Dumping dyn_\\$name by index\"\n# Print by index\nfor (( i=0 ; i < \\$(arr_count dyn_\\$name) ; i++ ))\ndo\necho \"dyn_\\$name[\\$i]: \\$(arr_at dyn_\\$name \\$i)\"\n\ndone\ndone\n\nfor name in \"\\${array_names[@]}\"\ndo\necho \"Dumping dyn_\\$name\"\nfor n in \\$(arr_get dyn_\\$name)\ndo\necho \\$n\ndone\ndone\n``````\n\nBelow is a group of functions that can be used to manage dynamically named arrays (bash version >= 4.3).\n\n``````# Dynamically create an array by name\nfunction arr() {\n[[ ! \"\\$1\" =~ ^[a-zA-Z_]+[a-zA-Z0-9_]*\\$ ]] && { echo \"Invalid bash variable\" 1>&2 ; return 1 ; }\ndeclare -g -a \\$1=\\(\\)\n}\n\n# Insert incrementing by incrementing index eg. array+=(data)\nfunction arr_insert() {\n[[ ! \"\\$1\" =~ ^[a-zA-Z_]+[a-zA-Z0-9_]*\\$ ]] && { echo \"Invalid bash variable\" 1>&2 ; return 1 ; }\ndeclare -p \"\\$1\" > /dev/null 2>&1\n[[ \\$? -eq 1 ]] && { echo \"Bash variable [\\${1}] doesn't exist\" 1>&2 ; return 1 ; }\ndeclare -n r=\\$1\nr[\\${#r[@]}]=\\$2\n}\n\n# Update an index by position\nfunction arr_set() {\n[[ ! \"\\$1\" =~ ^[a-zA-Z_]+[a-zA-Z0-9_]*\\$ ]] && { echo \"Invalid bash variable\" 1>&2 ; return 1 ; }\ndeclare -p \"\\$1\" > /dev/null 2>&1\n[[ \\$? -eq 1 ]] && { echo \"Bash variable [\\${1}] doesn't exist\" 1>&2 ; return 1 ; }\ndeclare -n r=\\$1\nr[\\$2]=\\$3\n}\n\n# Get the array content \\${array[@]}\nfunction arr_get() {\n[[ ! \"\\$1\" =~ ^[a-zA-Z_]+[a-zA-Z0-9_]*\\$ ]] && { echo \"Invalid bash variable\" 1>&2 ; return 1 ; }\ndeclare -p \"\\$1\" > /dev/null 2>&1\n[[ \\$? -eq 1 ]] && { echo \"Bash variable [\\${1}] doesn't exist\" 1>&2 ; return 1 ; }\ndeclare -n r=\\$1\necho \\${r[@]}\n}\n\n# Get the value stored at a specific index eg. \\${array}\nfunction arr_at() {\n[[ ! \"\\$1\" =~ ^[a-zA-Z_]+[a-zA-Z0-9_]*\\$ ]] && { echo \"Invalid bash variable\" 1>&2 ; return 1 ; }\ndeclare -p \"\\$1\" > /dev/null 2>&1\n[[ \\$? -eq 1 ]] && { echo \"Bash variable [\\${1}] doesn't exist\" 1>&2 ; return 1 ; }\n[[ ! \"\\$2\" =~ ^(0|[-]?[1-9]+[0-9]*)\\$ ]] && { echo \"Array index must be a number\" 1>&2 ; return 1 ; }\ndeclare -n r=\\$1\nlocal max=\\${#r[@]}\n# Array has items and index is in range\nif [[ \\$max -gt 0 && \\$i -ge 0 && \\$i -lt \\$max ]]\nthen\necho \\${r[\\$2]}\nfi\n}\n\n# Get the value stored at a specific index eg. \\${array}\nfunction arr_count() {\n[[ ! \"\\$1\" =~ ^[a-zA-Z_]+[a-zA-Z0-9_]*\\$ ]] && { echo \"Invalid bash variable \" 1>&2 ; return 1 ; }\ndeclare -p \"\\$1\" > /dev/null 2>&1\n[[ \\$? 
-eq 1 ]] && { echo \"Bash variable [\\${1}] doesn't exist\" 1>&2 ; return 1 ; }\ndeclare -n r=\\$1\necho \\${#r[@]}\n}\n\narray_names=(bob jane dick)\n\nfor name in \"\\${array_names[@]}\"\ndo\narr dyn_\\$name\ndone\n\necho \"Arrays Created\"\ndeclare -a | grep \"a dyn_\"\n\n# Insert three items per array\nfor name in \"\\${array_names[@]}\"\ndo\necho \"Inserting dyn_\\$name abc\"\narr_insert dyn_\\$name \"abc\"\necho \"Inserting dyn_\\$name def\"\narr_insert dyn_\\$name \"def\"\necho \"Inserting dyn_\\$name ghi\"\narr_insert dyn_\\$name \"ghi\"\ndone\n\nfor name in \"\\${array_names[@]}\"\ndo\necho \"Setting dyn_\\$name=first\"\narr_set dyn_\\$name 0 \"first\"\necho \"Setting dyn_\\$name=third\"\narr_set dyn_\\$name 2 \"third\"\ndone\n\ndeclare -a | grep 'a dyn_'\n\nfor name in \"\\${array_names[@]}\"\ndo\narr_get dyn_\\$name\ndone\n\nfor name in \"\\${array_names[@]}\"\ndo\necho \"Dumping dyn_\\$name by index\"\n# Print by index\nfor (( i=0 ; i < \\$(arr_count dyn_\\$name) ; i++ ))\ndo\necho \"dyn_\\$name[\\$i]: \\$(arr_at dyn_\\$name \\$i)\"\n\ndone\ndone\n\nfor name in \"\\${array_names[@]}\"\ndo\necho \"Dumping dyn_\\$name\"\nfor n in \\$(arr_get dyn_\\$name)\ndo\necho \\$n\ndone\ndone\n``````\n\nFor more details on these examples visit Getting Bashed by Dynamic Arrays by Ludvik Jerabek" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7845458,"math_prob":0.8224045,"size":731,"snap":"2023-40-2023-50","text_gpt3_token_len":207,"char_repetition_ratio":0.15680881,"word_repetition_ratio":0.0,"special_character_ratio":0.23939809,"punctuation_ratio":0.08208955,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95497835,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T19:57:32Z\",\"WARC-Record-ID\":\"<urn:uuid:b445792e-aa4c-4bb1-a811-4f05ca98b414>\",\"Content-Length\":\"205440\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1bb8f436-ce48-462e-8b8d-5af1e6c4d59f>\",\"WARC-Concurrent-To\":\"<urn:uuid:b95d7138-f2e6-4392-b372-a4f1b14e7e38>\",\"WARC-IP-Address\":\"172.64.144.30\",\"WARC-Target-URI\":\"https://unix.stackexchange.com/questions/1527/bash-eval-array-variable-name/1528\",\"WARC-Payload-Digest\":\"sha1:BLKJB552MKAFOEC4JM3M5IEZMUGIYKEU\",\"WARC-Block-Digest\":\"sha1:EVHUCEKJTYACLJYPH4V6DCTBPMFGC2JR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100448.65_warc_CC-MAIN-20231202172159-20231202202159-00358.warc.gz\"}"}
https://strafverteidigungdinslaken.de/page/ceba17-the-logic-of-probability
[ "# the logic of probability\n\nWe would like to thank Johan van Benthem, Joe Halpern, Jan Heylen, then there cannot be any uncertainty about the conclusion either. [Please contact the author with suggestions. conclusion of the valid argument $$A$$, but also as the conclusion of modal probability logics discussed in notion of validity, which we will call Hailperin-probabilistic a modal operator to the language as is done in Fagin and Halpern notions of logic in the quantitative terms of probability theory, or of this encyclopedia. iteration can be achieved using possible worlds) was given and the essentialness 1), then Theorem 4 yields the same upper bound as [!\\psi]\\phi\\) if and only if $$M',w\\models \\phi$$, where $$M'$$ is the probability”, in, van Benthem, J., Gerbrandy, J., and Kooi, B., 2009, “Dynamic (formally: $$P(\\phi)\\geq P(\\psi)$$). Update with Probabilities,”, Cross, C., 1993, “From Worlds to Probabilities: A Another possibility is to interpret a sentence’s probability as = 1\\). Logic,” in the, Dempster, A., 1968, “A Generalization of Bayesian If $$\\phi$$ is a formula and $$q$$ is a rational number in the Herzig and Longin (2003) and Arló Costa (2005) provide weaker Programming Approach to Reasoning about Probabilities,”, Keisler, H. J., 1985, “Probability Quantifiers,” in, Kooi B. P., 2003, “Probabilistic Dynamic Epistemic $$y$$ to $$1/2$$, $$x$$ to $$0$$, and $$z$$ to $$0$$. [ f (t_1,\\ldots,t_n)]\\! There is some discussion about the This entry discusses the major proposals to combine logic remainder of this dynamics subsection that every relevant set この本のはじめの方で、多くの本では「これらの議論を形式化すれば、~が出てきて、第二不完全先生定理が帰結する」というようになっている部分を、丁寧に追っている。可証性述語と様相論理の関係というのがこの本の一番のテーマであるから、こうした一部だけを取り上げてレビューするのは適当ではないかもしれないが、この点だけでも貴重な一冊なので挙げてみた。田中一之さんの本でも可導性性条件の証明は多少書いてあるが、やはりある程度丁寧に追った本を探していたので、この本はぴったりだった。特に、不完全性定理の議論を初めから形式化する方法が載っている本は少ない(と思う)のでかなり貴重。このやり方だと、第一不完全性定理を証明するのが通常より大変になるが、その代わり第二不完全性定理が自然に出てくる。ただし、Boolosは(論文でもその傾向があるように思われるが)本の書き方として、それほど分りやすくない点があり、それで四つ星にした。. In Ognjanović and Rašković (1999), a took into account the premise $$s$$, which has a rather high In order to restrict Theory,”. probabilistic operators, but rather deal with a “Some probability logics with new types of probability For more on inductive logic, the reader can consult Jaynes (2003), obviously, in concrete applications, certain interpretations of $$9/11$$ and $$5/11$$). in, Scott, D., 1964, “Measurement Structures and Linear Nilsson, N. J., 1986, \"Probabilistic logic,\", Jøsang, A., 2001, \"A logic for uncertain probabilities,\", Jøsang, A. and McAnally, D., 2004, \"Multiplication and Comultiplication of Beliefs,\". Bayesian epistemology, For example, when expressed in terms of We smallest essential premise set that contains $$\\gamma$$. 2011) for a recent survey. variety of approaches in this booming area, but interested readers can compatible with all of the common interpretations of probability, but expresses that more than 75% of all birds fly. discussed in Halpern (1990): The probability that Tweety flies is greater than $$0.9$$. Papadimitriou 1990), and thus finding these functions quickly becomes (rather than being defined as $$P(\\phi\\wedge\\psi)/P(\\psi)$$, as is We will discuss three extensions h-valid, written $$\\Gamma\\models_h\\phi$$, if and only if $$P(\\phi)=1.$$, Finite additivity. probabilistic operators. 
Finally, languages with first-order probabilistic operators will be value is not known, but it is known to have a lower bound logic’ are used by different researchers in different, Renne (2015) further extend the qualitative approach, by allowing the proof system and proof of strong completeness for propositional her own strategy; for instance at $$x$$, player $$a$$ is certain that The need to deal with a broad variety of contexts and issues has led to many different proposals. The first-order sound and strongly complete proof system is given for propositional and assignment function $$g$$, we map each term $$t$$ to domain Recall Adams-probabilistic validity has an alternative, equivalent A basic modal probability logic adds to propositional logic formulas countable additivity condition for probability measures. (2015)), it is not the case that any class of models definable by a The importance of higher-order probabilities is clear the next two subsections we will consider more interesting cases, when There exist functions $$L_{\\Gamma,\\phi}: \\(\\mathcal{P}_{b,x}$$ and $$\\mathcal{P}_{b,z}$$ map $$x$$ to $$1/4$$, on a machine. propositions from a set $$\\Phi$$ to each world. q\\). this entry. Propositional probability logics are extensions of propositional logic ‘extensional’; for example, $$P(\\phi\\wedge\\psi)$$ cannot conditional), and therefore falls outside the scope of this P(\\psi)\\) for all formulas $$\\phi,\\psi\\in\\mathcal{L}$$ that are See Chapter 3 of Ognjanović et al. five are black and four are white. Halpern, J. Y., 1990, “An analysis of first-order logics of Section 4 probability logic is modeled. completely axiomatize the behavior of $$\\geq$$ without having to use 
in the object language, such as those involving sums and products of This language is interpreted on very simple first-order models, which weakly complete. P_L(\\gamma_i),\\) $$P_U(\\gamma_i)\\leq b_i$$ for $$1\\leq i\\leq n$$, and an extension requires that the language contains two separate classes truth preservation: in a valid argument, the truth of the Probability Function (P). syntactical objects, namely terms and formulas. Consider the ), Demey, L. and Sack, J., 2015, “Epistemic Probabilistic a reasonably high defeasible reasoning, However, this system Let us assume for the Propositional Modal Logics”. Given probabilities for events A and B, we can calculate the probability of “A and B”, “A or B”, “A given B”, and so on. This probability is 5/9 In this subsection, we consider a first-order probability logic with a within probabilities, that is, it can for example reason about the Alternatively, one can add various kinds of probabilistic they are used to describe the behavior of a transition system, their for all $$\\epsilon>0$$ there exists a $$\\delta>0$$ such that for picking a white marble from the vase. absolutely certain truths and inferences, whereas probability theory exact relation between inductive logic and probability logic, which is The formula $$P(\\varphi)\\ge q$$ is Consider a valid argument $$D$$ is a finite nonempty set of objects, the interpretation Every free This approach is taken by Bacchus (1990) and Halpern (1990), Minimizers in Probability Kinematics,”, –––, 1981b, “Probabilistic Semantics words, they do not study truth preservation, but rather possible-world semantics (which we abbreviate FOPL). Probably,”, Ilić-Stepić, Ognjanović, Z., Ikodinović, N., (Hansen and Jaumard 2000; chapter 2 of Haenni et al. The following three subsections \\sum_i\\mu(A_i)\\) whenever $$A_i\\cap A_j = \\emptyset$$ for each to every variable. (1990). Theorem 2. Bayesian epistemology, For example, when expressed in terms of We smallest essential premise set that contains $$\\gamma$$. 2011) for a recent survey. variety of approaches in this booming area, but interested readers can compatible with all of the common interpretations of probability, but expresses that more than 75% of all birds fly. discussed in Halpern (1990): The probability that Tweety flies is greater than $$0.9$$. Papadimitriou 1990), and thus finding these functions quickly becomes (rather than being defined as $$P(\\phi\\wedge\\psi)/P(\\psi)$$, as is We will discuss three extensions h-valid, written $$\\Gamma\\models_h\\phi$$, if and only if $$P(\\phi)=1.$$, Finite additivity. probabilistic operators. $$\\times$$ 4/9 = 20/81, but we cannot express this in the language" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81653965,"math_prob":0.99101835,"size":7725,"snap":"2022-05-2022-21","text_gpt3_token_len":2246,"char_repetition_ratio":0.115529075,"word_repetition_ratio":0.0,"special_character_ratio":0.24711974,"punctuation_ratio":0.15983027,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994136,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-20T10:38:13Z\",\"WARC-Record-ID\":\"<urn:uuid:dafb8612-0e28-4ab0-ae72-111147b25432>\",\"Content-Length\":\"36429\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e9a0a77b-c98c-49ec-8a00-0d0b66410e74>\",\"WARC-Concurrent-To\":\"<urn:uuid:af53236b-f89b-4851-a261-4f733ee6482e>\",\"WARC-IP-Address\":\"194.55.14.56\",\"WARC-Target-URI\":\"https://strafverteidigungdinslaken.de/page/ceba17-the-logic-of-probability\",\"WARC-Payload-Digest\":\"sha1:JSP6ZN4H2OEISOW4A7UVJXAM3S56VYLT\",\"WARC-Block-Digest\":\"sha1:XTI2B4ECN4RMRLB2KOATXAINC2JAISTD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662531779.10_warc_CC-MAIN-20220520093441-20220520123441-00350.warc.gz\"}"}
https://www.research.ed.ac.uk/portal/en/publications/amplitude-analysis-of-the-decay-overlineb0-to-ks0-pi-pi-and-first-observation-of-the-cp-asymmetry-in-overlineb0-to-k892-pi(e077b324-cfbc-4796-afc8-8d81cfb9faef).html
[ "## Amplitude analysis of the decay $\\overline{B}^0 \\to K_{S}^0 \\pi^+ \\pi^-$ and first observation of the CP asymmetry in $\\overline{B}^0 \\to K^{*}(892)^- \\pi^+$\n\nResearch output: Contribution to journalArticle\n\n### Related Edinburgh Organisations\n\nOriginal language English Aaij:2017ngy 261801 Physical Review Letters 120 26 https://doi.org/10.1103/PhysRevLett.120.261801 Published - 27 Jun 2018\n\n### Abstract\n\nThe time-integrated Dalitz plot of the three-body hadronic charmless decay ${{\\overline{B}}^0 \\to K_{\\mathrm{\\scriptscriptstyle S}}^0 \\pi^+ \\pi^-}$ is studied using a $pp$ collision data sample recorded with the LHCb detector, corresponding to an integrated luminosity of $3.0\\;\\mathrm{fb}^{-1}$. The decay amplitude is described with an isobar model. Relative contributions of the isobar amplitudes to the ${\\overline{B}^0 \\to K_{\\mathrm{\\scriptscriptstyle S}}^0 \\pi^+ \\pi^-}$ decay branching fraction and CP asymmetries of the flavour-specific amplitudes are measured. The CP asymmetry between the conjugate ${\\overline{B}^0 \\to K^{*}(892)^{-}\\pi^+}$ and ${\\overline{B}^0 \\to K^{*}(892)^{+}\\pi^-}$ decay rates is determined to be $-0.308 \\pm 0.062$.\n\nID: 90517102" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7103755,"math_prob":0.9644473,"size":764,"snap":"2019-35-2019-39","text_gpt3_token_len":238,"char_repetition_ratio":0.125,"word_repetition_ratio":0.021978023,"special_character_ratio":0.32198954,"punctuation_ratio":0.07575758,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9933553,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-23T12:59:34Z\",\"WARC-Record-ID\":\"<urn:uuid:84d886fd-7757-4c30-a715-6453b8c4171a>\",\"Content-Length\":\"18898\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a00fe6d4-ba7d-4ffa-9d79-95caad43e600>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d8298bb-ea9f-4f58-b501-6252ea145f56>\",\"WARC-IP-Address\":\"129.215.228.22\",\"WARC-Target-URI\":\"https://www.research.ed.ac.uk/portal/en/publications/amplitude-analysis-of-the-decay-overlineb0-to-ks0-pi-pi-and-first-observation-of-the-cp-asymmetry-in-overlineb0-to-k892-pi(e077b324-cfbc-4796-afc8-8d81cfb9faef).html\",\"WARC-Payload-Digest\":\"sha1:XHBDFYOY645KMF5OJ664ELFSNFL3RVH5\",\"WARC-Block-Digest\":\"sha1:VPUKXLLU5QZOL4MORFVJ42QSUBZMV3AX\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514576965.71_warc_CC-MAIN-20190923125729-20190923151729-00513.warc.gz\"}"}
https://dochero.tips/counting-solutions-to-diophantine-equations.html
[ "# Counting solutions to Diophantine equations\n\nCounting solutions to Diophantine equations OSCAR MARMON ... her love, thoughtfulness and generosity and for inspiring me, supporting me and believing...\n\nThesis for the degree of Doctor of Philosophy\n\nCounting solutions to Diophantine equations Oscar Marmon\n\nDepartment of Mathematical Sciences Chalmers University of Technology and University of Gothenburg Gothenburg, Sweden 2010\n\nCounting solutions to Diophantine equations OSCAR MARMON ISBN 978-91-7385-402-3\n\nc OSCAR MARMON 2010 Doktorsavhandlingar vid Chalmers tekniska högskola Ny serie nr 3083 ISSN 0346-718X Department of Mathematical Sciences Chalmers University of Technology and University of Gothenburg SE-412 96 Gothenburg Sweden Telephone: +46 (0)31-772 1000\n\nPrinted in Gothenburg, Sweden 2010 ii\n\nCounting solutions to Diophantine equations OSCAR MARMON Department of Mathematical Sciences Chalmers University of Technology and University of Gothenburg\n\nAbstract This thesis presents various results concerning the density of rational and integral points on algebraic varieties. These results are proven with methods from analytic number theory as well as algebraic geometry. Using exponential sums in several variables over finite fields, we prove upper bounds for the number of integral points of bounded height on an affine variety. More precisely, our method is a generalization of a technique due to Heath-Brown — a multi-dimensional version of van der Corput’s AB-process. It yields new estimates for complete intersections of r hypersurfaces of degree at least three in An , as well as for hypersurfaces in An of degree at least four. We also study the so called determinant method, introduced by Bombieri and Pila to count integral points on curves. We show how their approach may be extended to higher-dimensional varieties to yield an alternative proof of Heath-Brown’s Theorem 14, avoiding p-adic considerations. Moreover, we use the determinant method to study the number of representations of integers by diagonal forms in four variables. Heath-Brown recently developed a new variant of the determinant method, adapted to counting points near algebraic varieties. Extending his ideas, we prove new upper bounds for the number of representations of an integer by a diagonal form in four variables of degree k ≥ 8. Furthermore, we use a refined version of the determinant method for affine surfaces, due to Salberger, to derive new estimates for the number of representations of a positive integer as a sum of four k-th powers of positive integers, improving upon estimates by Wisdom. Keywords. Integral points, rational points, counting function, exponential sums, Weyl differencing, van der Corput’s method, determinant method, sum of k-th powers.\n\niii\n\niv\n\nPapers in this thesis Paper I Oscar Marmon. The density of integral points on complete intersections. Q. J. Math. 59 (2008), 29–53. With an appendix by Per Salberger.\n\nPaper II Oscar Marmon. The density of integral points on hypersurfaces of degree at least four. Acta Arith. 141 (2010), 211–240.\n\nPaper III Oscar Marmon. A generalization of the Bombieri-Pila determinant method. To appear in Proceedings of the HIM Trimester Program on Diophantine Equations, Bonn 2009.\n\nPaper IV Oscar Marmon. Sums and differences of four k-th powers. 
Preprint.\n\nAcknowledgements I wish to thank my supervisor Per Salberger, who found a good starting point for my research, and who has provided invaluable help and insightful guidance during these five years. I am also grateful to Pär Kurlberg, my co-supervisor, for introducing me to analytic number theory. I am grateful to the Department of Mathematical Sciences at Chalmers University of Technology and University of Gothenburg — it has been a privilege to conduct doctoral studies here. I also want to thank my (former and present) fellow doctoral students, whose company I have enjoyed during this time: Leif, Kenny, Elizabeth, Jonas, Micke P, Blojne, David, Fredrik, Karin, Jacob, Magnus, Ragnar, Ida and many others. In 2009, I visited the HIM in Bonn, during the trimester program Diophantine Equations. I am deeply grateful to the organizers for providing that opportunity. Thanks also to all friends I have met at conferences and summer schools. I have benefited greatly from discussions with Tim Browning and Jonathan Pila. I also wish to thank Roger Heath-Brown for showing interest in my research, as well as providing inspiration for a major part of it. I thank my family for always supporting me. Especially, I thank Sofia for her love, thoughtfulness and generosity and for inspiring me, supporting me and believing in me during times of doubt. And I thank our ǫ, whom I can’t wait to meet. Oscar Marmon Gothenburg April 2010\n\nTill morfar (To my grandfather)\n\nCounting solutions to Diophantine equations Oscar Marmon\n\n1 Introduction\n\nThe study of Diophantine equations is among the oldest branches of mathematics, and also one of the most intriguing. By a Diophantine equation, we mean a polynomial equation in several variables defined over the integers. The term Diophantine refers to the Greek mathematician Diophantus of Alexandria, who studied such equations in the 3rd century A.D. Thus, let f(x_1, . . . , x_n) be a polynomial with integer coefficients. We then wish to study the set of solutions (x_1, . . . , x_n) ∈ Z^n to the equation f(x_1, . . . , x_n) = 0. (1)\n\nThis may be done from several different perspectives. The first question one may ask is perhaps whether or not the Diophantine equation (1) has any solutions at all. Indeed, one of the most famous theorems in mathematics, Fermat’s Last Theorem, proven by Wiles in 1995, states that for f(x, y, z) = x^n + y^n − z^n, where n ≥ 3, there are no solutions in positive integers x, y, z. Qualitative questions of this type are often studied using algebraic methods. Secondly, one may adopt an algorithmic perspective. To give another famous example, the tenth problem in Hilbert’s famous list from 1900 asked for a general algorithm to determine, in a finite number of steps, the solvability of any given Diophantine equation. It was proven by Matiyasevich in 1970 that this problem is unsolvable. In this thesis, we shall focus on a third problem - that of estimating the number of solutions to Diophantine equations. Our methods are both analytic and algebraic in nature. Much attention has been given to cases where the set of solutions to (1) is finite. Thus, for example, if f(x, y, z) is a homogeneous polynomial, the equation f = 0 defines an algebraic curve in the projective plane. The celebrated theorem of Faltings states that there are only finitely many rational points on such a curve if its genus is at least 2. 
In other words, there are only finitely many solutions, with x, y, z relatively prime, to the Diophantine equation (1) in this case.\n\nWhen the number of variables is larger, however, we often expect there to be infinitely many solutions. Still, we want to measure the size of the solution set in some way. One convenient way of expressing such quantitative information is through the counting function N(f, B) = #{x ∈ Z^n ; f(x_1, . . . , x_n) = 0, max_i |x_i| ≤ B}.\n\nEstimates for such counting functions shall occur frequently throughout this thesis. In order to express these estimates, it is convenient at this point to introduce some notation. Notation. We shall interchangeably use the notations Φ(B) ≪ Ψ(B), Φ(B) = O(Ψ(B)) to express the fact that there is a constant c such that Φ(B) ≤ cΨ(B) for B large enough. If c is allowed to depend on certain parameters, this is indicated by subscripts. The notation Φ(B) ∼ Ψ(B) shall mean that lim_{B→∞} Φ(B)/Ψ(B) = 1, and Φ(B) ≈ Ψ(B) means that Φ(B) ∼ cΨ(B) for some constant c.\n\n1.1 A simple heuristic Suppose that f ∈ Z[x_1, . . . , x_n] is a polynomial of degree d ≤ n. Then we can argue as follows to guess the order of magnitude of N(f, B). The values f(x), where x ∈ [−B, B]^n, will be of order B^d. Thus, we might expect the probability that f(x) vanishes for a randomly chosen x ∈ [−B, B]^n to be of order B^{−d}. As the cube [−B, B]^n contains ≈ B^n integral points, we are led to expect that B^{n−d} ≪ N(f, B) ≪ B^{n−d} (2) if d ≤ n. In some cases, this heuristic can be shown to give the correct answer. In particular, the Hardy-Littlewood circle method yields accurate bounds when n is large enough compared to d. Thus, Birch has proved that for a nonsingular homogeneous polynomial f of degree d in n > (d − 1)2^d variables, we have N(f, B) ∼ c_f B^{n−d} as B → ∞, where the constant c_f is positive if the equation f = 0 has a nontrivial solution in R and in each p-adic field Q_p. One may apply the same heuristic arguments to systems of equations. Let us denote the maximum norm of a point x ∈ C^n by |x| = max_i |x_i|. In analogy with the definition of N(f, B), we define a counting function for systems of equations: N(f_1, . . . , f_r, B) = #{x ∈ Z^n ; f_1(x) = · · · = f_r(x) = 0, |x| ≤ B}. Given polynomials f_1, . . . , f_r ∈ Z[x_1, . . . , x_n] of degree d_1, . . . , d_r, respectively, the naïve reasoning above would lead us to expect that B^{n−(d_1+···+d_r)} ≪ N(f_1, . . . , f_r, B) ≪ B^{n−(d_1+···+d_r)} (3) if d_1 + · · · + d_r ≤ n. In Section 2 we will discuss results of this thesis, providing estimates that come quite close to the heuristic upper bounds in (2) and (3), for n of more moderate size than required in the Hardy-Littlewood circle method.\n\n1.2 Integral and rational points on algebraic varieties In the language of algebraic geometry, the equation (1) defines a hypersurface X in affine space A^n. The set of integral solutions to (1) may then be seen as the intersection of X with the integral lattice Z^n. In general, given any locally closed subvariety X ⊂ A^n, we can study the set of integral points X(Z) = X ∩ Z^n. In this thesis, we investigate the quantitative arithmetic of algebraic varieties, to use a term introduced by Browning . This involves understanding the behaviour of counting functions similar to the function N(f, B) introduced above. Thus, let X(Z, B) := X(Z) ∩ [−B, B]^n for any positive real number B, and define the counting function N(X, B) := #X(Z, B). 
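The heuristic of §1.1 and the counting functions just defined are easy to explore by brute force. The following short Python sketch (purely illustrative, not part of the thesis; the polynomial and helper names are ad hoc) counts integer zeros in a box for an example with n = 3 and d = 2, where (2) predicts growth of order B^{n−d} = B:

```python
import itertools

def count_solutions(f, B, n):
    """Brute-force N(f, B): integer zeros of f in the box [-B, B]^n."""
    return sum(1 for x in itertools.product(range(-B, B + 1), repeat=n)
               if f(x) == 0)

# Example: f = x1^2 + x2^2 - x3^2 (n = 3, d = 2); the heuristic (2)
# suggests N(f, B) of order B^(n-d) = B, up to constants and log factors.
f = lambda x: x[0] ** 2 + x[1] ** 2 - x[2] ** 2
for B in (10, 20, 40):
    print(B, count_solutions(f, B, 3))
```

Doubling B should roughly double the count, in line with the predicted linear growth.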
The first simple observation one can make about the growth of N(X, B) is the following standard result. Proposition 1.1. Let X ⊂ A^n be a closed subvariety of dimension m and degree d. Then N(X, B) = O_{n,d}(B^m). (4) Proof. We shall prove (4) by induction on m. If m = 0, then X consists of at most d points, so the estimate follows. Thus, suppose that m > 0. Since X decomposes into at most d irreducible components by Bézout’s theorem, we may assume that X is in fact irreducible.\n\nThen, for some i ∈ {1, . . . , n}, X intersects any hyperplane H_a = {x_i = a}, for a ∈ Q̄, properly. Thus we have X(Z, B) ⊆ ∪_{a∈Z, |a|≤B} (X ∩ H_a)(Z, B), where dim(X ∩ H_a) ≤ m − 1 and deg(X ∩ H_a) ≤ d for all a. Thus we may conclude by induction that N(X ∩ H_a, B) = O_{n,d}(B^{m−1}), so that N(X, B) = Σ_{|a|≤B} N(X ∩ H_a, B) ≪_{n,d} B^m.\n\nWe shall refer to the bound given by Proposition 1.1 as the trivial estimate. If the polynomial f is homogeneous, any multiple of a solution to (1) is again a solution, so it is more natural to study the set of solutions (x_1, . . . , x_n) ∈ Z^n with gcd(x_1, . . . , x_n) = 1. We call these solutions primitive. The primitive solutions correspond to rational points on the projective variety X ⊂ P^{n−1} defined by (1). More precisely, consider projective space P^n over Q̄, defined as the set of equivalence classes [x] of non-zero elements x = (x_0, . . . , x_n) ∈ Q̄^{n+1} under the equivalence relation ∼ given by (x_0, . . . , x_n) ∼ (λx_0, . . . , λx_n) for all λ ∈ Q̄ \\ {0}. Let P^n(Q) be the set of points x ∈ P^n for which we can find a representative x ∈ Q^{n+1} such that [x] = x. If X ⊂ P^n is a locally closed subvariety, we define X(Q) = X ∩ P^n(Q). For each rational point x ∈ X(Q), one can find a representative x ∈ Z^{n+1} for x such that gcd(x_0, . . . , x_n) = 1. Moreover, x is uniquely determined up to a choice of sign. We then define the height of the rational point x ∈ P^n(Q) as H(x) = max{|x_0|, . . . , |x_n|}. The density of rational points on X is measured by the counting function N(X, B) := #{x ∈ X(Q), H(x) ≤ B}. For any positive real number B, we also define the set S(X, B) := {x ∈ Z^{n+1} ; [x] ∈ X(Q), |x| ≤ B}.\n\nRemark 1.1. The Möbius function µ may be used to single out the primitive points in S(X, B), as explained in [8, §1.2]. Thus we have N(X, B) = (1/2) Σ_{k=1}^∞ µ(k) #S(X, k^{−1}B) (5) and #S(X, B) = 2 Σ_{k=1}^∞ N(X, k^{−1}B). (6) In particular, it is easy to see that if θ > 1, then N(X, B) ≪ B^θ if and only if #S(X, B) ≪ B^θ. More generally, one may impose individual restrictions on the coordinates x_0, . . . , x_n. Let B = (B_0, . . . , B_n) be an (n + 1)-tuple of positive real numbers. Then we define S(X, B) = {x ∈ Z^{n+1} ; [x] ∈ X(Q), |x_i| ≤ B_i, i = 0, . . . , n}. Remark 1.2. If K is a number field, one may define the height of a point x ∈ P^n(K) represented by (x_0, . . . , x_n) ∈ K^{n+1} as H_K(x) = Π_{v∈M_K} max{‖x_0‖_v, . . . , ‖x_n‖_v}, where M_K is the set of standard absolute values on K (see [18, B.1]). It follows from the product formula [18, B.1.2] that H_K(x) is independent of the choice of representative for x. Consequently, we may define a counting function N_K(X, B) = #{x ∈ X(K); H_K(x) ≤ B}. We shall mostly be interested in the case K = Q, but the conjectures we shall describe shortly, relating geometry and arithmetic, are most conveniently formulated over general number fields. Example 1.1. It is easy to see that N(A^n, B) = (2⌊B⌋ + 1)^n. 
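The Möbius relation of Remark 1.1 can also be checked numerically. The sketch below (illustrative Python, not from the thesis; it takes X = P^1, so that S(P^1, B) consists of all non-zero integer pairs in the box and N(P^1, B) counts primitive pairs up to sign) verifies identity (5) directly:

```python
from math import gcd

def mobius(n):
    """Naive Möbius function, computed by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # a squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def N_P1(B):
    """N(P^1, B): primitive pairs (a, b), max(|a|, |b|) <= B, up to sign."""
    pairs = sum(1 for a in range(-B, B + 1) for b in range(-B, B + 1)
                if (a, b) != (0, 0) and gcd(a, b) == 1)
    return pairs // 2

def S_P1(B):
    """#S(P^1, B): all non-zero integer pairs with max(|a|, |b|) <= B."""
    return (2 * B + 1) ** 2 - 1

B = 50
lhs = N_P1(B)
rhs = sum(mobius(k) * S_P1(B // k) for k in range(1, B + 1)) // 2
print(lhs, rhs)  # identity (5) predicts that these agree
```

Both counts agree, as identity (5) demands; the same comparison with growing B also illustrates that N(P^1, B) and #S(P^1, B) have the same order of magnitude.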
Counting rational points in projective space is already more subtle. By a theorem of Schanuel , we have N (Pn , B) =\n\n2n−2 ζ(n + 1)\n\nB n+1 + O(B n (log B) bn ),\n\nwhere b1 = 1, bn = 0 for n ≥ 2. For projective varieties, the trivial estimate is given by the following proposition. 5\n\nProposition 1.2. Let X ⊂ Pn be a closed subvariety of dimension m and degree d. Then N (X , B) = On,d (B m+1 ). Proof. This follows from Proposition 1.1 by considering the affine cone C(X ) ⊂ An+1 over X . This is a variety of dimension m + 1 and degree d, so N (X , B) ≤ N (C(X ), B) = On,d (B m+1 ).\n\nThe trivial estimate is obviously best possible for varieties containing a linear component defined over Q. For any irreducible variety of degree at least 2, however, it can be improved upon. The most general result is due to Pila , who proves that N (X , B) = On,d,ǫ (B m−1+1/d+ǫ )\n\n(7)\n\nN (X , B) = On,d,ǫ (B m+1/d+ǫ )\n\n(8)\n\nin the affine case, and\n\nin the projective case.\n\n1.3 The relation between geometry and arithmetic There is a general philosophy in Diophantine geometry that “geometry governs arithmetic”. For curves, one has a very satisfactory characterization of the density of rational points in terms of the genus, as explained in [18, Thm. B.6.2]. Let C ⊂ Pn be a smooth curve over a number field K. If g(C) = 0 and C(K) 6= ∅, then C is isomorphic to P1 over K, which implies that NK (C, B) ≈ B 2/d , where d is the degree of C. If g(C) = 1, then C is an elliptic curve, and by the Mordell-Weil theorem, C(K) is a finitely generated abelian group. Then NK (C, B) ≈ (log B) r/2 , where r is the rank of C(K). Finally, if g(C) ≥ 2, then Faltings’ Theorem states that C(K) is a finite set. In other words, NK (C, B) = O(1). A similar characterization for higher-dimensional varieties is yet to be discovered, but there are conjectures inspired by the one-dimensional case. If X is a smooth projective variety over K, let KX be a divisor in the canonical class (see [32, III.6.3]). The variety X is said to be of general type if KX is ample, i.e. if some multiple nKX defines an embedding of X into some projective space. A curve is of general type if and only if g ≥ 2 [13, Prop. IV.5.2]. Thus, in 6\n\nanalogy with the case of curves, it is expected that rational points are scarce on such varieties. Indeed the Bombieri-Lang Conjecture [18, Conj. F. 5.2.1] states that X (K) is not Zariski dense in X if X is of general type. At the opposite end of the spectrum, a variety X is called a Fano variety if −KX is ample. Such varieties are believed to possess “many” rational points. Batyrev and Manin have formulated very general conjectures relating the density of rational points on a variety X to certain geometric invariants. They are most precise in the case of Fano varieties. For simplicity, suppose that we have an embedding X ⊂ Pn . If X is Fano, then it is predicted that there is a non-empty open subset U ⊆ X such that NK (U, B) ∼ cX B α (log B) t−1\n\n(9)\n\nfor a certain α > 0 and a certain integer t ≥ 1, possibly after replacing K by a finite extension K ⊆ K ′ . For the precise definition of α and t we refer to , although a counterexample due to Batyrev and Tschinkel shows that the suggested interpretation of t is not always correct. For a further discussion of this counterexample and its consequences, as well as the many cases for which the conjectural asymptotic formula has been verified, one may consult Peyre’s survey article . 
In the following important example, however, the invariants α and t have simple interpretations. Example 1.2. Suppose that X = V_1 ∩ · · · ∩ V_r is an intersection of hypersurfaces V_i ⊂ P^{n−1} of degrees d_i, respectively, and that dim X = n − 1 − r. In this case, we call X a complete intersection of multidegree d = (d_1, . . . , d_r). If X is non-singular, then K_X = −(n − (d_1 + · · · + d_r))H, where H is a hyperplane section. Therefore X is Fano if and only if n > d_1 + · · · + d_r, and in this case, α = n − (d_1 + · · · + d_r). Then [1, Conj. B’] states that N_{K′}(U, B) ∼ cB^{n−(d_1+···+d_r)}(log B)^{t−1}, (10) provided U ⊂ X is a sufficiently small dense open subset, and K′ ⊇ K is sufficiently large. Moreover, it is conjectured that the integer t in (10) equals the rank of the Picard group of X. By [12, Cor. IV. 3.2], Pic(X) ≅ Z if X is a complete intersection of dimension at least 3, and thus the log-factor in (10) vanishes in this case. In particular, for a hypersurface of degree d, the conjectural asymptotic formula (10) is in accordance with the heuristic (2) and Birch’s theorem.\n\n1.4 The dimension growth conjecture One may also ask for a very general upper bound for the growth rate of N(X, B), requiring as little information as possible about the variety X. In this direction, Heath-Brown has made the following conjecture ( or [16, Conj. 2]). Conjecture 1. Let F ∈ Z[x_1, . . . , x_n] be an irreducible homogeneous polynomial of degree d. Then N(F, B) ≪_{n,d,ǫ} B^{n−2+ǫ}. Using birational projections, one can show that Conjecture 1 implies the following more general statement ([7, Conj. 2]): Conjecture 2. Let X ⊂ P^n be an irreducible closed subvariety of dimension m and degree d ≥ 2. Then N(X, B) ≪_{n,d,ǫ} B^{m+ǫ}. (11) We refer to this statement, or a weaker version of it where the implied constant is allowed to depend on X, as the dimension growth conjecture. Conjecture 2 has now been established in many cases, mainly due to Heath-Brown, Browning and Salberger. A table summarizing the progress may be found in Browning [8, Table 3.1]. Several different methods have been employed to treat different cases, including the approach with exponential sums introduced in Section 2 and the determinant method of Section 3, as well as the Hardy-Littlewood circle method. The case of cubic hypersurfaces has turned out to be the most difficult one . More precisely, the remaining open case of the dimension growth conjecture is that of a singular hypersurface X ⊂ P^n of degree d = 3, where 5 ≤ n ≤ 5 + dim Sing X. Example 1.3. Consider the surface X ⊂ P^3 given by x_1^3 − x_2^3 + x_3^3 − x_4^3 = 0. Although X is irreducible, and even non-singular, it obviously contains the line given by x_1 = x_2 and x_3 = x_4. The contribution from this line alone shows that N(X, B) ≫ B^2, so this provides an example where the bound (11) is best possible. On the other hand, the heuristic (2) suggests an estimate of order B. Thus, we might be tempted to pursue a better bound for the dense open subset U ⊂ X obtained by deleting all lines on X. By a recent result of Salberger [28, Cor. 6.5], one obtains N(U, B) ≪ B^{√3}(log B)^4.\n\nIn this case, the only lines on X are those given by x_{σ(1)} − x_{σ(2)} = x_{σ(3)} − x_{σ(4)} = 0 for some permutation σ ∈ S_4. 
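The dominance of the lines in Example 1.3 is easy to observe computationally. In this toy Python sketch (illustrative only; the line test encodes the permutation lines above as multiset equality of {x_1, x_3} and {x_2, x_4}), the box B = 12 already contains non-trivial solutions coming from the taxicab identity 1^3 + 12^3 = 9^3 + 10^3:

```python
import itertools

def on_trivial_line(x):
    # the lines of Example 1.3 are exactly the solutions for which
    # {x1, x3} and {x2, x4} agree as multisets
    return sorted((x[0], x[2])) == sorted((x[1], x[3]))

B = 12
total = on_lines = 0
for x in itertools.product(range(-B, B + 1), repeat=4):
    if x[0] ** 3 - x[1] ** 3 + x[2] ** 3 - x[3] ** 3 == 0:
        total += 1
        on_lines += on_trivial_line(x)
print(total, on_lines, total - on_lines)  # most points lie on the lines
```

The count of points on the lines grows quadratically in B, while the remaining solutions are far sparser, which is precisely why one passes from X to the open subset U.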
In general, rational points on a variety X may accumulate along a certain algebraic subset A, and it makes sense to study the density of rational points on the open subset U = X \\ A rather than on X.\n\n1.5 Counting representations of integers as sums or differences of powers (Paper IV) Now we shall consider a particularly well studied Diophantine equation. The classical Waring’s problem asks for the least integer s = g(k) such that the equation x_1^k + · · · + x_s^k = N (12) has a solution (x_1, . . . , x_s) ∈ N^s for any N ∈ N. In other words, we ask for the smallest integer s such that any natural number can be represented as a sum of s k-th powers of natural numbers. A related question concerns the number of representations of N on the form (12), denoted R_{k,s}(N). One easily deduces the bound R_{k,s}(N) = O_ǫ(N^{(s−2)/k+ǫ}). (13) Indeed, the case s = 2, from which the general case is easily obtained by induction, follows from well-known estimates for the divisor function. Remark 1.3. Moreover, we trivially have Σ_{n=1}^N R_{k,s}(n) ≪ N^{s/k}, so one might heuristically expect that R_{k,s}(N) = O_ǫ(N^ǫ) if s ≤ k and R_{k,s}(N) = O_ǫ(N^{s/k−1+ǫ}) if s > k. The former bound would follow from the so called Hypothesis K of Hardy and Littlewood, stating that R_{k,k}(N) = O(N^ǫ) for any natural number k. Hypothesis K was proven to be false for k = 3 by Mahler , but it remains open for larger k. In Paper IV of this thesis, we study the case s = 4. Thus, let R_k(N) := R_{k,4}(N). For sums of four cubes, Hooley has proved the remarkable estimate R_3(N) = O_ǫ(N^{11/18+ǫ}), using sieve methods. Wisdom [36, 37] extended Hooley’s methods to prove that R_k(N) = O_ǫ(N^{11/(6k)+ǫ}) for odd integers k ≥ 3. In Paper IV, the following result is proven.\n\nTheorem 1.1 (Paper IV, Thm. 1.3). R_k(N) ≪_{k,ǫ} N^{1/k+2/k^{3/2}+ǫ} for any ǫ > 0.\n\nMore generally, we consider sums of three k-th powers and an l-th power. The results follow rather easily from recent work of Salberger , discussed further in Section 3. The main part of Paper IV is devoted to another variant of the same problem, where some of the + signs in (12) are replaced by −, and the variables x_i are allowed to be arbitrary integers. This problem was recently studied by Heath-Brown in the case s = 3. Now there may well be infinitely many solutions, so it makes sense to study the density of solutions. Thus, for k ≥ 3 and ǫ_2, ǫ_3 ∈ {±1}, let R(N, B) be the number of solutions x ∈ Z^3 to x_1^k + ǫ_2 x_2^k + ǫ_3 x_3^k = N, (14) satisfying max |x_i| ≤ B. Here we have the trivial estimate (cf. (13)) R(N, B) = O_ǫ(B^{1+ǫ}). Call a solution special if one of the terms ±x_i^k equals N. It may happen that the contribution to R(N, B) of the special solutions is of order B. If R_0(N, B) denotes the number of non-special solutions to the equation (14) satisfying max |x_i| ≤ B, then Heath-Brown proves that R_0(N, B) = O_k(B^{10/k}) if N ≪ B, and that R_0(N, B) = O_ǫ(B^{9/10+ǫ}N^{1/10}) if N ≪ B^{3/13}. In Paper IV, we consider the case s = 4, in the following more general setting. For a quadruple of non-zero integers (a_1, a_2, a_3, a_4) and a positive integer N, we consider the equation a_1 x_1^k + a_2 x_2^k + a_3 x_3^k + a_4 x_4^k = N, (15) where x_i ∈ Z. Reusing the notation above, let R(N, B) be the number of solutions x ∈ Z^4 to (15) satisfying max |x_i| ≤ B. We think of N as being considerably smaller than B^k, as opposed to in Theorem 1.1, where we had a natural bound B = N^{1/k} for the height of the solutions.\n\nIn this case, the “trivial” estimate R(N, B) = O_ǫ(B^{2+ǫ}) (16)
In this case, the “trivial” estimate R(N , B) = Oǫ (B 2+ǫ ) (16) 10\n\nmay be deduced from known bounds for the number of solutions to Thue equations. As above, call a solution x to special if either ai x ik = N for some index i or ai x ik + a j x kj = N for some pair of indices i, j. Again, the contribution of the special solutions to R(N , B) may be of order B, as the example N = 1, a = (1, −1, 1, 1), x = (t, t, 0, 1) shows. However, one can show that the contribution is at most Oǫ (BN ǫ ). If R0 (N , B) denotes the number of non-special solutions to (15) with |x i | ≤ B for all i, then the main result of Paper IV is the following. Theorem 1.2 (Paper IV, Thm. 1.1). For any ǫ > 0 we have p\n\nR0 (N , B) ≪ai ,N ,ǫ B 16/(3\n\n3k)+ǫ\n\np\n\n(B 2/\n\nk\n\np\n\n+ B 1/\n\nk+6/(k+3)\n\n).\n\n(17)\n\nIn particular, R(N , B) ≪ai ,N B for k ≥ 27. This estimate improves upon the trivial estimate (16) as soon as k ≥ 8. There is also a version of Theorem 1.2 where the dependence on N is explicit. The method of proof is discussed in further detail in Section 3. Remark 1.4. In the notation used earlier, we have R(N , B) = N (X , B), where X ⊂ An is the hypersurface given by (15), and R0 (N , B) = N (U, B), where the open subset U ⊂ X is the complement of all lines contained in X (cf. Example 1.3).\n\n2\n\nThe method of exponential sums\n\nAs mentioned above, the Hardy-Littlewood circle method may be used to prove asymptotic formulae for the density of solutions to Diophantine equations, provided the number of variables is large enough. The method described in this section gives slightly weaker bounds, but is useful when the number of variables is smaller. We begin by stating a result of Heath-Brown that has provided the inspiration for the first two papers in this thesis. By the leading form of a polynomial, we shall mean its homogeneous part of maximal degree. Theorem 2.1 (Heath-Brown [15, Thm. 2]). Let f ∈ Z[x 1 , . . . , x n ], where n ≥ 5, be a polynomial of degree at least 3, with leading form F . Suppose that F n−1 defines a non-singular hypersurface in PQ . Then N ( f , B) ≪ f B n−3+15/(n+5) . 11\n\nNote that for cubic polynomials, this estimate approaches the upper bound predicted by our heuristic consideration (2) and the Batyrev-Manin conjectures, as n → ∞. In Paper I, we prove the following generalization of Theorem 2.1 to varieties defined by several equations. Here we denote the height of a polynomial F ∈ C[x 1 , . . . , x n ], defined as the maximum of the absolute values of the coefficients of F , by kF k. Theorem 2.2 (Paper I, Thm. 1.1). Let f1 , . . . , f r ∈ Z[x 1 , . . . , x n ] be polynomials of degree at least 3 and at most d, with leading forms F1 , . . . , F r , respectively. Suppose that F1 , . . . , F r define a non-singular complete intersection of codimenn−1 . Then sion r in PQ\n\nN ( f1 , . . . , f r , B) ≪n,d B\n\nn−3r+r 2\n\n13n−5−3r n2 +4nr−n−r−r 2\n\n(log B)n/2\n\nr X\n\n!2r+1\n\nlog F . i\n\ni=1\n\nAgain, if d = 3, this estimate approaches the conjectural one as n → ∞. In the case r = 1, the exponent n−3+\n\n13n − 8\n\n2\n\nn + 3n − 2\n\noffers a slight improvement upon the exponent of Theorem 2.1, the nature of which will be explained in §2.7. Unfortunately, this is not enough to extend the range n ≥ 10 in which Theorem 2.1 validates Conjecture 1. In Paper II, the aim is to improve upon the estimate in Theorem 2.1 for polynomials of higher degree. The following is our main result. Theorem 2.3 (Paper II, Thm. 1.2). Let f ∈ Z[x 1 , . . . 
, x n] be a polynomial of degree d ≥ 4 with leading form F . Suppose that F defines a non-singular hypern−1 . Then surface in PQ 2\n\nN ( f , B) ≪n,d,ǫ B n−4+(37n−18)/(n\n\n+8n−4)\n\n+ B n−3+ǫ .\n\nThis estimate improves upon Theorem 2.1 as soon as d ≥ 4 and n ≥ 11. For very large n, the results above fall in importance, in favour of the HardyLittlewood circle method, at least for forms. Indeed, recall that Birch’s work yields the bound N ( f , B) ≪ B n−4 if d = 4 and n ≥ 49. This has recently been improved to n ≥ 41 by Browning and Heath-Brown . The proofs of Theorems 2.2 and 2.10 are discussed in §2.7-2.8. 12\n\n2.1 Counting solutions to congruences The method used to prove the above results starts with the trivial observation that a solution x ∈ Zn to the equation f (x 1 , . . . , x n) = 0 is a fortiori a solution to the congruence f (x 1 , . . . , x n) ≡ 0 (mod q) (18) for any integer q. To count solutions to a polynomial congruence (18), one may use exponential sums. We adopt the standard notation e(x) := e2πi x and eq (x) := e(x/q). If t is an integer, then we have the following identity: ( q X q if q | t, eq (at) = (19) 0 otherwise. a=1 Let R = Z/qZ, and let f ∈ R[x 1 , . . . , x n ] be a polynomial. If N ( f ) denotes the number of solutions x ∈ Rn to the equation f (x) = 0, then we may use (19) to obtain the formula q 1XX N(f ) = eq (a f (x)). q a=1 x∈Rn For a = q, the inner sum equals q n , so we get q−1\n\nN(f ) − q\n\nn−1\n\n=\n\n1XX q\n\na=1\n\neq (a f (x)).\n\n(20)\n\nx∈Rn\n\nThus, we can get a bound for the deviation P of N ( f ) from its expected value q n−1 by estimating the exponential sums x eq (a f (x)). Suppose now that q = p is a prime. Let f ∈ F p [x 1 , . . . , x n ] be a polynomial of degree d, where (d, p) = 1. Suppose that the leading form F of f defines a non-singular hypersurface in Pn−1 over F p . Then we have Deligne’s estimate [9, Thm. 8.4] X e p ( f (x)) ≪n,d p n/2 . (21) x∈Fnp\n\nAn immediate corollary of (21) is that N ( f ) = p n−1 + On,d (p n/2 ). More generally, Deligne proves the following result. Theorem 2.4 (Deligne [9, Thm. 8.1]). Let X ⊂ PFn p be a non-singular complete intersection of dimension m = n − r and multidegree d = (d1 , . . . , d r ). Then #X (F p ) = #Pm (F p ) + On,d (p m/2 ). 13\n\nRemark 2.1. We have #Pm (F p ) = p m + p m−1 + · · · + 1.\n\nTheorem 2.4 and the estimate (21) are both consequences of the Weil conjectures, in particular the Riemann hypothesis for varieties over finite fields, proven by Deligne in .\n\n2.2 Counting solutions of bounded height to congruences It seems desirable to extend the above results in two directions. First, one may ask what happens for singular varieties. This question is addressed by Hooley , who proves the following generalization of Theorem 2.4. Theorem 2.5 (Hooley [20, Thm. 2]). Let X be a complete intersection in PFn p of dimension m = n − r and multidegree d, and let s be the dimension of the singular locus of X . Then #X (F p ) = #Pm (F p ) + On,d (p(m+s+1)/2 ). Secondly, we are interested in counting solutions of bounded height to congruences, rather than all solutions. Indeed, if we define N ( f1 , . . . , f r , B, q) = #{x ∈ Zn ; f i (x) ≡ 0 (mod q), 1 ≤ i ≤ r, |x| ≤ B}, then we have N ( f1 , . . . , f r , B) ≤ N ( f1 , . . . , f r , B, q)\n\nfor any integer q. If B ≪ q, we may identify Z ∩ [−B, B]n with a subset of (Z/qZ)n . It is sometimes convenient to replace the characteristic function of the box [−B, B]n by a smooth weight function. 
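The congruence-counting quantities of §2.1 are concrete enough to test directly. The following toy Python computation (illustrative only, not part of the thesis; the diagonal cubic is just an example whose leading form is non-singular over F_p for the primes used) compares N(f) with its expected value p^{n−1} and exhibits the square-root cancellation predicted by Deligne's estimate (21):

```python
import itertools

def N_mod_p(f, p, n):
    """Number of solutions of f(x) = 0 over (Z/pZ)^n, by brute force."""
    return sum(1 for x in itertools.product(range(p), repeat=n)
               if f(x) % p == 0)

# diagonal cubic in n = 3 variables; non-singular over F_p for p > 3
f = lambda x: x[0] ** 3 + 2 * x[1] ** 3 + 3 * x[2] ** 3
for p in (7, 11, 13, 17):
    Np = N_mod_p(f, p, 3)
    # the deviation from p^(n-1) should be of size O(p^(n/2)) = O(p^1.5)
    print(p, Np, Np - p ** 2)
```

The deviations stay well below p^2 and of rough size p^{3/2}, as the Weil-Deligne bounds predict.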
More precisely, let W : Rn → [0, 1] be an infinitely differentiable function, supported on [−2, 2]n . Then we define a weighted counting function \u0012 \u0013 X 1 x . W NW ( f1 , . . . , f r , B, q) = B n x∈Z q| f 1 (x),..., f r (x)\n\nWe may for example take W to be the function defined by ( n Y exp(−1/(1 − t 2)), w(t i /2), where w(t) = W (t) = 0, i=1\n\n|t| < 1,\n\n|t| ≥ 1.\n\n(22)\n\nIt is then clear that N ( f1 , . . . , f r , B, q) ≪ NW ( f1 , . . . , f r , B, q). In the hypersurface case, Heath-Brown proves the following asymptotic formula. 14\n\nTheorem 2.6 (Heath-Brown [15, Thm. 3]). Let f ∈ Z[x 1 , . . . , x n] be a polynomial of degree d ≥ 2. Let q be a prime and B ≪ q a real number. Let Zq the hypersurface in PFnq defined by the leading form F of f . Then NW ( f , B, q) = q\n\n\u0012\n\nX\n\n−1\n\nW\n\n1 B\n\nx∈Zn\n\n\u0013 Š € x + On,d,W B s+1 q(n−s−1)/2 ,\n\nwhere s is the dimension of the singular locus of Zq . The following result of Paper I extends Theorem 2.6, and improves slightly upon its error term. It may be viewed as a weighted, affine version of Theorem 2.5. Theorem 2.7 (Paper I, Thm. 3.3). Let f1 , . . . , f r ∈ Z[x 1 , . . . , x n ] be polynomials of degree at least 2 and at most d, with leading forms F1 , . . . , F r , respectively. Let q be a prime and B a real number with 1 ≤ B ≪ q. Suppose that F1 , . . . , F r define a closed subscheme Zq ⊂ PFn−1 of codimension r. Then q NW ( f1 , . . . , f r , B, q) = q\n\n−r\n\n\u0012\n\nX W x∈Zn\n\n1 B\n\n\u0013 € Š x + On,d,W B s+2 q(n−r−s−2)/2 ,\n\nwhere s is the dimension of the singular locus of Zq . Remark 2.2. We have simplified the error term somewhat, by noting that the theorem is trivially true if B ≪ q1/2 (cf. Remark 3.1 in Paper II). We shall discuss the proof of Theorem 2.7 in §2.7. In the appendix to Paper I, Salberger proves a version of Theorem 2.7 without the smooth weight function W , but with a slightly larger error term. To state it, we need to define counting functions for general boxes. By a box B in Rn , we mean a product of closed intervals. If q is a positive integer, we define N ( f1 , . . . , f r , B, q) = #{x ∈ B ∩ Zn ; f1 (x) ≡ · · · ≡ f r (x) ≡ 0 (mod q)}. Theorem 2.8 (Salberger). Let f1 , . . . , f r be r < n polynomials in Z[x 1 , . . . , x n] with leading forms F1 , . . . , F r , respectively, of degree at least 2 and at most d. Let q be a prime and B be a box in Rn such that each side has length at most 2B < q. Suppose that F1 , . . . , F r define a closed subscheme Zq ⊂ PFn−1 of codimension r. q Then N ( f1 , . . . , f r , B, q) = q−r #(B ∩ Zn ) + On,d (B s+2 q(n−r−s−2)/2 (log q)n ), where s is the dimension of the singular locus of Zq . 15\n\n2.3 Weyl differencing The method used by Heath-Brown to derive the estimate in Theorem 2.1 has its roots in classical techniques for exponential sums in one variable. In its original form, the idea is due to Hermann Weyl, who pioneered the use of exponential sums in number theory, in his paper Über die Gleichverteilung von Zahlen mod. Eins from 1916 . One of the fundamental tools in this paper is a procedure now known as “Weyl differencing”. If f : Z → R is any function, the idea is to bound the size of the exponential sum S=\n\nB X\n\ne( f (x))\n\nx=1\n\nfrom above by a mean value of exponential sums involving differenced functions fh(x) = f (x + h) − f (x). In case f is a polynomial, the differenced function will be a polynomial of lower degree. 
To achieve this, one writes 2\n\n|S| = =\n\nB X\n\ne(− f (x))\n\nX\n\nX\n\ne( f (x ′ ))\n\nx ′ =1\n\nx=1\n\nX\n\n|h|\n=\n\nB X\n\nX\n\ne( f (x + h) − f (x)) e( fh (x)).\n\n|h|\nIf f is a polynomial of degree d, one can iterate this procedure d − 1 times, using Cauchy’s inequality, until one has exponential sums involving linear polynomials. These are sums over geometric progressions, and are thus easily estimated. For a more precise account of Weyl’s method, and applications to the Riemann zeta function, one may consult Iwaniec & Kowalski [21, Ch. 8].\n\n2.4 Van der Corput’s method A refinement of Weyl’s method was devised by van der Corput [33, 34]. His method involves two processes, now known as A and B, that may be iterated alternatingly to produce increasingly sharper bounds for certain exponential sums. The A-process is a refinement of the Weyl differencing described above. Changing notation slightly, we consider a function F : Z → C and let S=\n\nB X\n\nF (x).\n\nx=1\n\n16\n\nLet χ be the characteristic function of the interval [1, B]. One then introduces a parameter H ≤ B, and writes HS =\n\nH X X\n\nχ(x + h)F (x + h) =\n\nh=1 x∈Z\n\nH XX\n\nχ(x + h)F (x + h).\n\nx∈Z h=1\n\nBy Cauchy’s inequality we have 2 ! H B−1 X X X 2 2 1 χ(x + h)F (x + h) H |S| ≤ x∈Z h=1 x=1−H X X ≤ (2B) χ(x + h1 )χ(x + h2 )F (x + h1 )F (x + h2 ). x∈Z 1≤h1 ,h2 ≤H\n\nA variable change furnishes X #{(h1 , h2 ); h1 − h2 = h} |S|2 ≪ BH −2 |h|\n≪ BH −1\n\nX\n\nX\n\nX\n\nF (x + h)F (x)\n\n1≤x≤B 1≤x+h≤B\n\nF (x + h)F (x).\n\n|h|\nIf F (x) = e( f (x)), then F (x + h)F (x) = e( fh(x)) as defined above. The introduction of the parameter H adds a new level of flexibility compared to Weyl’s method. Loosely speaking, the B-process in van der Corput’s method uses Poisson’s summation formula to transform our exponential sum into another one of shorter length, under suitable smoothness assumptions on the function f . A thorough treatment of van der Corput’s method is given in .\n\n2.5 Heath-Brown’s q-analogue In Heath-Brown introduced a q-analogue of van der Corput’s method, applicable in the case where F is a periodic function, say F (x) = eq ( f (x)) for some function f : Z → Z. For a suitable divisor q0 of q, we take as our starting point the formula HS =\n\nH X X\n\nχ(x + hq0 )F (x + hq0 ).\n\nh=1 x∈Z\n\nIn the resulting estimate |S|2 ≪ BH −1\n\nX\n\nX\n\n|h|\n1≤x≤B 1≤x+hq0 ≤B\n\n17\n\nF (x + hq0 )F (x),\n\nwe have then also achieved a shortening of the period of the summand to q/q0 , which is often favourable.\n\n2.6 A multi-dimensional q-analogue The method developed by Heath-Brown to prove Theorem 2.1 may be viewed as an extension of his q-analogue of van der Corput’s method to exponential sums in several variables. The idea is to bound N ( f , B) from above by NW ( f , B, m), where W is the weight function defined in (22) and m is a composite number. More precisely, one chooses two primes p,q satisfying p ≤ B ≤ q and puts m = pq. Here, p will play the role of the integer q0 above. Thus, one divides the sum \u0012 \u0013 X 1 W NW ( f , B, pq) = x B x∈Zn pq| f (x)\n\ninto congruence classes (mod p); NW ( f , B, pq) =\n\n\u0012\n\nX X W u∈Fnp x≡u(p) p| f (u) q| f (x)\n\n1 B\n\n\u0013 x .\n\nThe expected value of the inner sum is −n −1\n\nK := p q\n\n\u0012\n\nX W x∈Zn\n\n1 B\n\n\u0013 x ,\n\nso one writes  NW ( f , B, pq) = K\n\nX u∈Fnp p| f (u)\n\n\u0012 \u0013  X  1 X  W x − K 1+  B x≡u(p)  u∈Fn p\n\np| f (u)\n\nq| f (x)\n\n  \u0012 \u0013  X X  1   W ≪ B n p−1 q−1 + x − K  . 
 B  u∈Fnp x≡u(p) p| f (u) q| f (x)\n\n(23)\n\nVan der Corput differencing is then applied to the rightmost sum, which leads one to estimate counting functions for the hypersurfaces in AFnq defined by differenced polynomials f y (x) := f (x + py) − f (x). 18\n\nTo this end, one uses Theorem 2.6. Thus, geometric arguments are needed to keep track of the quantity s appearing in Theorem 2.6, or, in other words, to determine how singular the “differenced” hypersurfaces are.\n\n2.7 Generalization to complete intersections (Paper I) Let f1 , . . . , f r and F1 , . . . , F r be as in the hypotheses of Theorem 2.2. As in Heath-Brown’s proof of Theorem 2.1, we shall work with a modulus that is a product of two primes. More precisely, if Z = Proj Z[x 1 , . . . , x n ]/(F1 , . . . , F r ), then the primes p and q are chosen so that 2p < 2B + 1 < q − p, and both ZFp and ZFq are non-singular of codimension r. The differencing procedure that lies at the heart of the proof of Theorem 2.2 is a rather straightforward extension of that in , one major difference being that we avoid the use of a smooth weight function. Thus, let χB be the characteristic function of the box B = [−B, B]n . Writing N := N ( f1 , . . . , f r , B, pq) we have  X\n\nN =K\n\n1+\n\nX u∈Fnp p| f 1 (u),..., f r (u)\n\nu∈Fnp p| f 1 (u),..., f r (u)\n\n   \n\n X x≡u(p) q| f 1 (x),..., f r (x)\n\n  χB (x) − K  , \n\n(24)\n\nwhere K = p−n q−r (2B + 1)n . As in (23), van der Corput differencing is applied to the rightmost sum in y (24). This procedure introduces differenced polynomials f1 , . . . , f ry , for y ∈ Fqn , defined by y f i (x) := f i (x + py) − f i (x). Defining the boxes\n\nBy :=\n\nx ∈ Zn ; x ∈ B, x + py ∈ B , P we are then required to estimate the sum y∈Zn ∆(y), where \b\n\n∆(y) = N ( f1 , . . . , f ry , By , q) − q−r #(By ∩ Zn ). y\n\n19\n\ny\n\ny\n\nIf the leading form of f i is denoted Fi for 1 ≤ i ≤ r, let y\n\nZy = Proj Fq [x 1 , . . . , x n]/(F1 , . . . , F ry ). P y (In fact, Fi = py·∇Fi for all i, unless y = 0.) Then the estimation of ∆(y) by means of Theorem 2.8 requires uniform bounds for the dimension and degree of the closed subset of y ∈ An such that dim Sing(Zy ) ≥ s, for all possible values of s ≥ −1. This problem is addressed in the geometric part of the paper. The output of the method just described is the following asymptotic formula (Paper I, Theorem 1.4): N ( f1 , . . . , f r , B, pq) =\n\n(2B + 1)n pr qr\n\n+ On,d\n\n€€\n\nB (n+1)/2 p−r/2 q(n−r−1)/4\n\n+B (n+1)/2 p(n−2r)/2 q−1/4 + B n/2 p−r/2 q(n−r)/4 Š +B n/2 p(n−r)/2 + B n p−(n+r−1)/2 q−r + B n−1 p−r+1 q−r )(log q)n/2 .\n\n(25)\n\nTo deduce Theorem 2.2 from the asymptotic formula (25), we try to find primes p and q optimizing the expression on the right hand side, and at the same time satisfying the hypotheses that ZFp and ZFq be non-singular. It is only at this point that a dependence on the height of the polynomials is introduced into our estimates. A careful analysis using universal forms allows us to state this dependence explicitly in Theorem 2.2. Now we turn to Theorem 2.7. The main part of the proof concerns the non-singular case s = −1. As in , we apply Poisson’s summation formula to the counting function N ( f1 , . . . , f r , B, q), in a q-analogue of van der Corput’s B-process. One is then lead to estimate exponential sums X eq (a · x), x∈Fqn q| f 1 (x),..., f r (x)\n\nwhere a ∈ Fqn . Following Luo , we use estimates due to Katz to bound these exponential sums. 
Already for hypersurfaces, this yields an improvement upon Heath-Brown’s approach using Deligne’s bounds. The general case is proven by induction on s. Part of the paper is devoted to geometric considerations needed in order to carry out this induction in a uniform manner. At the end of Paper I, we sketch a proof of the following weighted version of the asymptotic formula (25), using Theorem 2.7 in place of Theorem 2.8. Theorem 2.9. Let W : Rn → [0, 1] be an infinitely differentiable function supported on [−2, 2]n . Let f1 , . . . , f r be polynomials in Z[x 1 , . . . , x n] of degree at least 3 and at most d, with leading forms F1 , . . . , F r . Let Z = Proj Z[x 1 , . . . , x n ]/(F1 , . . . , F r ) 20\n\nand suppose that p and q are primes, with p ≤ B ≤ q, such that both ZFp and ZFq are non-singular subvarieties of codimension r in Pn−1 . Then we have −r −r\n\nNW ( f1 , . . . , f r , B, pq) − p q ≪ D,n,d,C B\n\nX\n\n\u0012 W\n\n1 B\n\n\u0013 x\n\nx∈Zn (n+1)/2 −r/2 (n−r−1)/4\n\np\n\nq\n\n+ B (n+1)/2 p(n−2r)/2 q−1/4\n\n+ B n p−(n+r−1)/2 q−r + B n−C /2 p(C −r)/2 q−r/2\n\nfor any C > 0, where D = Dn+1 is the maximum over Rn of all partial derivatives of W of order n + 1.\n\n2.8 Iteration of the Heath-Brown-van der Corput method (Paper II) The idea behind the proof of Theorem 2.3 is to perform two iterations of the differencing procedure introduced in . To this end, we use a modulus that is a product of three primes, m = πpq, where π, p ≤ B ≤ q. Moreover, we revert to the use of a smooth weight function. Thus, with W the function defined in (22), the heart of the proof of Theorem 2.10 is an asymptotic formula of the type X \u00121 \u0013 −1 NW ( f , B, πpq) = (πpq) W x + error terms. (26) B x∈Zn The primes π and p are used as parameters in the two differencing steps. In the differencing procedure, we incorporate a refinement of Heath-Brown’s method due to Salberger , allowing us to retain, throughout the differencing step, congruence conditions that were discarded in the original approach. For any y ∈ Zn , we define the polynomial f y ∈ Z[x 1 , . . . , x n ] by f y (x) = f (x + y) − f (x). For any pair (y, z) ∈ Zn × Zn , we define the twice differenced polynomial f y,z (x) = f (x + y + z) − f (x + y) − f (x + z) + f (x). Furthermore, let F y (x) = y · ∇F (x) = y1\n\n∂F\n\n+ · · · + yn\n\n∂ x1 X F y,z (x) = (Hess(F ))y · z =\n\n1≤i, j≤n\n\n21\n\n∂F ∂ xn\n\n∂ 2F ∂ xi∂ x j\n\n,\n\nyi z j .\n\nNote how this notation differs from that of §2.7. Ignoring the technicalities of the differencing procedure itself, the main issue is now to estimate counting functions for the family of “differenced” varieties defined by f (x) = f pz (x) = f πy,pz (x) = 0. Once more, these estimates are supplied by Theorem 2.7, successful application of which requires knowledge of the dimension of the singular loci of the projective subschemes Zq,z,y = Proj Fq [x 1 , . . . , x n]/(F, F z , F y,z ) for different determinations of y, z. This turns out to require considerable geometric machinery. In particular, although the asymptotic formula (26) holds uniformly in f , the conditions that have to be imposed on the primes π, p, q in order for the method to work go beyond mere good reduction of the hypersurface f = 0. A thorough investigation is made in order to ensure that such primes may always be found, of a prescribed order of magnitude in relation to B. At this point, a dependence on the height of F is unavoidable. We get the following result. Theorem 2.10 (Paper II, Thm. 1.1). Let f ∈ Z[x 1 , . . . 
Let $f \in \mathbb{Z}[x_1, \dots, x_n]$ be a polynomial of degree $d \ge 4$ with leading form $F$. Suppose that $F$ defines a non-singular hypersurface in $\mathbb{P}^{n-1}_{\mathbb{Q}}$. Then

$$N(f, B) \ll_F B^{\,n-4+(37n-18)/(n^2+8n-4)}.$$

The uniform version, Theorem 2.3, is derived using a version of Siegel's lemma due to Heath-Brown [16], combined with an application of the determinant method in §3, due to Browning, Heath-Brown and Salberger [7].

3 The determinant method

In this section we shall discuss another method for counting solutions to Diophantine equations, investigated in Papers III and IV in this thesis. Quite contrary to the methods discussed above, the determinant method is most powerful when the number of variables is small compared to the degree of the equation. The method has its origins in a paper by Bombieri and Pila [4] from 1989.

3.1 Uniform bounds for affine curves

Bombieri and Pila proved upper bounds for the number of integral points on plane curves. If $C \subset \mathbb{A}^2$ is an irreducible algebraic curve of degree $d$, defined over the integers, then their bound has the shape

$$N(C, B) = O_{d,\epsilon}(B^{1/d+\epsilon}) \qquad (27)$$

for any $\epsilon > 0$. The key part of the proof of (27) is the construction of a number of auxiliary curves of low degree. Thus, it is proven that every point in $C(\mathbb{Z}, B)$ resides on one of $O_{d,\epsilon}(B^{1/d+\epsilon})$ algebraic curves of degree $O_d(1)$. To achieve this, one divides the curve $C$ into small arcs, where it is sufficiently smooth, and for each such arc $\Gamma$ one exhibits an algebraic curve meeting $C$ in all the integral points of $\Gamma$. The number of integral points on the intersection of $C$ with this curve is then $O_d(1)$ by Bézout's Theorem. The existence of the auxiliary curves is established by examining a generalized Vandermonde determinant involving monomials evaluated at integral points, a procedure similar to techniques used in the theory of Diophantine approximation.

A notable feature of the estimate (27), which makes it useful in several contexts, is that the bound does not depend on the coefficients of the defining equation. Thus, for example, Pila uses (27) as the basis for an induction argument to prove the estimates (7) and (8) above.

3.2 Heath-Brown's p-adic determinant method

An important milestone in the subject of quantitative arithmetic of algebraic varieties is Heath-Brown's paper [16] from 2002, in which the key result is a vast generalization of the one in [4]. It is a sign of its significance that the theorem is often referred to simply as "Theorem 14".

Theorem 3.1 ([16, Thm. 14]). Let $F \in \mathbb{Z}[x_1, \dots, x_n]$ be an absolutely irreducible homogeneous polynomial of degree $d$, defining a hypersurface $X \subset \mathbb{P}^{n-1}$. Then, for any $\epsilon > 0$, there is a homogeneous polynomial $G \in \mathbb{Z}[x_1, \dots, x_n] \setminus (F)$ of degree

$$k \ll_{n,d,\epsilon} B^{(n-1)d^{-1/(n-2)}+\epsilon} (\log \|F\|)^{2n-3},$$

all of whose irreducible factors have degree $O_{n,d,\epsilon}(1)$, such that $G(x) = 0$ for every $x \in S(X, B)$.

Remark 3.1. Heath-Brown actually proves a more general statement, for boxes of possibly unequal sidelength. If $\mathbf{B} = (B_1, \dots, B_n)$, then he defines

$$V = B_1 \cdots B_n, \qquad T = \max\bigl(B_1^{f_1} \cdots B_n^{f_n}\bigr),$$

where the maximum is taken over all monomials $x_1^{f_1} \cdots x_n^{f_n}$ that occur in $F$ with non-zero coefficient. The statement of the theorem above then holds with $S(X, B)$ replaced by $S(X, \mathbf{B})$ and

$$k \ll_{n,d,\epsilon} \bigl( V^d/T \bigr)^{d^{-(n-1)/(n-2)}} V^{\epsilon} (\log \|F\|)^{2n-3}.$$

Remark 3.2. The mild dependence on the coefficients of $F$ in the above theorem may in fact be eliminated by appealing to an argument in the spirit of Siegel's lemma [16, Theorem 4]. The same argument appears in our Paper II (Lemma 5.1).
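As a consistency check (our own addition, not part of the thesis text), one can verify that the general bound of Remark 3.1 specializes to that of Theorem 3.1 when all side lengths are equal, $B_1 = \dots = B_n = B$. Then $V = B^n$ and, since $F$ is homogeneous of degree $d$, every monomial occurring in $F$ has total degree $d$, so $T = B^d$. Hence

$$\Bigl( \frac{V^d}{T} \Bigr)^{d^{-(n-1)/(n-2)}} = B^{d(n-1)\, d^{-(n-1)/(n-2)}} = B^{(n-1)\, d^{\,1-(n-1)/(n-2)}} = B^{(n-1)\, d^{-1/(n-2)}},$$

using $1 - (n-1)/(n-2) = -1/(n-2)$, which recovers the exponent in Theorem 3.1 up to the $\epsilon$ term.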
Heath-Brown's version of the determinant method is quite different from that of Bombieri and Pila, but is based upon a variant of the same idea. Let $F$ be as in Theorem 3.1. We may certainly assume that $F$ is primitive, i.e. has coprime coefficients, so that for each prime $p$, the equation $F(x_1, \dots, x_n) \equiv 0 \pmod{p}$ defines a hypersurface $X_p \subset \mathbb{P}^{n-1}_{\mathbb{F}_p}$. For a prime $p$ one then considers a partition of $S(X, B)$ into subsets $S(X, B, \xi)$, where $S(X, B, \xi)$ is the set of $x \in S(X, B)$ such that $[x]$ reduces (mod $p$) to $\xi \in X_p(\mathbb{F}_p)$. For each non-singular $\mathbb{F}_p$-point $\xi$ one gets an auxiliary polynomial, vanishing at every point of $S(X, B, \xi)$, by proving the vanishing of a certain determinant. In other words, integral points are grouped together based on their $p$-adic distance to each other, whereas Bombieri and Pila considered a covering of small patches defined in the Euclidean metric. Thus we may distinguish between the "real" determinant method of [4], and the "$p$-adic" determinant method developed by Heath-Brown.

3.3 Further refinements

Several refinements of Heath-Brown's determinant method have appeared. Broberg [5] generalizes Theorem 14 to the case of an irreducible projective variety of any codimension. Broberg uses graded monomial orderings to formulate his result. This notion is elaborated in Paper III, Section 3. In particular, given a monomial ordering $<$ and an irreducible projective variety $X \subseteq \mathbb{P}^n$, we associate to the pair $(<, X)$ an $(n+1)$-tuple $(a_0, \dots, a_n)$ of real numbers satisfying $0 \le a_i \le 1$ and $a_0 + \dots + a_n = 1$. Broberg considers varieties over an arbitrary number field $K$, but for simplicity we shall only state the case $K = \mathbb{Q}$ here.

Theorem 3.2 ([5, Thm. 1]). Let $X \subset \mathbb{P}^n$ be an irreducible closed subvariety of dimension $m$ and degree $d$. Suppose that the ideal $I \subset \mathbb{Q}[x_0, \dots, x_n]$ of $X$ is generated by forms of degree at most $\delta$. Let $\mathbf{B} = (B_0, \dots, B_n)$ be an $(n+1)$-tuple of positive real numbers, and let $<$ be a graded monomial ordering on $\mathbb{Q}[x_0, \dots, x_n]$. Then, for any $\epsilon > 0$, there is a homogeneous polynomial $G \in \mathbb{Z}[x_0, \dots, x_n] \setminus I$ of degree

$$k \ll_{n,\delta,\epsilon} \bigl( B_0^{a_0} \cdots B_n^{a_n} \bigr)^{(m+1)d^{-1/m}},$$

all of whose irreducible factors have degree $O_{n,\delta,\epsilon}(1)$, such that $G(x) = 0$ for all $x \in S(X, \mathbf{B})$.

Remark 3.3. The dependence on $\delta$ in the implied constants in Theorem 3.2 may be replaced by a dependence on the degree $d$ of $X$, by Lemmata 1.3 and 1.4 of Salberger.

To understand some of the further developments, we shall have to look a bit more closely at the proof of Theorem 14. Thus, let $X$ be a hypersurface as in the theorem. First we note that we can easily dispose of those $x \in S(X, B)$ that correspond to singular points $[x] \in X$, by incorporating among our auxiliary forms one of the equations defining the singular locus of $X$. Thus it suffices to count non-singular points. For any given prime $p$, however, it may happen that a point that is non-singular over $\mathbb{Q}$ still reduces to a singular point on $X_p$. But it is possible to find a finite set of primes $P$ with the property that any non-singular point $x \in X(\mathbb{Q})$ reduces to a non-singular point $\xi \in X_p(\mathbb{F}_p)$ for some prime $p \in P$. Heath-Brown then uses a $p$-adic Implicit Function Theorem to parameterize the elements of $S(X, B, \xi)$.

Salberger endeavours to count integral points on an affine surface $X$ by a procedure that may be roughly described as follows. First the $p$-adic determinant method is applied once to obtain a number of curves of bounded degree on $X$.
Then one repeats this a second time with a new prime $q$, to count integral points on these curves, but retaining also the $p$-adic congruence conditions from the first step. One is then forced to consider singular $\mathbb{F}_p$-points as well. This refinement of Heath-Brown's determinant method leads to proofs of new cases of the dimension growth conjecture.

Recently, Salberger has developed a new version of the $p$-adic determinant method, where one uses congruence conditions for (almost) all primes $p$ simultaneously. The output is an auxiliary form whose degree is considerably smaller than that given by Theorem 3.1,

$$k \ll_{n,d,\epsilon} B^{\frac{n-1}{(n-2)\,d^{1/(n-2)}}},$$

but with the serious drawback that there is no smaller upper bound for the degrees of its irreducible factors. However, through an intricate procedure, Salberger is able to interpolate between this result and the one obtained by the original argument, to the effect that every point to be counted belongs either to one of at most $k$ hypersurfaces of bounded degree, or to one of $O(B^{(n-1)d^{-1/(n-2)}+\epsilon})$ subvarieties of codimension two.

3.4 The basic idea

In all versions of the determinant method, the aim is to construct auxiliary polynomials that vanish at the integral points one wishes to count. In general, given a collection of points $a_1, \dots, a_s \in \mathbb{C}^n$, one can consider the interpolation problem of finding a polynomial of degree at most $\delta$ that vanishes at $a_1, \dots, a_s$. This corresponds to solving a system of equations

$$\sum_{|\alpha| \le \delta} c_\alpha \, a_j^\alpha = 0, \qquad j = 1, \dots, s, \qquad (28)$$

non-trivially in the indeterminates $c_\alpha$, $|\alpha| \le \delta$. Here we use multi-index notation: $x^\alpha = x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ and $|\alpha| = \alpha_1 + \dots + \alpha_n$ for $\alpha \in \mathbb{Z}_{\ge 0}^n$. Assuming that the system (28) is square (that is, that the number of monomials equals the number of points $s$), such a non-trivial solution exists if

$$\Delta := \det \bigl( a_j^\alpha \bigr)_{\substack{j = 1, \dots, s \\ |\alpha| \le \delta}} = 0.$$

For integral points $a_1, \dots, a_s \in \mathbb{Z}^n$, this holds as soon as $|\Delta| < 1$. As for an ordinary Vandermonde determinant, one can make $\Delta$ small by choosing the points $a_i$ close to each other (in the Euclidean sense). Alternatively, for any prime $p$, $\Delta$ will vanish as soon as $|\Delta| \cdot \|\Delta\|_p < 1$, which we may try to achieve by choosing the points close to each other in the $p$-adic sense.

In our situation, it is of course essential that the auxiliary polynomial does not vanish entirely on the variety under consideration. This may be achieved by restricting the set of monomials $x^\alpha$ occurring in (28) (see Paper III, §3, where this is elaborated using monomial orderings).
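To make the interpolation step concrete, here is a minimal computational sketch (ours, not part of the thesis; all names are illustrative). It solves the linear system (28) in exact rational arithmetic and returns a nonzero polynomial of degree at most delta vanishing at a given list of integral points:

```python
# A sketch of the interpolation step (28): find a nonzero polynomial of degree
# <= delta vanishing at all given integral points, via an exact nullspace
# computation on the "monomial matrix" (the c_alpha are the unknowns).
from fractions import Fraction
from itertools import combinations_with_replacement

def monomials(n_vars, delta):
    """All exponent tuples alpha with |alpha| <= delta."""
    out = []
    for d in range(delta + 1):
        for combo in combinations_with_replacement(range(n_vars), d):
            alpha = [0] * n_vars
            for v in combo:
                alpha[v] += 1
            out.append(tuple(alpha))
    return out

def eval_monomial(alpha, point):
    val = Fraction(1)
    for a, x in zip(alpha, point):
        val *= Fraction(x) ** a
    return val

def vanishing_polynomial(points, delta):
    """Return the nonzero (alpha, coefficient) pairs of a polynomial of degree
    <= delta vanishing at all the points, or None if none exists."""
    alphas = monomials(len(points[0]), delta)
    # Row j of the system: sum_alpha c_alpha * (a_j)^alpha = 0, cf. (28).
    rows = [[eval_monomial(a, p) for a in alphas] for p in points]
    # Gauss-Jordan elimination over the rationals, tracking pivot columns.
    pivots, r = [], 0
    for c in range(len(alphas)):
        pivot = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                factor = rows[i][c]
                rows[i] = [x - factor * y for x, y in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(len(alphas)) if c not in pivots]
    if not free:
        return None
    # Set the first free variable to 1, the other free variables to 0.
    coeffs = [Fraction(0)] * len(alphas)
    coeffs[free[0]] = Fraction(1)
    for i, c in enumerate(pivots):
        coeffs[c] = -rows[i][free[0]]
    return [(a, c) for a, c in zip(alphas, coeffs) if c != 0]

# Five points on the twisted cubic (t, t^2, t^3): the sketch finds x^2 - y,
# a quadratic that happens to vanish on the whole curve.
pts = [(t, t * t, t ** 3) for t in range(5)]
print(vanishing_polynomial(pts, 2))
```

In the determinant method proper, one does not solve such a system exactly; rather, smallness of the determinant $\Delta$ (in the real or $p$-adic sense) plays the role that exact solvability plays in this sketch.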
The following example illustrates the basic arguments used to estimate such a monomial determinant, in the real setting.

Example 3.1. Let $(u_1,v_1,w_1), \dots, (u_s,v_s,w_s)$ be points in $[-B,B]^3 \cap \mathbb{Z}^3$ satisfying $f(u_i,v_i,w_i) = 0$ for some polynomial $f$. For simplicity, let us assume that $(u_1,v_1,w_1) = (0,0,0)$. Given a set $M = (m_1, \dots, m_s)$ of monomials in $(x,y,z)$ of degree $\le \delta$, define the determinant

$$\Delta_0 = \begin{vmatrix} m_1(u_1,v_1,w_1) & \cdots & m_1(u_s,v_s,w_s) \\ \vdots & \ddots & \vdots \\ m_s(u_1,v_1,w_1) & \cdots & m_s(u_s,v_s,w_s) \end{vmatrix}.$$

It is convenient to rescale the problem. The points

$$(x_i,y_i,z_i) = \Bigl( \frac{1}{B} u_i, \frac{1}{B} v_i, \frac{1}{B} w_i \Bigr) \in [-1,1]^3 \cap \mathbb{Q}^3$$

satisfy $f_B(x_i,y_i,z_i) = 0$, where $f_B(x,y,z) = f(Bx,By,Bz)$. Suppose now that the $(x_i,y_i,z_i)$ all lie within a cube $K \subset [-1,1]^3$ of sidelength $\rho$. Suppose furthermore that the subset of the hypersurface $f_B = 0$ bounded by $K$ can be parameterized as $z = \phi(x,y)$, where $\phi$ has continuous partial derivatives of any order. Now we have $\Delta_0 = B^{\sum_i \deg(m_i)} \, \Delta$, where

$$\Delta = \begin{vmatrix} m_1(x_1,y_1,z_1) & \cdots & m_1(x_s,y_s,z_s) \\ \vdots & \ddots & \vdots \\ m_s(x_1,y_1,z_1) & \cdots & m_s(x_s,y_s,z_s) \end{vmatrix}.$$

Define functions $\psi_i(x,y) = m_i(x,y,\phi(x,y))$ and consider the power series expansion around the origin

$$\psi_i(x,y) = P_{i,\nu}(x,y) + O(\rho^{\nu+1}),$$

where $P_{i,\nu}$ is a polynomial of total degree $\nu$, say

$$P_{i,\nu}(x,y) = \sum_{\substack{\alpha \in \mathbb{Z}_{\ge 0}^2 \\ \alpha_1 + \alpha_2 \le \nu}} c_{i,\alpha} \, x^{\alpha_1} y^{\alpha_2},$$

and $\nu$ is chosen so that $P_{i,\nu}$ has at least $s$ terms (including terms with vanishing coefficient). We may then write

$$\Delta = \Delta' + R,$$

with $R$ of negligible size and $\Delta' = \det P$, where

$$P = \begin{pmatrix} P_{1,\nu}(x_1,y_1) & \cdots & P_{1,\nu}(x_s,y_s) \\ \vdots & \ddots & \vdots \\ P_{s,\nu}(x_1,y_1) & \cdots & P_{s,\nu}(x_s,y_s) \end{pmatrix}.$$

A basis for the row space of $P$ is furnished by the vectors

$$v_\alpha = (x_1^{\alpha_1} y_1^{\alpha_2}, \dots, x_s^{\alpha_1} y_s^{\alpha_2}),$$

where $\alpha_1 + \alpha_2 \le \nu$. Since, for each $k$, the subspace generated by $\{ v_\alpha \; ; \; \alpha_1 + \alpha_2 = k \}$ has dimension at most $k+1$, the matrix $P$ is clearly row equivalent to a matrix where the first row has entries of size $O(1)$, the next two rows have entries of size $O(\rho)$, the next three rows have entries of size $O(\rho^2)$, and so on. This yields the estimate

$$\Delta \ll \rho^{1 \cdot 2 + 2 \cdot 3 + \dots + \nu(\nu+1)} = \rho^{\nu^3/3 + O(\nu^2)}, \qquad (29)$$

since $\sum_{k=1}^{\nu} k(k+1) = \nu(\nu+1)(\nu+2)/3$. (The implied constants, of course, depend upon the size of the partial derivatives of the function $\phi$.) It is now a matter of choosing the parameters $\rho$, $\delta$ and $\nu$ optimally, in terms of $B$ and the degree of $f$, to get $|\Delta_0| < 1$.

3.5 The real determinant method for higher-dimensional varieties (Paper III)

In the third paper of this thesis, the real determinant method is developed in higher dimensions. We give a new proof of a result (Paper III, Thm. 1.2) that is essentially Theorem 3.2 above. In particular, we recover Theorem 14. It might seem, at a first glance, that we have achieved a generalization in allowing non-rational coefficients for the defining polynomials, but as Heath-Brown notes [16, Cor. 1] it is easy to find auxiliary hypersurfaces for varieties not defined over $\mathbb{Q}$.

The main obstacle in generalizing the procedure of Bombieri and Pila to higher dimensions has to do with local parameterization. In Example 3.1 above, we considered a patch of our variety parameterized by a smooth function. In other words, to carry out our estimate we had to assume that we were in a situation where the Implicit Function Theorem could be invoked. In particular, singular points have to be handled separately by other means. Furthermore, the implied constants in our bounds depended on the sizes of the partial derivatives of the implicit function. It is thus necessary to control these derivatives to get a bound on the determinant that holds uniformly in all patches. In [4], this is done through a rather elaborate iterative procedure, in which patches where the derivatives oscillate are excised and reparameterized. (In the $p$-adic setting, there is also a version of the Implicit Function Theorem [5, Lemma 6], allowing for parameterization of the points in a congruence class $S(X,B,\xi)$ appertaining to a non-singular point $\xi \in X_p(\mathbb{F}_p)$, by $p$-adic power series.)

In Paper III, we employ a powerful result due to Gromov [11] to tackle the parameterization problem. Let $V \subset \mathbb{A}^n_{\mathbb{R}}$ be an algebraic variety of dimension $m < n$ and degree $d$, and let $Y = V \cap [-1,1]^n$. Then the Yomdin–Gromov algebraic lemma (Lemma 4.1 in Paper III) states that for each $r \in \mathbb{Z}_+$, $Y$ can be parameterized by $O_{n,r,d}(1)$ functions $[-1,1]^m \to [-1,1]^n$, all of whose partial derivatives of order up to $r$ are continuous and bounded in absolute value by 1.
This lemma is an example of the theory of $C^k$-reparameterization of semialgebraic sets, first introduced by Yomdin to study topological entropy (see [38] or [39] for an account of this field of research). More generally still, a statement of the same nature can be proven to hold for so-called definable sets in o-minimal structures. This is proven in [27], where it is used to prove a theorem on the paucity of rational points of such sets.

From Theorem 1.2 in Paper III it is not difficult to derive an affine version, containing the estimate (27) as a special case.

Theorem 3.3 (Paper III, Thm. 1.1). Let $X \subset \mathbb{A}^n_{\mathbb{R}}$ be an irreducible closed subvariety of dimension $m$ and degree $d$, and let $I \subset \mathbb{R}[x_1, \dots, x_n]$ be the ideal of $X$. Then, for any $\epsilon > 0$, there exists a polynomial $g \in \mathbb{Z}[x_1, \dots, x_n] \setminus I$ of degree

$$k \ll_{n,d,\epsilon} B^{m d^{-1/m}},$$

all of whose irreducible factors have degree $O_{n,d,\epsilon}(1)$, such that $g(x) = 0$ for each $x \in X(\mathbb{Z}, B)$.

3.6 An approximative determinant method. Sums and differences of k-th powers (Paper IV)

The fourth paper in this thesis (see also §1.5) deals with counting the number of representations of a positive integer $N$ by a diagonal form, that is, integral solutions to the equation

$$a_1 x_1^k + a_2 x_2^k + a_3 x_3^k + a_4 x_4^k = N. \qquad (30)$$

In general, let $F(x_1,x_2,x_3,x_4)$ be a non-singular homogeneous polynomial of degree $k$. We want to count integral solutions to the equation

$$F(x_1,x_2,x_3,x_4) = N \qquad (31)$$

with $|x_i| \le B$. As a first approach, we may apply Theorem 3.3 to the three-dimensional affine hypersurface defined by (31), to obtain a collection of $O_{k,\epsilon}(B^{3/k^{1/3}+\epsilon})$ auxiliary hypersurfaces containing all points we want to count.

Suppose, however, that the positive integer $N$ is considerably smaller than $B^k$. Then a primitive integer quadruple $(x_1,x_2,x_3,x_4)$ satisfying (31) corresponds to a point in $\mathbb{P}^3(\mathbb{Q})$ lying, in a certain sense, near the projective surface defined by

$$F(x_1,x_2,x_3,x_4) = 0. \qquad (32)$$

Seeing as Theorem 3.1 would yield a collection of $O_{k,\epsilon}(B^{3/\sqrt{k}})$ auxiliary forms for rational points of height at most $B$ on the surface (32), one could hope to improve the exponent $3/k^{1/3}$ by incorporating this additional information into the determinant method. Such an approximative version of the determinant method was recently developed by Heath-Brown [17] for studying the corresponding problem in three variables, and that work provides the main ideas for our approach in Paper IV. We prove that every solution to (31) (or indeed, to the corresponding inequality) satisfies one of

$$O_{F,N,\epsilon}\bigl( B^{16/(3\sqrt{3k})+\epsilon} \bigr)$$

auxiliary homogeneous equations (Paper IV, Prop. 3.1). Thus we successfully interpolate between the exponents $3/k^{1/3}$ and $3/\sqrt{k}$ discussed above. In the case of a diagonal form $F(x_1,x_2,x_3,x_4) = a_1 x_1^k + a_2 x_2^k + a_3 x_3^k + a_4 x_4^k$, we proceed by applying the results described above to each affine surface

$$F(x_1,x_2,x_3,x_4) - N = A_i(x_1,x_2,x_3,x_4) = 0$$

obtained by the above procedure. Here $A_i$ denotes an auxiliary form. The final ingredient in our estimate for the number of representations is a lower bound for the degrees of curves on Fermat hypersurfaces, due to Salberger.

References

[1] V. V. Batyrev and Yu. I. Manin. Sur le nombre des points rationnels de hauteur borné des variétés algébriques. Math. Ann., 286(1-3):27–43, 1990.
[2] Victor V. Batyrev and Yuri Tschinkel. Rational points on some Fano cubic bundles. C. R. Acad. Sci. Paris Sér. I Math., 323(1):41–46, 1996.
[3] B. J. Birch. Forms in many variables. Proc. Roy. Soc. Ser. A, 265:245–263, 1961/1962.
[4] E. Bombieri and J. Pila. The number of integral points on arcs and ovals. Duke Math. J., 59(2):337–357, 1989.
[5] Niklas Broberg. A note on a paper by R. Heath-Brown: "The density of rational points on curves and surfaces" [Ann. of Math. (2) 155 (2002), no. 2, 553–595]. J. Reine Angew. Math., 571:159–178, 2004.
[6] T. D. Browning and D. R. Heath-Brown. Rational points on quartic hypersurfaces. J. Reine Angew. Math., 629:37–88, 2009.
[7] T. D. Browning, D. R. Heath-Brown, and P. Salberger. Counting rational points on algebraic varieties. Duke Math. J., 132(3):545–578, 2006.
[8] Timothy D. Browning. Quantitative arithmetic of projective varieties, volume 277 of Progress in Mathematics. Birkhäuser, Basel, 2009.
[9] Pierre Deligne. La conjecture de Weil. I. Inst. Hautes Études Sci. Publ. Math., (43):273–307, 1974.
[10] S. W. Graham and G. Kolesnik. Van der Corput's method of exponential sums, volume 126 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 1991.
[11] M. Gromov. Entropy, homology and semialgebraic geometry. Astérisque, (145-146):5, 225–240, 1987. Séminaire Bourbaki, Vol. 1985/86.
[12] Robin Hartshorne. Ample subvarieties of algebraic varieties. Notes written in collaboration with C. Musili. Lecture Notes in Mathematics, Vol. 156. Springer-Verlag, Berlin, 1970.
[13] Robin Hartshorne. Algebraic geometry. Springer-Verlag, New York, 1977.
[14] D. R. Heath-Brown. Hybrid bounds for Dirichlet L-functions. Invent. Math., 47(2):149–170, 1978.
[15] D. R. Heath-Brown. The density of rational points on nonsingular hypersurfaces. Proc. Indian Acad. Sci. Math. Sci., 104(1):13–29, 1994.
[16] D. R. Heath-Brown. The density of rational points on curves and surfaces. Ann. of Math. (2), 155(2):553–595, 2002.
[17] D. R. Heath-Brown. Sums and differences of three kth powers. J. Number Theory, 129(6):1579–1594, 2009.
[18] Marc Hindry and Joseph H. Silverman. Diophantine geometry: An introduction, volume 201 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2000.
[19] C. Hooley. On the representations of a number as the sum of four cubes. I. Proc. London Math. Soc. (3), 36(1):117–140, 1978.
[20] C. Hooley. On the number of points on a complete intersection over a finite field. J. Number Theory, 38(3):338–358, 1991.
[21] Henryk Iwaniec and Emmanuel Kowalski. Analytic number theory, volume 53 of American Mathematical Society Colloquium Publications. American Mathematical Society, Providence, RI, 2004.
[22] Nicholas M. Katz. Estimates for "singular" exponential sums. Internat. Math. Res. Notices, (16):875–899, 1999.
[23] Wenzhi Luo. Rational points on complete intersections over F_p. Internat. Math. Res. Notices, (16):901–907, 1999.
[24] Kurt Mahler. Note on hypothesis K of Hardy and Littlewood. J. Lond. Math. Soc., 11:136–138, 1936.
[25] Emmanuel Peyre. Points de hauteur bornée et géométrie des variétés (d'après Y. Manin et al.). Astérisque, (282):Exp. No. 891, ix, 323–344, 2002. Séminaire Bourbaki, Vol. 2000/2001.
[26] J. Pila. Density of integral and rational points on varieties. Astérisque, (228):4, 183–187, 1995. Columbia University Number Theory Seminar (New York, 1992).
[27] J. Pila and A. J. Wilkie. The rational points of a definable set. Duke Math. J., 133(3):591–616, 2006.
[28] Per Salberger. Counting rational points on projective varieties. Preprint, 2009.
[29] Per Salberger. Integral points on hypersurfaces of degree at least three. Unpublished.
[30] Per Salberger. On the density of rational and integral points on algebraic varieties. J. Reine Angew. Math., 606:123–147, 2007.
[31] Stephen Hoel Schanuel. Heights in number fields. Bull. Soc. Math. France, 107(4):433–449, 1979.
[32] Igor R. Shafarevich. Basic algebraic geometry. 1. Springer-Verlag, Berlin, 1994.
[33] J. G. van der Corput. Zahlentheoretische Abschätzungen. Math. Ann., 84:53–79, 1921.
[34] J. G. van der Corput. Verschärfung der Abschätzung beim Teilerproblem. Math. Ann., 87:39–65, 1922.
[35] H. Weyl. Über die Gleichverteilung von Zahlen mod. Eins. Math. Ann., 77:313–352, 1916.
[36] Joel M. Wisdom. On the representation of numbers as sums of powers. PhD thesis, University of Michigan, 1998.
[37] Joel M. Wisdom. On the representations of a number as the sum of four fifth powers. J. London Math. Soc. (2), 60(2):399–419, 1999.
[38] Y. Yomdin. Analytic reparametrization of semi-algebraic sets. J. Complexity, 24(1):54–76, 2008.
[39] Yosef Yomdin and Georges Comte. Tame geometry with application in smooth analysis, volume 1834 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 2004.

Corrections to the papers

The versions of Paper I and Paper II included in this thesis differ from the published versions on the following points.

Paper I

In Lemma 2.9, we have added the hypothesis $q \nmid (d_i - 1)$ for $i = 1, \dots, r$. We have also removed an erroneous, but superfluous, assertion made in the proof of (i).

Paper II

1. In Lemma 4.1, the notation for the "differenced" weight functions used was not consistent with the definition in Notation 4.2. Thus, we have changed this notation slightly throughout Lemma 4.1 and its proof.
2. In Lemma 5.1, we have added the hypothesis that the coefficients of $f$ be coprime, which is of course no restriction in the application of the result.
https://link.springer.com/article/10.1007/s11229-014-0632-x
# New theory about old evidence

A framework for open-minded Bayesianism

## Abstract

We present a conservative extension of a Bayesian account of confirmation that can deal with the problem of old evidence and new theories. So-called open-minded Bayesianism challenges the assumption—implicit in standard Bayesianism—that the correct empirical hypothesis is among the ones currently under consideration. It requires the inclusion of a catch-all hypothesis, which is characterized by means of sets of probability assignments. Upon the introduction of a new theory, the former catch-all is decomposed into a new empirical hypothesis and a new catch-all. As will be seen, this motivates a second update rule, besides Bayes' rule, for updating probabilities in light of a new theory. This rule conserves probability ratios among the old hypotheses. This framework allows for old evidence to confirm a new hypothesis due to a shift in the theoretical context. The result is a version of Bayesianism that, in the words of Earman, "keep[s] an open mind, but not so open that your brain falls out".

## Introduction

Bayesianism offers a way to revise our degrees of belief in light of new evidence. However, it does not capture all the relevant belief dynamics: in the process of evaluating our evidence, we may want to consider a new theory, and thus reconsider some of the assumptions on which all of our former degrees of belief co-depend. Standard forms of Bayesianism do not foresee the option of adopting a new theory in their formalism, so it seems that when a new theory does surface we have to start from scratch: assigning priors to the empirical hypotheses belonging to the new theories, and revising the degrees of belief in the face of further evidence. In the current paper, we propose a conservative extension of Bayesianism that is able to encompass theory change, while retaining comparative aspects of probabilities that have been computed prior to this change.

### Example: food inspector raising a new hypothesis

Throughout the paper, especially the more technical Sect. 3, it may be helpful to keep in mind a simple example. For this purpose, we offer the following scenario (inspired by an example from Romeijn 2005).

A food safety inspector wants to determine whether or not a restaurant is taking the legally required precautions against food poisoning. She enters the restaurant anonymously and orders a number of dishes. She uses food testing strips to determine for each of the dishes whether or not it is infected by a particularly harmful strain of Salmonella. She assumes that these tests work perfectly, interpreting a positive test result as a Salmonella-infected dish and a negative result as an uninfected one. She also assumes that in kitchens that implement the precautionary practices each dish has a probability of 1% of being infected, whereas this probability rises to 20% in kitchens that do not implement the practices. She orders five dishes from the kitchen and they all turn out to be infected. This prompts her to consider a third hypothesis: the test strips may have been contaminated, rendering all test results positive, irrespective of whether the dish is infected or not.

After considering this third option, the inspector will not order any additional dishes. Instead, she will take the old evidence (that five dishes out of five appeared to be infected) to confirm the new theory (that the test strips were infected) and it seems reasonable enough for her to do so.
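To see why this run of positive tests is so striking, the following minimal sketch (our own addition, not part of the original example) computes the probability of the observed outcome under each of the two initial hypotheses:

```python
# Probability of five out of five dishes testing positive, under each
# hypothesis about the kitchen; the test strips are assumed perfectly reliable.
p_infected = {"precautions taken": 0.01, "precautions not taken": 0.20}

for label, p in p_infected.items():
    print(label, p ** 5)  # 1e-10 and 3.2e-4: both tiny, which is what makes
                          # a third hypothesis worth considering at all
```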
Our challenge is how to represent this positive confirmation of the old evidence for the new theory within (an extension of) the Bayesian framework.

### Old evidence and new theories

The confirmation-theoretic model of this paper sheds new light on the problem of old evidence and new theories. This problem for Bayesianism was first identified by Clark Glymour (1980). The problem arises from the discrepancy between descriptive, historical examples, in which old evidence does seem to lend positive confirmation to new theories, and the normative, Bayesian position, in which old evidence cannot confirm new theories. In particular, by updating via Bayes' rule (used here to refer to Bayesian conditionalization), taking into account evidence that has already been conditioned upon cannot change the probabilities. And since all expressions of confirmation hinge on differences in probabilities, it seems that old evidence cannot lead to confirmation of new theories. Many later authors have called Glymour's problem simply "the problem of old evidence".Footnote 1 A minority of philosophers has stressed the importance of the other side of the problem: "the problem of new theories" (for example Earman 1992). In what follows, we will clarify that both problems can be resolved in open-minded Bayesianism.

New theories pose a bigger problem for Bayesianism than usually recognized.Footnote 2 In fact, without a way of introducing a new theory into the domain of an agent's degrees of belief, its prior and posterior degrees of belief simply do not show up in the model. In effect, as we will explain in Sect. 3.3.2, those probabilities are set to zero. Either way, for want of a way to express non-zero probability assignments to a new theory, the problem of old evidence does not even occur—or it is worse than the usual presentations suggest. Therefore, we analyze the problem of new theories first and offer a conservative extension of Bayesianism to deal with this problem: a framework for open-minded Bayesianism. In the course of doing so, it will become clear what is missing to deal adequately with old evidence and to determine the confirmation it may give to a new theory. In particular, our model is compatible with Glymour's observation that in important historical examples old evidence does offer positive confirmation to new theories.

Some proposals for addressing the problem of old evidence (in particular that of Garber 1983) observe that the crucial content that is being learned, and that lends positive confirmation to a new theory, is not the old evidence itself, but rather the fact that this new theory implies or explains the old evidence. Recently, Sprenger (2014) has proposed a new solution along these lines. We are sympathetic to this approach.Footnote 3 However, Sprenger's results presuppose that the old evidence, the new theory, and the relevant relation between the two are all elements of some algebra (see his Theorems 1 and 2). As such, this approach does not address a more fundamental question: how can a new theory (or a new relation between a theory and a piece of evidence) be incorporated in the algebra? This is the problem of new theories, especially pressing in the presence of old evidence, that we tackle here.

### Bayesian confirmation theory

Since the problem of old evidence and new theories is ultimately a problem concerning Bayesian confirmation, we should first be clear on how we intend to measure confirmation of a hypothesis by a body of evidence. This is in itself an interesting problem in formal epistemology, and some reactions to the problem of old evidence are in fact proposals for a new measure of confirmation (e.g., Christensen 1999; Joyce 1999). In qualitative terms, a piece of evidence $E$ lends positive confirmation to a theory $T$ if the posterior $P(T \mid E)$ exceeds the prior $P(T)$. To turn this into a quantitative notion, different measures of confirmation have been proposed: for instance, the difference or (the log of) the ratio of posterior and prior.
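In symbols (our formalization of the two measures just mentioned, with $d$ for the difference measure and $r$ for the log-ratio measure):

$$d(T,E) = P(T \mid E) - P(T), \qquad r(T,E) = \log \frac{P(T \mid E)}{P(T)}.$$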
However, our current investigation focuses on how to deal with new theories, which is a problem that besets Bayesianism more broadly, and quite independently of the chosen confirmation measure. Therefore, we will not opt for any such measure, and focus our attention on what they supervene on: the probability assignment over hypotheses themselves. Nothing in our exposition hinges on the precise measure of confirmation that may be grafted onto the probabilistic models.

### The catch-all hypothesis

Our proposal of open-minded Bayesianism relies on the use of a catch-all hypothesis: given a set of explicit hypotheses, we introduce an additional hypothesis that is the negation of the union of the previous hypotheses. Facing the possibility of currently unexplored theoretical alternatives is relevant, not only for the formal framework of Bayesian confirmation theory, but also for the philosophy of science more generally. See for instance the discussion on the pessimistic meta-induction by Sklar (1981), who speaks of "unborn hypotheses", and by Stanford (2006), who uses the term "unconceived alternatives". In statistical parlance, the catch-all hypothesis makes good on Lindley's demand for observing Cromwell's rule (Lindley 1991, p. 104), which states that prior probabilities of zero or one should only be assigned to logical truths or falsehoods (cf. strict coherence and regularity; see, e.g., Hájek 2012).

We aim to develop a particular way of observing Cromwell's rule, which can be found already in Shimony (1970). He discussed the idea of a catch-all hypothesis in the context of his "tempered personalist" account of probability: he suggested it as a way to represent open-mindedness, which he regarded as a tempering condition to obtain a weakened form of Bayesianism adequate for scientific inference. Shimony (1970, p. 96) suggested not to assign numerical weights (priors) to the catch-all (in contrast to the other hypotheses).

Earman (1992) also discussed the use of a catch-all to make room for later theory change. According to Earman (1992, p. 196), new theories are "shaven off" from the catch-all hypothesis, which thus "serves as a well for initial probabilities for as yet unborn theories, and the actual introduction of new theories results only in drawing upon this well without disturbing the probabilities of previously formulated theories." However, he is not satisfied by the proposal of shaving off from a catch-all; according to Earman (1992, pp. 195–196) it leads to the assignment of successively smaller probabilities to later theories (cf. Romeijn 2004), and shaving off does not give an adequate description of scientific revolutions (in the Kuhnian sense) that involve radically new theories. These reservations do not apply to the way in which we formalize the notion of a catch-all, as we will explain in Sect. 4.
### Assigning open-minded probabilities

When Bayesian ideas are applied within the sciences, the domain of the probability function tends to have a small scope: it is used to compare parametric models that apply to a single, well-delineated target system. In philosophy, however, we often speak as if the domain of the probability function captures every thinkable thought. In particular, in philosophy of science and Bayesian confirmation theory, the probability function assigns values to scientific theories.

If all later changes to the probability assignment are to be due to conditioning, as standard forms of Bayesianism prescribe, we have to be able to specify the domain in such a way as to include all possible scientific theories, including those that are yet to be developed. Nevertheless, it may happen that genuinely new scientific theories do emerge. It is unclear how those can be incorporated in a domain that has to be defined upfront.

In Sect. 2, we will make explicit what the domain of the probability function is on the standard account. Since probability functions assign values to scientific theories as well as to particular pieces of evidence, we have to define a domain that can represent all these objects, even though they are of very different kinds. Specifying this domain provides us with a good opportunity to formalize the notion of the catch-all hypothesis, and how it is used to change the domain of the probability function.

In Sect. 3, we will introduce two forms of open-minded Bayesianism, called vocal and silent, which both employ a catch-all hypothesis. Both are based on the idea that we can remain open-minded about our probabilities by employing sets of probability functions rather than single functions. But both approach the (incomplete) assignment of probabilities in slightly different ways. Also the rule for updating on new theories takes a different form in both contexts.

In Sect. 4, we evaluate the proposals and offer a hybrid approach that alternates between silent and vocal episodes.

## Bayesianism and the catch-all hypothesis

Upon the introduction of a new theory, the domain of the probability function may change. Before we decide how we will capture this change, let us first specify the domain for the standard form of Bayesianism. We will start this investigation by considering Bayes' theorem. This set-up will also prove fruitful to formalize the notions of hypothesis, evidence, and the catch-all, which prepares us for the subsequent treatment of domain changes and associated changes in probability.

### Domain of the probability function

Bayes' theorem is often presented as follows:Footnote 4

\begin{aligned} P(H \mid E) = \frac{P(E \mid H) \ P(H)}{P(E)}, \end{aligned}

where $H$ is a hypothesis and $E$ is a piece of evidence. But what is $P$? This function symbol appears four times in the equation, but can it be interpreted in the same way in all four appearances?Footnote 5
As the common domain, we consider an algebra spanned by the Cartesian product of a set of elementary hypotheses, $$\\varTheta$$, and a sample space, $$\\varOmega$$ (more on these in the following subsections):\n\n\\begin{aligned} \\mathcal {A}(\\varTheta \\times \\varOmega ). \\end{aligned}\n\nTo be precise, we interpret the argument $$H$$ of the prior and the posterior as shorthand for $$H \\times \\varOmega$$ and the argument $$E$$ of the marginal likelihood as shorthand for $$\\varTheta \\times E$$. The interpretation of these elements of the algebra $$\\mathcal {A}$$ remains as before.\n\nDynamics: time stamps In Bayesian confirmation theory, we model the rational degrees of belief of an agent by a probability function. To capture the dynamics of the agent’s degrees of belief, we consider a succession of probability functions, indexed by a time stamp: $$P_t$$ is the probability function that represents the rational degrees of belief of the agent at time $$t$$. In standard Bayesianism, these belief states are linked by Bayes’ rule, as detailed below.\n\n### Evidence and updating\n\nBefore we can start applying probability theory, we have to fix a particular sample space (or set of atomic events), $$\\varOmega$$, which is chosen such that any result of a measurement can be represented as a subset of $$\\varOmega$$. The sample space can be a Cartesian product of sets, which allows us to represent very different types of empirical data.Footnote 6 We represent (actual and hypothetical) pieces of evidence as elements of an algebra on the sample space, $$\\mathcal {A}(\\varOmega )$$. This set is usually called the event space, but in the current context it is better to call it the ‘evidence space’.\n\nDynamics: Bayes’ rule If an agent receives evidence $$E$$ at $$t=n$$, then Bayes’ rule prescribes that the agent has to adopt a new probability function $$P_{t=n}$$ that is equal to the posterior of the agent’s immediately preceding probability function: $$P_{t=n}(\\cdot )=P_{t=n-1}(\\cdot \\mid E)$$, which can be computed via Bayes’ theorem.\n\n### Explicit hypotheses and the catch-all\n\nIn the Bayesian framework, probability functions range over evidence and hypotheses. Hence, in addition to specifying $$\\varOmega$$ and $$\\mathcal {A}(\\varOmega )$$, we need to define a set of hypotheses, $$\\mathcal {H}$$, and an algebra over this set, $$\\mathcal {A}(\\mathcal {H})$$. The hypotheses are only specified up to their empirical content. The scientific theories that motivate them are not brought into view. The way to characterize an empirical hypothesis, $$H$$, is by specifying a likelihood function $$P( \\cdot \\mid H)$$ ranging over the evidence space, $$\\mathcal {A}(\\varOmega )$$. Because the empirical content of hypotheses is spelled out in terms of probability functions over the data, the hypotheses are called statistical.Footnote 7\n\nUnder a hypothesis we may also subsume an entire family (i.e., a set) of likelihood functions, which have the same form except for a different value of a parameter (or vector of parameters).Footnote 8 Henceforth, we will treat all hypotheses as sets of probability functions on the domain $$\\mathcal {A}(\\varOmega )$$. Hypotheses that correspond with singleton sets will be called elementary hypotheses, others will be called composite. Observe that the hypotheses in $$\\mathcal {H}$$ need not be elementary in this sense.\n\nLike the elementary events in $$\\varOmega$$, the hypotheses in $$\\mathcal {H}$$ need to be mutually exclusive and jointly exhaustive. 
However, merely exhausting the union of the hypotheses in $$\\mathcal {H}$$, which is the set of hypotheses that are being considered at a given point in time, may not suffice. In particular, it does not suffice once a new hypothesis emerges, because in that case we want to involve a hypothesis outside $$\\bigcup _{H \\in \\mathcal {H}}H$$. As indicated before, if we do not offer a domain in which possibilities outside $$\\mathcal {H}$$ can be denoted, we cannot begin to formulate the problem of old evidence and new theories.\n\nOur first and important deviation from what we call ‘standard Bayesianism’ is that we give the probability function a domain that includes hypotheses outside the set that is currently under consideration. We propose that the hypotheses ought to be mutually exclusive and jointly exhaustive of the vast set of all probability functions on the evidence space $$\\mathcal {A}(\\varOmega )$$:Footnote 9\n\n\\begin{aligned} \\varTheta = \\{ P: \\mathcal {A}(\\varOmega ) \\rightarrow [0,1] \\mid P \\hbox { is a probability function}\\}. \\end{aligned}\n\nThen, we can represent an empirical, or statistical, hypothesis as a non-empty set of probability functions on $$\\mathcal {A}(\\varOmega )$$; hypotheses are thus elements of an algebra on $$\\varTheta$$.\n\nLet us consider a collection of $$N+1$$ hypotheses (with $$N$$ a positive integer) that are mutually exclusive and jointly exhaustive: this partition of $$\\varTheta$$ contains $$N$$ explicitly formulated hypotheses, $$H_0,\\ldots ,H_{N-1}$$, and one catch-all, $$\\overline{\\varTheta _N}$$. By an ‘explicitly formulated’ hypothesis, $$H_i$$, we mean an empirical hypothesis that is produced by a scientific theory. We do not discuss in detail the scientific theories themselves, or even how they lead to statistical hypotheses.Footnote 10\n\nWe will denote the set of explicitly formulated hypotheses (previously indicated by $$\\mathcal {H}$$) by\n\n\\begin{aligned} T_N = \\left\\{ H_i \\mid i \\in \\left\\{ 0,\\ldots ,N-1\\right\\} \\right\\} . \\end{aligned}\n\n$$T_N$$ represents the ‘theoretical context’ against which hypotheses are being considered. We will denote the union of the hypotheses in $$T_N$$ by\n\n\\begin{aligned} \\varTheta _N = \\bigcup _{i=0}^{N-1} H_i. \\end{aligned}\n\nHence, $$T_N$$ is a partition of $$\\varTheta _N$$. $$\\varTheta _N$$ is the subset of $$\\varTheta$$ that is currently being covered by some scientific theory. The catch-all, $$\\overline{\\varTheta _N}$$, is the complement of $$\\varTheta _N$$ within $$\\varTheta$$ (so, $$T_N \\cup \\{ \\overline{\\varTheta _N} \\}$$ is a partition of $$\\varTheta$$): this hypothesis is the set of all the probability functions that are not produced by any known scientific theory. Whereas the other hypotheses come with a—possibly very intricate—theoretical background story, the catch-all $$\\overline{\\varTheta _N}$$ has no content other than “none of the explicitly formulated hypotheses”. So, $$\\overline{\\varTheta _N}$$ is the set $$\\varTheta \\setminus \\bigcup _{i=0}^{N-1} H_i$$ and that is all that can be said about it. In the same vein, we cannot say anything about the probabilities that the catch-all hypothesis assigns to the evidence.\n\nDynamics: shaving off In the previous subsection, we have seen that the incorporation of evidence leads to an update of the probability function governed by Bayes’ rule. Standard Bayesianism lacks an analogous procedure for revising the probability function in light of a new hypothesis. 
We will now discuss how the presence of the catch-all allows us to represent the dynamics of the set of hypotheses. This prepares us for the proposal of open-minded Bayesianism in the next section.

After a new scientific theory has been developed, the statistical hypothesis it produces may be added to the partition of $\varTheta$ by "shaving off" from the catch-all (in the terminology of Earman 1992, p. 196). At this point in time, the former catch-all may be decomposed into an additional explicitly formulated hypothesis $H_N$ (disjoint from the earlier hypotheses) and a new (smaller) catch-all, $\overline{\varTheta_{N+1}}$. So, the algebra on $\varTheta \times \varOmega$ changes.Footnote 11

### Summary of key ideas

We briefly recapitulate our approach so far and our use of the following terms: scientific theory, statistical hypothesis, sample space, evidence, and catch-all.

A scientific theory together with background assumptions produces an empirical, or statistical, hypothesis. (How this happens requires engaging with the details of a scientific theory, which falls outside the scope of our current framework.) Such an empirical or statistical hypothesis is a set, possibly a singleton, of probability functions. In order to compare hypotheses produced by different theories in the light of a common body of empirical data (and thus to compare their measures of confirmation or evidential support), their probability functions need to have a common domain. This domain is called the evidence space: it is an algebra on a sample space (which may be a Cartesian product set to allow for the representation of mutually independent measurable quantities).

The union of all statistical hypotheses produced by the currently available scientific theories ($\varTheta_N$) does not exhaust the set of all probability functions on the evidence space ($\varTheta$).Footnote 12 The complement of the former set relative to the latter set is called the catch-all hypothesis ($\overline{\varTheta_N}$): unlike the other hypotheses, it is not produced by a scientific theory, but rather it results from a meta-theory. The catch-all hypothesis is included to express that many other hypotheses are conceivable, each associated with a probability assignment or a set of such assignments over the evidence.

With the idea of a catch-all hypothesis in place, we can now turn to a full specification of open-minded Bayesianism. The inclusion of a catch-all hypothesis makes room for modeling the introduction of new hypotheses, namely by shaving them off from the catch-all. But this in itself is not sufficient: we still need to specify how shaving off influences probability assignments over the hypotheses. This is the task undertaken in the next section.

## Open-minded probability assignments

In the previous section, we have introduced the formal framework of open-minded Bayesianism. It is a form of Bayesianism that requires the set of hypotheses to include a catch-all hypothesis. In the current section, we develop the probability kinematics for open-minded Bayesianism. Two versions will be considered: vocal and silent. The two approaches suggest slightly different rules for revising probability functions upon theory change.Footnote 13

### Vocal and silent open-mindedness

In open-minded Bayesianism, hypotheses are represented as sets of probability functions.
If prior probabilities are assigned to the functions within a set, then a single marginal probability function can be associated with the set. But without such a prior probability assignment within the set, the set specifies so-called imprecise probabilities (see, for instance, Walley 2000).

We first clarify probability assignments over explicitly formulated hypotheses. In standard Bayesianism, prior probabilities are assigned to the hypotheses, which are all explicitly formulated. We can furthermore assign priors over the individual probability functions contained within composite hypotheses, if there are any. We call such priors within a composite hypothesis sub-priors. The use of sub-priors leads to a marginal likelihood function for the composite hypothesis.Footnote 14 Upon the receipt of evidence we can update all these priors, i.e., those over elementary and composite hypotheses as well as those within composite hypotheses.

Now recall that in open-minded Bayesianism, the space of hypotheses also contains a catch-all, which is a composite hypothesis encompassing all statistical hypotheses that are not explicitly specified. In standard Bayesianism, this catch-all hypothesis is usually not mentioned, and all probability mass is concentrated on the hypotheses that are formulated explicitly. Within the framework of open-minded Bayesianism, we will represent this standard form of Bayesianism by setting the prior of the catch-all hypothesis to zero.Footnote 15

Let us turn to open-minded Bayesianism itself. To express that we are prepared to revise our theoretical background, we assign a strictly positive prior to the catch-all. However, we agree with Shimony (1970) that it is not sensible to assign any definite value to the prior of the catch-all. Since the catch-all is not based on a scientific theory, the usual "arational" considerations (to employ the terminology of Earman 1992, p. 197) for assigning it a prior, namely by comparing it to hypotheses produced by other theories, do not come into play here. Moreover, it seems clear that the catch-all should give rise to imprecise marginal likelihoods as well, which suggests that we should refrain from assigning sub-priors to its constituents, too. (Recall that the algebra on $\varTheta \times \varOmega$ cannot pick out any strict subset of the catch-all.) These considerations lead us to consider two closely connected forms of open-minded Bayesianism, which both avoid assigning a definite prior to the catch-all:

• Vocal open-minded Bayesianism assigns an indefinite prior and likelihood to the catch-all hypothesis, $\overline{\varTheta_N}$. We represent its prior by $\tau_N \in \,]0,1[$ and its likelihood, evaluated at evidence $E$, by $x_N(E)$. To ensure normalization over all hypotheses (including the catch-all), the priors assigned to the explicitly formulated hypotheses are set equal to the value they would have in a model without a catch-all, now multiplied by $(1-\tau_N)$.

• Silent open-minded Bayesianism assigns no prior or likelihood to the catch-all hypothesis, not even symbolically. To achieve this, all probabilistic statements are conditionalized on the algebra on $\varTheta_N$ (shorthand for $\varTheta_N \times \varOmega$). $\varTheta_N$ represents the union of the hypotheses in the current theoretical context.
From the viewpoint of the algebra on $\varTheta \times \varOmega$, the probability assignments are incomplete.

In both cases, we deviate from the standard Bayesian account in that we give a strictly positive prior to the catch-all, and then allow opinions to be partially unspecified: vocal open-minded Bayesianism retains the entire algebra but uses symbols without numerical evaluation as placeholders, whereas silent open-minded Bayesianism restricts the algebra to which probabilities are assigned (leaving out the catch-all).Footnote 16 Formally, the partial specification of a probability function comes down to specifying the epistemic state of the agent by means of a set of probability assignments (cf. Halpern 2003; Haenni et al. 2003).

### A conservative extension of standard Bayesianism

As detailed in the foregoing, we aim to represent probability assignments of an agent that change over time. An agent's probability function therefore receives a time stamp $t$. Informally, this is often presented as if the probability function changes over time, but it is more accurate to say that the entire probability function gets replaced by a different probability function at certain points in time. Accordingly, subsequent functions need not even have the same domain.

Standard Bayesianism has one way to replace an agent's probability function once the agent learns a new piece of evidence: Bayes' rule. It amounts to restricting the algebra to those sets that intersect with the evidence just obtained. Equivalently, it amounts to setting all the probability assignments outside this domain to zero. If at time $t=n$ an agent learns evidence $E$ with certainty, Bayes' rule amounts to setting $P_{t=n}$ equal to $P_{t=n-1}(\cdot \mid E)$. If $E$ is the first piece of evidence that the agent learns, this amounts to restricting the domain from an algebra on $\varTheta \times \varOmega$ to an algebra on $\varTheta \times E$ and redistributing the probability over the remaining parts of the algebra according to Bayes' theorem.

In addition to this, open-minded Bayesianism requires a rule for replacing an agent's probability function once the agent learns information of a different kind: the introduction of a new hypothesis. This amounts to expanding the algebra to which explicit probability values are assigned (from an algebra on $\varTheta_N \times E$ to an algebra on $\varTheta_{N+1} \times E$). Or in other words, it amounts to refining the algebra on $\varTheta \times E$. On both views, the new algebra is larger (i.e., it contains more sets). What is still missing from our framework is a principle for determining the probability over the larger algebra. In analogy with Bayes' rule, one natural conservativity constraint is that the new probability distribution must respect the old distribution on the preexisting parts of the algebra.

Viewed in this way, our proposal does not introduce any radical departure from standard Bayesianism. Open-minded Bayesianism respects Bayes' rule, but this rule already concerns changes in the algebra, namely reductions. The only new part is that we require a separate rule for enlarging the algebra (extending $\varTheta_N$ or refining the partition of $\varTheta$) rather than for reducing it (restricting $\varOmega$). The principle that governs this change of the algebra again satisfies conservativity constraints akin to Bayes' rule. As detailed below, silent and vocal open-minded Bayesianism will give a slightly different rendering of this rule.
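Schematically (our rendering of the constraint; the precise versions follow in the next subsections), the rule for updating on a new theory preserves the probability ratios among the previously formulated hypotheses:

$$\frac{P_{\text{new}}(H_i)}{P_{\text{new}}(H_j)} = \frac{P_{\text{old}}(H_i)}{P_{\text{old}}(H_j)} \qquad \text{for all } H_i, H_j \in T_N.$$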
### Updating due to a new hypothesis

In this section, we consider how the probability function ought to change upon the introduction of a new hypothesis after some evidence has been gathered. We first consider an abstract formulation of a reduction and extension of the domain, as well as an example of such an episode in the life of an epistemic agent. After that, we consider both versions of open-minded Bayesianism as developments of the standard Bayesian account.

#### Reducing and enlarging: setting the stage

The epistemic episode that we aim to model has three stages:

($t=0$) $N$ explicit hypotheses. At time $t=0$, the theoretical context of the agent consists of $N$ explicit hypotheses: $T_N = \{ H_0,\ldots ,H_{N-1} \}$. The union of the hypotheses in $T_N$ is $\varTheta_N$. The catch-all is the complement of the latter (within $\varTheta$): $\overline{\varTheta_N}$.

($t=1$) Evidence $E$. At time $t=1$, the agent receives evidence $E$. The initial likelihood of obtaining this evidence given any one of the hypotheses $H_i$ ($i \in \{0,\ldots ,N-1\}$) is a particular value $P_{t=0}(E \mid H_i)$.

($t=2$) New hypothesis $H_N$. At time $t=2$, a new scientific theory is introduced, which produces a statistical hypothesis that is a subset of $\overline{\varTheta_N}$; call this additional hypothesis $H_N$. The new set of explicit hypotheses is thus $T_{N+1} = \{ H_0,\ldots ,H_{N-1},H_N \}$. The union of the hypotheses in $T_{N+1}$ is $\varTheta_{N+1} \supset \varTheta_N$. The new catch-all is the complement of $\varTheta_{N+1}$: $\overline{\varTheta_{N+1}} \subset \overline{\varTheta_N}$. In other words: in the algebra on $\varTheta$, the old catch-all $\overline{\varTheta_N}$ is replaced by two disjoint parts, $H_N$ and $\overline{\varTheta_{N+1}}$. The new explicit hypothesis $H_N$ is shaved off from the old catch-all, $\overline{\varTheta_N}$, leaving us with a smaller new catch-all, $\overline{\varTheta_{N+1}}$.

Our first question is how the agent ought to revise her probability assignments at $t=2$. The second question is whether the old evidence ($E$ obtained at $t=1$) can lend positive confirmation to the new hypothesis ($H_N$ formulated at $t=2$). We will consider these questions in the context of standard Bayesianism and both forms of open-minded Bayesianism. As will be seen, the probability assignments that result from open-minded Bayesianism will show the relevant similarities with those of standard Bayesianism: within $\varTheta_N$, both have the same proportions among the probabilities for the hypotheses $H_i$.

Food inspection example. While reading our general treatment of the three stages, it may be helpful to keep in mind the example of Sect. 1.1. In this example, the number of explicit hypotheses is $N=2$. The hypotheses $H_0$ (meaning, informally, "the kitchen is clean") and $H_1$ ("the kitchen is not clean") can be made formal in the following way: the distribution of infections follows a binomial distribution with bias parameter $p_0=0.01$ ($H_0$) or with bias parameter $p_1=0.2$ ($H_1$). The sample space is the same for both hypotheses: $\varOmega = \{0,1\}^{\mathbb{N}}$, where 0 means that a dish tested negatively and 1 means that a dish tested positively. In this case, the evidence takes the form of initial segments of the sequences in the sample space (cylindrical sets of $\{0,1\}^{\mathbb{N}}$).Footnote 17 At $t=1$, the inspector tests five dishes and receives as evidence an initial segment of five times '1'. The initial likelihood of obtaining this evidence $E$ given hypothesis $H_0$ is

\begin{aligned} P_{t=0}(E \mid H_0)=p_0^5=10^{-10}, \end{aligned}

and given hypothesis $H_1$ the initial likelihood of the evidence is

\begin{aligned} P_{t=0}(E \mid H_1)=p_1^5=3.2 \times 10^{-4}. \end{aligned}

At $t=2$, the inspector considers a new hypothesis, $H_2$, which can be modeled as a binomial distribution with $p_2=1$.
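Restricting attention, for illustration only, to binomial hypotheses, so that each probability function is identified with its bias parameter $p \in [0,1]$ (a simplifying identification we adopt here; the paper's full space $\varTheta$ is much larger), the three stages of the example read:

$$t=0{:}\quad T_2 = \{H_0, H_1\} = \{\{0.01\}, \{0.2\}\}, \qquad \overline{\varTheta_2} = [0,1] \setminus \{0.01, 0.2\};$$

$$t=2{:}\quad H_2 = \{1\} \subset \overline{\varTheta_2}, \qquad T_3 = \{H_0, H_1, H_2\}, \qquad \overline{\varTheta_3} = [0,1] \setminus \{0.01, 0.2, 1\}.$$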
In this case, the evidence takes the form of initial segments of the sequences in the sample space (cylindrical sets of $$\\{0,1\\}^\\mathbb {N}$$).Footnote 17 At $$t=1$$, the inspector tests five dishes and receives as evidence an initial segment of five times ‘1’. The initial likelihood of obtaining this evidence $$E$$ given hypothesis $$H_0$$ is\n\n\\begin{aligned} P_{t=0}(E \\mid H_0)=p_0^5=10^{-10}, \\end{aligned}\n\nand given hypothesis $$H_1$$ the initial likelihood of the evidence is\n\n\\begin{aligned} P_{t=0}(E \\mid H_1)=p_1^5=3.2 \\times 10^{-4}. \\end{aligned}\n\nAt $$t=2$$, the inspector considers a new hypothesis, $$H_2$$, which can be modeled as a binomial distribution with $$p_2=1$$.\n\n#### No update rule for standard Bayesianism\n\nStandard Bayesianism works on a fixed algebra on a fixed set $$\\varTheta _N \\times \\varOmega$$. On this view, none of the probabilities can change due to hypotheses that are external to $$\\varTheta _N$$.\n\n( $$t=0$$ ) N explicit hypotheses Each explicit hypothesis receives a prior probability, $$P_{t=0}(H_i)$$. If we assume that, initially, the agent is completely undecided with regard to the $$N$$ hypotheses, she will assign equal priors to them: $$P_{t=0}(H_i)=1/N$$ (for all $$i \\in \\{0,\\ldots ,N-1\\}$$).Footnote 18\n\n( $$t=1$$ ) Evidence E The marginal likelihood of the evidence can be obtained via the law of total probability:\n\n\\begin{aligned} P_{t=0}(E) = \\sum _{j=0}^{N-1} P_{t=0}(H_j) \\ P_{t=0}(E \\mid H_j), \\end{aligned}\n\nwhich is about $$1.6 \\times 10^{-4}$$ for the example. The posterior probability of each hypothesis given the evidence can be obtained by Bayes’ theorem; for all $$i \\in \\{0,\\ldots ,N-1\\}$$:\n\n\\begin{aligned} P_{t=0}(H_i \\mid E)=\\frac{P_{t=0}(H_i) \\ P_{t=0}(E \\mid H_i)}{P_{t=0}(E)}. \\end{aligned}\n\nIn the example, this is about $$3.1 \\times 10^{-7}$$ for $$H_0$$ and $$1 - 3.1 \\times 10^{-7}$$ for $$H_1$$. According to Bayes’ rule, upon receiving the evidence $$E$$, the agent should replace her probability function by $$P_{t=1}=P_{t=0}(\\cdot \\mid E)$$. The inspector should now assign a probability to $$H_1$$ that is more than three million times higher than the probability she assigns to $$H_0$$. So, in the example, the confirmation is positive for $$H_1$$ and negative for $$H_0$$.\n\n( $$t=2$$ ) New hypothesis $$H_N$$ Suppose a new hypothesis is formulated: some $$H_N \\subseteq \\overline{\\varTheta _N}$$. In terms of the example: the inspector was in a situation in which she could have received evidence with a much higher initial probability than that of the evidence she actually received, and we might imagine that this makes her decide to take the hypothesis $$H_2$$ concerning infected test strips into consideration. Now since, in general, the new hypothesis $$H_N$$ is not a part of the theoretical context, $$T_N$$, the intersection of $$H_N$$ with $$\\varTheta _N$$ is empty. Hence, the probability assigned to $$H_N$$ is zero, simply because $$P(\\overline{\\varTheta _N})=0$$. And since the prior of this hypothesis is zero, the confirmation of this hypothesis is zero as well. In other words, standard Bayesianism simply does not allow us to represent new hypotheses (other than by the empty set).
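For readers who want to check the arithmetic, here is a minimal computation of the standard Bayesian quantities just derived (our own sketch; the variable names are hypothetical).\n\n```python\n# Standard Bayesianism in the food inspection example: equal priors over the\n# two explicit hypotheses, five positive tests as the evidence E.\n\np = {'H0': 0.01, 'H1': 0.2}      # binomial bias under each hypothesis\nprior = {'H0': 0.5, 'H1': 0.5}   # equal priors, as assumed in the text\nlik = {h: p[h] ** 5 for h in p}  # P(E | H_i) = p_i^5 for five times '1'\n\nmarginal = sum(prior[h] * lik[h] for h in p)              # law of total probability\nposterior = {h: prior[h] * lik[h] / marginal for h in p}  # Bayes' theorem\n\nprint(marginal)                           # ~1.6e-04\nprint(posterior['H0'])                    # ~3.1e-07\nprint(posterior['H1'] / posterior['H0'])  # ~3.2e+06: 'three million times'\n```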
In this sense, the ensuing problem of old evidence does not even occur: new theories cannot be taken into account in the first place.\n\n#### Update rule for vocal open-minded Bayesianism\n\nVocal open-minded Bayesianism employs a refinable algebra on a fixed set $$\\varTheta \\times \\varOmega$$. On this view, none of the previous probability assignments change upon theory change, but additional probabilities can be expressed and earlier expressions can be rewritten accordingly.\n\n( $$t=0$$ ) N explicit hypotheses Each explicit hypothesis receives a prior, $$P_{t=0}(H_i)$$ (and, where appropriate, sub-priors). The proposal of vocal open-mindedness is to assign an undefined prior, $$\\tau _N \\in (0,1)$$, to the catch-all hypothesis, $$\\overline{\\varTheta _N}$$:\n\n\\begin{aligned} P_{t=0}(\\overline{\\varTheta _N})=\\tau _N. \\end{aligned}\n\nNo subsets of the catch-all receive (sub-)priors at $$t=0$$, but certain subsets of the catch-all will receive a prior later on. To ensure normalization over all hypotheses (including the catch-all), the priors assigned to the explicitly formulated hypotheses are set equal to the value they had in the model without a catch-all, now multiplied by $$(1-\\tau _N)$$; for each $$i \\in \\{ 0,\\ldots ,N-1 \\}$$:\n\n\\begin{aligned} P_{t=0}(H_i) = (1-\\tau _N) \\ P_{t=0}(H_i \\mid \\varTheta _N). \\end{aligned}\n\nAlthough the value of $$\\tau _N$$ is unknown, the $$N+1$$ priors sum to unity. In the example, we have as prior of the catch-all $$P_{t=0}(\\overline{\\varTheta _2})= \\tau _2$$ and as prior for the two explicit hypotheses $$P_{t=0}(H_0)=1/2 \\times (1-\\tau _2)=P_{t=0}(H_1)$$.\n\nThe likelihood functions of the explicit hypotheses $$H_i$$ are the same as in the usual model. Regarding the likelihood of the catch-all, the proposal is to represent it by an undefined weighted average of functions in $$\\varTheta \\setminus \\varTheta _N$$: $$P_{t=0}( \\cdot \\mid \\overline{\\varTheta _N}) = x_N(\\cdot )$$.\n\n( $$t=1$$ ) Evidence E The marginal likelihood of the evidence has an additional term as compared to the standard model:\n\n\\begin{aligned} P_{t=0}(E) = \\sum _{j=0}^{N-1} P_{t=0}(H_j) \\ P_{t=0}(E \\mid H_j) \\ + \\ \\tau _N \\ x_N(E). \\end{aligned}\n\nDue to the presence of undetermined factors associated with the catch-all, $$P_{t=0}(E)$$ cannot be evaluated numerically. As a result, the updated probability function, $$P_{t=1}(\\cdot )=P_{t=0}(\\cdot \\mid E)$$, contains unknown factors as well. In particular, consider the posteriors for $$H_i$$ (for all $$i \\in \\{0,\\ldots ,N-1\\}$$):\n\n\\begin{aligned} \\begin{array}{lll} P_{t=0}(H_i \\mid E) &{} = &{} \\frac{P_{t=0}(H_i) \\ P_{t=0}(E \\mid H_i)}{P_{t=0}(E)} \\\\ &{} = &{} \\frac{(1-\\tau _N) \\ P_{t=0}(H_i \\mid \\varTheta _N) \\ P_{t=0}(E \\mid H_i)}{\\sum _{j=0}^{N-1} (1-\\tau _N) \\ P_{t=0}(H_j \\mid \\varTheta _N) \\ P_{t=0}(E \\mid H_j) \\ + \\ \\tau _N \\ x_N(E)}. \\end{array} \\end{aligned}\n\nAlthough this expression cannot be evaluated numerically, some comparative probability evaluations can be computed since the unknown factors cancel. In particular, the ratio of two posterior probabilities assigned to explicit hypotheses can still be obtained; for $$i,j \\in \\{0,\\ldots ,N-1\\}$$:\n\n\\begin{aligned} \\frac{P_{t=1}(H_i)}{P_{t=1}(H_j)} = \\frac{P_{t=0}(H_i \\mid \\varTheta _N) \\ P_{t=0}(E \\mid H_i)}{P_{t=0}(H_j \\mid \\varTheta _N) \\ P_{t=0}(E \\mid H_j)}.
\\end{aligned}\n\nIn the example, it can still be established that after receiving evidence $$E$$ the inspector should assign a probability to $$H_1$$ that is more than three million times higher than the probability she assigns to $$H_0$$. Similarly, we can still establish that both hypotheses have a very small likelihood for the evidence that is obtained. And this may be enough to motivate the introduction of a new hypothesis.\n\nIn the context of vocal open-mindedness, any expression of the belief change will contain unknown factors, and the implications are worse than for the posteriors: if the change is measured as the difference between posterior and prior, both terms have different unknown factors ($$\\frac{1-\\tau _N}{P_{t=0}(E)}$$ and $$1-\\tau _N$$, respectively).\n\n( $$t=2$$ ) New hypothesis $$H_N$$ Recall that the old catch-all $$\\overline{\\varTheta _N}$$ is replaced by two disjoint parts: the hypothesis that is shaven off, $$H_N$$, and the remaining part of the catch-all, $$\\overline{\\varTheta _{N+1}}$$. Finite additivity suggests to decompose the prior that was assigned to $$\\overline{\\varTheta _N}$$ into two corresponding terms:\n\n\\begin{aligned} \\tau _N = P_{t=0}(H_N) \\ + \\ \\tau _{N+1}, \\end{aligned}\n\nwhere $$P_{t=0}(H_N)$$ is the prior of the new hypothesis $$H_N$$ and $$\\tau _{N+1} \\in ]0,\\tau _N[$$ is the (indefinite) prior of the remaining catch-all $$\\overline{\\varTheta _{N+1}}$$, both of which are assigned retroactively. Although the value of $$\\tau _{N+1}$$ is unknown, the $$N+2$$ priors sum to unity.\n\nThe priors for the hypotheses in $$T_N$$ can thence be written in three ways:\n\n\\begin{aligned} P_{t=0}(H_i)&= (1-\\tau _N) \\ P_{t=0}(H_i \\mid \\varTheta _N) \\\\&= (1-\\tau _{N+1}) P_{t=0}(H_i \\mid \\varTheta _{N+1}) \\\\&= (1-\\tau _{N+1}) \\ (1-P_{t=0}(H_N \\mid \\varTheta _{N+1})) \\ P_{t=0}(H_i \\mid \\varTheta _N), \\end{aligned}\n\nwhere $$P_{t=0}(H_N \\mid \\varTheta _{N+1})$$ is some definite number $$\\in ]0,\\tau _{N}[$$. For instance, if we had a uniform prior over $$T_N$$ and we want to keep a uniform prior over $$T_{N+1}$$, we have to set $$P_{t=0}(H_N \\mid \\varTheta _{N+1})=\\frac{1}{N+1}$$.\n\nNow that $$H_N$$ is an explicit hypothesis, its likelihood is a definite function $$P_{t=0}(\\cdot \\mid H_N)$$ (also specified retroactively). In the example, the likelihood for obtaining the evidence $$P_{t=0}(E \\mid H_2)$$ is 1 on the new hypothesis. We assign an undefined likelihood to the new catch-all: $$P_{t=0}( \\cdot \\mid \\overline{\\varTheta _{N+1}}) = x_{N+1}(\\cdot )$$. This allows us to rewrite the previous expression obtained for the marginal likelihood:\n\n\\begin{aligned} P(E)&= \\sum \\nolimits _{j=0}^{N-1} (1-\\tau _{N+1}) \\ ( 1-P_{t=0}(H_N \\mid \\varTheta _{N+1}) ) \\ P_{t=0}(H_j \\mid \\varTheta _N) \\ P_{t=0}(E \\mid H_j)\\\\&+ \\ P_{t=0}(H_N) \\ P_{t=0}(E \\mid H_N) \\ + \\ \\tau _{N+1} \\ x_{N+1}(E), \\end{aligned}\n\nwhere the last two terms equal $$\\tau _N \\ x_N(E)$$.\n\nAt this point, we can also rewrite the expressions for the posteriors (for all $$i \\in \\{0,\\ldots ,N-1\\}$$):\n\n\\begin{aligned} P_{t=2}(H_i) = \\frac{(1-\\tau _{N+1}) \\ (1-P_{t=0}(H_N \\mid \\varTheta _{N+1})) \\ P_{t=0}(H_i \\mid \\varTheta _N) \\ P_{t=0}(E \\mid H_i)}{P(E)}. \\end{aligned}\n\nMoreover, we can now assign a posterior to $$H_N$$:\n\n\\begin{aligned} P_{t=2}(H_N) = \\frac{(1-\\tau _{N+1}) \\ P_{t=0}(H_N \\mid \\varTheta _{N+1}) \\ P_{t=0}(E \\mid H_N)}{P(E)}. 
\\end{aligned}\n\nAlthough it is still not possible to evaluate these posteriors numerically, we can compute new comparative probability evaluations for ratios involving $$H_N$$. For all $$i \\in \\{0,\\ldots ,N-1\\}$$:\n\n\\begin{aligned} \\frac{P_{t=2}(H_N)}{P_{t=2}(H_i)} = \\frac{P_{t=0}(H_N \\mid \\varTheta _{N+1}) \\ P_{t=0}(E \\mid H_N)}{( 1-P_{t=0}(H_N \\mid \\varTheta _{N+1}) ) \\ P_{t=0}(H_i \\mid \\varTheta _N) \\ P_{t=0}(E \\mid H_i)}. \\end{aligned}\n\nIn the case of uniform priors, additional factors cancel:Footnote 19\n\n\\begin{aligned} ( 1-P_{t=0}(H_N \\mid \\varTheta _{N+1}) ) \\ P_{t=0}(H_i \\mid \\varTheta _N)&= \\left( 1-\\frac{1}{N+1}\\right) \\ \\frac{1}{N} \\\\&= \\frac{1}{N+1} \\\\&= P_{t=0}(H_N \\mid \\varTheta _{N+1}). \\end{aligned}\n\nAnd so, in the case of uniform priors, we obtain:\n\n\\begin{aligned} \\frac{P_{t=2}(H_N)}{P_{t=2}(H_i)} = \\frac{P_{t=0}(E \\mid H_N)}{P_{t=0}(E \\mid H_i)}. \\end{aligned}\n\nFor the example, we can compute $$\\frac{P_{t=2}(H_2)}{P_{t=2}(H_1)} = \\frac{1}{p_1^5}=\\frac{1}{3.2 \\times 10^{-4}}=3125$$. So, in the new theoretical context ($$T_3$$) the posterior of the new hypothesis ($$H_2$$) given the old evidence $$E$$, namely the sequence of five positive tests, is more than three thousand times higher than that of the hypothesis that was best confirmed ($$H_1$$) within the old theoretical context ($$T_2$$).Footnote 20\n\nAt $$t=1$$, no degree of belief can be expressed for $$H_N$$, but at $$t=2$$ the degrees of belief regarding $$H_N$$ at $$t=1$$ can be expressed retroactively and the expressions for the old hypotheses $$H_i$$ can be rewritten. We are still left with two terms that have different unknown factors, which do not simply cancel out.Footnote 21 At any rate, degrees of confirmation can be evaluated if we first condition the probability assignments on the current theoretical context, $$\\varTheta _{N}$$. We return to this point below.\n\n#### Update rule for silent open-minded Bayesianism\n\nSilent open-minded Bayesianism employs an algebra on a set $$\\varTheta _N \\times \\varOmega$$, which may be extended to $$\\varTheta _{N+1} \\times \\varOmega$$ (and beyond). On this view, when the theoretical context changes, new conditional probabilities become relevant to the agent.\n\nLet us briefly motivate the silent version as an alternative to vocal open-mindedness. We have seen that the vocal version comes with a heavy notational load. Given that, in the end, we can only compute comparative probabilities, it seems desirable to dispense with the symbolic assignment of a prior and a likelihood to the catch-all hypothesis. Silent open-mindedness achieves this by conditioning all evaluations on $$\\varTheta _N$$, the union of the hypotheses in the theoretical context. This allows us to express the agent’s opinions concerning the relative probability of $$H_{i}$$ and $$H_{j}$$ (for any $$i, j \\in \\{0,\\ldots ,N-1\\}$$) without saying anything, not even in terms of free parameters, about the absolute probability that they have.
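To see concretely that only comparative evaluations survive, one can let a computer algebra system carry the indefinite factors of the vocal version along. This is our own sketch, not part of the paper’s formalism; the symbols mirror $$\\tau _N$$ and $$x_N(E)$$ from the vocal update rule above.\n\n```python\n# The indefinite catch-all factors cancel from any posterior ratio of two\n# explicit hypotheses, so only comparative opinions are ever computable.\nimport sympy as sp\n\ntau, xE = sp.symbols('tau_N x_NE', positive=True)  # indefinite catch-all terms\nq0, q1, L0, L1 = sp.symbols('q0 q1 L0 L1', positive=True)\n# q_i stands for P(H_i | Theta_N), L_i for P(E | H_i)\n\nmarginal = (1 - tau) * (q0 * L0 + q1 * L1) + tau * xE  # P(E): not a number\npost0 = (1 - tau) * q0 * L0 / marginal\npost1 = (1 - tau) * q1 * L1 / marginal\n\nprint(sp.simplify(post0 / post1))  # L0*q0/(L1*q1): tau_N and x_N(E) drop out\n```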
Opinions about the theories in the current theoretical context $$T_N$$ are thus comparative only.\n\n( $$t=0$$ ) N explicit hypotheses Instead of assigning absolute priors $$P_{t=0}(H_i)=P_{t=0}(H_i \\mid \\varTheta )$$, silent Bayesianism suggests assigning only priors that are conditionalized on the theoretical context, $$P_{t=0}(H_i \\mid \\varTheta _N)$$.\n\n( $$t=1$$ ) Evidence E Since $$H_{i} \\subseteq \\varTheta _{N}$$, the likelihoods of explicit hypotheses are statistically independent of the theoretical context:\n\n\\begin{aligned} P_{t=0}(E | H_{i} \\cap \\varTheta _{N}) = P_{t=0}(E | H_{i}). \\end{aligned}\n\nSilent open-mindedness suggests not assigning a likelihood to the catch-all. This “probability gap” is not problematic (by the terminology of Hájek 2003), since all the other probability assignments are conditionalized on $$\\varTheta _N$$. The agent can update her comparative opinion in the usual Bayesian way, as long as she conditionalizes everything on this context:Footnote 22\n\n\\begin{aligned} P_{t=1}(H_{i} \\mid \\varTheta _{N}) = P_{t=0}(H_{i} | E \\cap \\varTheta _{N}) = P_{t=0}(H_{i} | \\varTheta _{N}) \\ \\frac{P_{t=0}(E | H_{i})}{P_{t=0}(E | \\varTheta _{N})}. \\end{aligned}\n\n( $$t=2$$ ) New hypothesis $$H_N$$ After a new hypothesis has been introduced, the silently open-minded Bayesian has to start conditionalizing on $$\\varTheta _{N+1}$$, the union of the expanded theoretical context, rather than on $$\\varTheta _N$$. Once $$H_N$$ gets formulated, its likelihood will be known too. We require that the probability evaluations conditional on the old context $$\\varTheta _N$$ do not change. In this way, we cohere with standard Bayesianism and with the vocal open-minded variant.\n\nWe can treat $$P_{t=2}(H_N \\mid \\varTheta _{N+1})$$ much like a ‘postponed prior’, and give it a value based on arational considerations that are not captured by constraints within the (extended) Bayesian framework. In particular, we can engage in the kind of reconstructive work that is done in vocal open-mindedness, but this is not mandatory here. We might also determine the posterior probability of $$H_N$$ and so reverse-engineer what the prior must have been to make this posterior come out after the occurrence of $$E$$. In any case, when moving to a new context, the other posteriors need to be changed accordingly (such that the $$N+1$$ posteriors sum to unity): $$P_{t=2}(H_i \\mid \\varTheta _{N+1}) = (1-P_{t=2}(H_N \\mid \\varTheta _{N+1})) P_{t=1}(H_i \\mid \\varTheta _N)$$. So, the move from $$T_N$$ to $$T_{N+1}$$ essentially amounts to a kind of recalibration of the posteriors.\n\nImportantly, we can compute all known confirmation measures using the priors and posteriors that are conditional on a particular theoretical context. Once the context changes, this clearly affects the confirmation allotted to the respective hypotheses. The price for this transparency is of course that we can only establish the confirmation of a hypothesis relative to a theoretical context $$\\varTheta _N$$. The natural use of a degree of confirmation thus becomes comparative: it tells us which hypothesis among the currently available ones is best supported by the evidence, but there is no attempt to offer an absolute indication of this support.\n\n## Evaluation and conclusion\n\nIn this section we critically evaluate open-minded Bayesianism.
We clarify our views on it, and conclude that it provides a handle on the problem of old evidence: it explains how old evidence can be used afresh without violating Bayesian coherence norms. Towards the end, we sketch a number of ideas and problems that deserve further exploration.\n\n### Evaluation of open-minded Bayesianism\n\nIt may be argued that open-minded Bayesianism fails to provide us with the required normative guidance. In the silent version, it only concerns suppositional reasoning and hence cannot inform our unconditional beliefs. In metaphorical terms, the worry is that the agent cannot keep hiding behind the conditionalization stroke. In the vocal form, the same worry arises in relation to the use of factors with indefinite numerical values, which are introduced to represent the prior and likelihood of the catch-all hypothesis, but which soon ‘infect’ all probability assignments and measures of confirmation. Either way, it may seem that the agent must come clean on her absolute commitments at some point.\n\nWe respond to this worry by biting the bullet. If we want to allow new theories to enter the conceptual scene, then we will have to provide room for this in our formal framework. There are attempts to accommodate (other forms of) theory change in a Bayesian framework that employ fully specified probability assignments (e.g., Romeijn 2004, 2005). In this paper, by contrast, we have offered a framework that creates room for new theories by leaving part of the probability assignment unspecified. We accept that this leads to a model that only concerns conditional belief.\n\nWe should emphasize that we are not alone in preaching an open-minded form of Bayesianism. We already mentioned the proposal for tempered Bayesianism by Shimony (1970), who suggested the use of a catch-all hypothesis in this context. This suggestion was also discussed by Earman (1992), who introduced the evocative terminology of shaving off new hypotheses from the catch-all. Furthermore, our proposal of humble Bayesianism is related to earlier work by Salmon (1990) and Lindley (1991). Morey et al. (2013) recently introduced what they call humble Bayesianism in a debate over the nature of statistical model comparisons.\n\nThe latter paper lends further support to open-minded Bayesianism. The point of Morey et al. (2013) is that an agent may want to use Bayesian methods to evaluate statistical models, without buying into the conviction, implicit in the standard Bayesian framework, that one of the theories under consideration is true. After all, a standard Bayesian will have the probabilities of the hypotheses under consideration add up to one, and so judges herself to be perfectly calibrated (cf. Dawid 1982). The standard Bayesian is overly confident, hence a more open-minded form of Bayesianism seems called for.\n\nThe price to pay is that the epistemic attitudes for which the framework of the open-minded Bayesian provides the norms are different: they have a conditional nature. Whether we spell out the details using a vocal or a silent open-mindedness, the normative framework tells the agent what to believe only if she temporarily supposes, without committing to it, that the true theory is among those currently under consideration.\n\n### The old evidence problem\n\nNow that we have bitten the bullet, we better make sure that we do so for good reasons. 
In this section, we argue that open-minded Bayesianism provides a new handle on the problem of old evidence, by explaining how old evidence can be re-used.\n\nIn his encyclopedia entry on Bayesianism, Talbott (2008) summarizes the Bayesian problem of new theories as follows: “Suppose that there is one theory $$H_1$$ that is generally regarded as highly confirmed by the available evidence $$E$$. It is possible that simply the introduction of an alternative theory $$H_2$$ can lead to an erosion of $$H_1$$’s support. [...] This sort of change cannot be explained by conditionalization.” It is precisely this “erosion” of support that can be captured by the update rule for open-minded Bayesianism, since both approaches make the agent reconsider the posteriors of the old hypotheses. The strong point of open-minded Bayesianism is that this reconsideration of the posteriors does not render the agent probabilistically incoherent.\n\nWhen writing about the operation of shaving off new hypotheses, (Earman (1992), p. 195) worried that a point may be reached “where the new theory has such a low initial probability as to stand not much of a fighting chance.” This worry, however, does not apply to our framework. Notice that we do not assign an explicit value to the prior of the current theoretical context. We may think of the prior associated with the catch-all hypothesis as a number extremely close to unity—and the humbler we are, the closer to unity we can imagine it to be. At any rate, no matter how large the discrepancy between the posteriors of the old hypotheses and the new one, the impact that the decomposition of the catch-all has on the catch-all’s posterior will remain unknown, or indefinite. Of course, once we pin down a value for the probability $$\\tau _N$$, the worry of Earman becomes a live one. But lacking such a definite value,Footnote 23 the problem that the catch-all gets crowded out by explicit hypotheses does not arise.\n\nThere are, however, differences in how the vocal and silent approaches to open-minded Bayesianism deal with reassessing the posteriors, and in what role they give to old evidence. The vocal approach requires us to assign a prior to the new hypothesis $$H_N$$ after the fact, and to compute its current posterior on the basis of this assignment. The other posteriors are obtained via a renormalization.Footnote 24 This approach requires us to evaluate probabilities retroactively: priors have to be set post hoc, for hypotheses that were not known at the time.Footnote 25 To our mind this need not be a cause of concern though. One cannot unlearn the evidence that has been gathered, but it is still possible to use base rates or other sources of objective information to determine the priors retroactively.Footnote 26\n\nThe silent approach, by contrast, requires us to assign a posterior to the new hypothesis $$H_N$$ without offering an explicit recourse to the prior probability assignments over the old hypotheses. The point here is rather subtle. It is in virtue of a prior probability assignment of $$\\tau _N$$ to the old catch-all $$\\varTheta _N$$ that we can meaningfully claim, as part of the vocal approach, that the prior of the old catch-all is decomposed into the prior of a new hypothesis $$H_N$$ and the prior of a new catch-all $$\\varTheta _{N+1}$$. Since the silent approach remains silent precisely on this prior, it is hard to see how we can retroactively decompose it. So in this approach, it is not clear whether old evidence ever confirms new theories. 
Unless we have set the value of $$P_{t=2}(H_N \\mid \\varTheta _{N+1})$$ by means of a reconstruction that ultimately depends on $$P_{t=0}(H_N \\mid \\varTheta _{N+1})$$, its value is not obtained via conditionalization on $$E$$. In silent Bayesianism, the old evidence is therefore not given a new role.\n\nNow that we have discussed the role of evidence in two forms of open-minded Bayesianism, it is time to take stock. Both approaches suffer some drawbacks. The vocal proposal comes with the complication of a heavy notational load that hampers the evaluation of the degree of confirmation. The silent proposal allows too much freedom in the assignment of a posterior to the new hypothesis—so much freedom that it is not clear that the old evidence has any impact. For these reasons, we propose a hybrid approach to open-minded Bayesianism, which combines the best elements of both.\n\nOn our hybrid proposal, the open-minded Bayesian remains in the silent phase,Footnote 27 except for the times at which her theoretical context changes. Unlike a standard Bayesian, the open-minded Bayesian is allowed to change the algebra to which probabilities are assigned and thus to assign non-zero probabilities to the new hypothesis, which is impossible without a catch-all. Then she enters the vocal phase: she engages in assigning a prior to the new hypothesis (retroactively), computing its posterior given the evidence (also retroactively), and renormalizing the other priors.\n\nOpen-minded Bayesianism thus offers a particular perspective on the use of old evidence for confirming a new theory. On the conceptual level, it shows how our perception of evidence and confirmation changes if we move from one theoretical context to another. Relative to one set of hypotheses, the data were telling towards one particular candidate hypothesis, and so counted as evidence that confirms this candidate. But with the inclusion of a new hypothesis, the data may tell against the formerly best candidate, and so count as evidence that disconfirms it. We take it to be a virtue of our model that it brings out this context-sensitivity of evidence and confirmation.\n\n### Illustration of the hybrid approach\n\nTo make our proposal for a hybrid approach more vivid, we apply it to the food inspection example. Initially, when the food inspector implicitly assumes her equipment to be working properly, she can be described by the silent approach to open-minded Bayesianism. Within the initial context, she only needs to take into account two explicit hypotheses: the kitchen is clean or it is not. She assigns prior probabilities to these hypotheses and she computes posteriors, but these assignments are conditional on her implicit assumption that the testing strips are uncontaminated (as well as the many other background assumptions collected in the theoretical context). So far, she acts much like any Bayesian would; her open-mindedness will surface only when provoked.\n\nThe result, that five dishes out of five appear to be infected, was initially unlikely on both of her explicit hypotheses. (Recall that the initial likelihood was $$10^{-10}$$ in the case of a clean kitchen and $$3.2 \\times 10^{-4}$$ in the case of an unclean kitchen.) Computing the posterior probabilities, which implicitly requires us to assume that the correct hypothesis is among the two hypotheses being considered, leads to a value close to zero $$(3.1 \\times 10^{-7})$$ for a clean kitchen and a value near unity $$(1 - 3.1 \\times 10^{-7})$$ for an unclean kitchen.
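Anticipating the numbers reported in the next few paragraphs, the recalibration that the hybrid proposal performs at the context shift can be sketched as follows. This is our own illustration; the uniform ‘postponed prior’ for the new hypothesis is one admissible choice, not a mandate of the proposal.\n\n```python\n# Posteriors conditional on a theoretical context, with uniform priors over\n# the explicit hypotheses; H2 is the contaminated-strips hypothesis (bias 1).\n\nlik = {'H0': 0.01 ** 5, 'H1': 0.2 ** 5, 'H2': 1.0}\n\ndef posteriors(context):\n    weight = {h: lik[h] / len(context) for h in context}  # uniform priors\n    total = sum(weight.values())                          # P(E | context)\n    return {h: weight[h] / total for h in context}\n\nprint(posteriors(['H0', 'H1']))       # old context: H1 gets ~1 - 3.1e-07\nnew = posteriors(['H0', 'H1', 'H2'])  # after shaving off H2\nprint(new)                            # H2 ~ 0.99968, H1 ~ 3.2e-04\nprint(new['H2'] / new['H1'])          # ~3125, as computed in Sect. 3.3.3\n```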
If the priors were equal (or at least of the same order of magnitude), then on any measure of confirmation, the evidence provides very strong confirmation for the hypothesis that the kitchen was unclean.\n\nThe observation that it is highly unlikely even for an unclean kitchen to produce five infected dishes may suggest that there is an even better hypothesis ‘out there’ that has not yet been taken into account. Indeed, seeing the result prompts the inspector to reconsider one of her implicit assumptions, and she turns its negation into a new theory (and associated statistical hypothesis): the testing strips may not have been clean after all ($$\\hbox {bias} = 1$$).Footnote 28 (Of course, this is still but one of many alternative hypotheses.) Our framework for open-minded Bayesianism is able to represent this formally.\n\nIn the vocal phase, the agent shaves off her third hypothesis from the catch-all and revises her probability assignments: she retroactively assigns a prior to the new hypothesis, adjusts the priors of the two old hypotheses by a suitable factor, and computes the likelihood of the old evidence on the new hypothesis (as described in Sect. 3.3.3). All this leads her to reassess the posteriors of the old hypotheses and to assign a posterior to the new hypothesis. Assuming equal priors, the final result is this: within the new theoretical context, the posterior of the new hypothesis given the old evidence is more than three thousand times higher than that of the hypothesis that was best confirmed within the old theoretical context. Irrespective of the details of the confirmation measure and assuming priors of at least equal orders of magnitude, this implies that the old evidence strongly confirms the new hypothesis and disconfirms the others. This illustrates that it is the shift in theoretical context itself that may cause old evidence to confirm a new hypothesis.\n\nOnce the agent is satisfied that, for the evidence currently at hand, the new theoretical context includes all the relevant hypotheses, she may start to conditionalize all her findings on this context and thereby enter a new silent phase. The remaining catch-all hypothesis need not be mentioned again until new doubts arise.\n\nIn Kuhnian terminology, the silent version of open-minded Bayesianism is sufficient for describing episodes of normal science (and if the conditionalization on the theoretical context remains implicit, it is indistinguishable from the usual Bayesian picture), but the vocal version of open-minded Bayesianism is required to model revolutionary changes in the theoretical context.\n\n### Further research\n\nWith the foregoing, we believe we have only scratched the surface of the matter at hand. Many avenues for further research lie open for exploration. In what follows, we briefly mention a number of these avenues. With this list we showcase our ongoing research and invite the reader to join in, but mostly we indicate where we ourselves feel that our account is lacking.\n\nOne important consideration that has received relatively little attention in the foregoing concerns degrees of confirmation. Our goal with this paper was to show that we can accommodate the introduction of a new theory and hence a new empirical hypothesis in the Bayesian framework, and that old evidence can play a role in the determination of the posterior probability of this new hypothesis without violating probabilistic coherence.
We have been mostly silent on how the posteriors may be used to compute a degree of confirmation, so that the impact of old evidence can be expressed more precisely: any such story will supervene on the probability assignments. However, a complete account of open-minded Bayesianism might involve more detail on degrees of confirmation.\n\nAnother aspect of the process of theory change targeted in this article certainly deserves a more detailed normative treatment: the decision to introduce a new theory. In the foregoing, we have treated this decision as completely external to the model. However, we also indicated that the search for new theories may be motivated by a so-called statistical model selection criterion, e.g., by a measure of the predictive performance of the agent, or by some other score that attaches to the data and the hypotheses currently under consideration. We think that our account, which may provide rationality constraints on the transition from one theoretical context to another, can be combined fruitfully with an account of how theoretical contexts are evaluated and selected.\n\nFurthermore, we should stress that we have only considered one type of theory change—a change that may be captured by shaving off new hypotheses from a catch-all hypothesis.Footnote 29 In general, theory change may lead to other types of change to the domain of the probability function, $$\\mathcal {A}(\\varTheta \\times \\varOmega )$$, in various ways. For one, we have not explicitly considered changes in the space $$\\varOmega$$ of empirical possibilities. Notice that such changes are generally more radical than changes in the theoretical realm: theories obtain their empirical content in terms of hypotheses that are formulated by means of $$\\varOmega$$. One captivating question concerns the exact reach of our account of new theory and old evidence. Specifically, can we assume at the outset that $$\\varTheta$$ and $$\\varOmega$$ are rich enough to accommodate all conceivable theory changes? An answer to this question requires us to survey a rich landscape of theory changes as moves in an encompassing space of possible theories.Footnote 30\n\nWe would like to mention one other aspect to theory change that is related to two issues discussed above, namely the decision to introduce a new theory and the type of theory change effected by that. It concerns the notion of awareness. Hill (2010) and Dietrich and List (2013) have argued that a decision problem obtains new dimensions when the agent is made aware of considerations that were previously not live to her. We think that roughly the same can be said about the epistemic problems an agent faces, and that the foregoing offers a natural model for an agent that becomes aware of a theory while performing a predictive, or more generally an epistemic task. It seems natural to combine the frameworks for modeling awareness.\n\nFinally, we briefly mention two possibilities that open-minded Bayesianism offers, when it is combined with ideas on relative infinitesimals (in the sense of Wenmackers 2013). On one side of the spectrum, the framework allows us to model radically skeptical yet empiricist epistemic attitudes: all the priors and posteriors of explicit hypotheses, old and new ones, may be very small, indeed infinitesimally small, compared to the probabilities associated with the catch-all. That is, we may choose $$\\tau _N$$ to be some number very close to one. 
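As a quick numerical gloss on this regime, exact arithmetic with an arbitrary stand-in value for $$\\tau _N$$ (our own toy choice) shows that comparative judgments among the explicit hypotheses are untouched by how much mass the catch-all absorbs.\n\n```python\n# tau_N extremely close to one: the explicit hypotheses share a sliver of\n# probability, yet their posterior ratio is exactly what it would be\n# without the huge catch-all. (Toy example; the value of tau_N is ours.)\nfrom fractions import Fraction\n\ntau_N = 1 - Fraction(1, 10**12)  # catch-all absorbs almost all the mass\nrest = 1 - tau_N                 # 10^-12 left for the explicit hypotheses\nprior = {'H0': rest / 2, 'H1': rest / 2}\nlik = {'H0': Fraction(1, 100) ** 5, 'H1': Fraction(1, 5) ** 5}\n\nratio = (prior['H1'] * lik['H1']) / (prior['H0'] * lik['H0'])\nprint(ratio)  # 3200000, independent of how tiny the explicit priors are\n```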
Despite that, a particular theory may have a large prior or posterior relative to the other theories in the theoretical context. The framework thus allows us to model a radical sceptic who is nevertheless sensitive to differences in empirical support. On the other side of the spectrum, the framework of open-minded Bayesianism allows us to model practical certainty without spilling over into dogmatism. We may be aware of the existence of certain hypotheses, but we might choose not to include them in our considerations: they may seem irrelevant to the kinds of evidence under study (assuming statistical independence), they are deemed highly unlikely,Footnote 31 including them requires too high a number of computations, or other pragmatic reasons. However, upon receiving falsifying or strongly disconfirming evidence, we might want to reconsider some of these omissions.Footnote 32 The catch-all hypothesis with an infinitesimal prior may then serve as a reservoir for the hypotheses that seemed dispensable at one point in time, but that later on turn out to be relevant. Falsifying or strongly disconfirming evidence may lead to a situation in which the probability of the catch-all is no longer regarded as a relative infinitesimal: the marginal likelihood becomes so small that it becomes comparable to the probability of the catch-all.Footnote 33\n\nThe above list of research topics indicates that our resolution to the problem of old evidence and new theories leaves much to be done. However, the list also suggests that the framework of open-minded Bayesianism provides access to several interesting aspects of belief dynamics that fall outside the scope of standard Bayesianism. We call to mind what Sue says in (Earman (1992), p. 235): “By all means keep an open mind, but not so open that your brain falls out.” It seems to us that open-minded Bayesianism does precisely that.\n\n1. 1.\n\nSee for instance Easwaran (2011) for a recent overview of approaches to the problem of old evidence.\n\n2. 2.\n\nYet, this has been noted in the literature. See for example Gillies (2001).\n\n3. 3.\n\nWe also agree with Sprenger (2014) that, if we intend to capture objective confirmation in a scientific context, the relevant credence function belongs to an abstract agent representing any unbiased scientist in the relevant context, rather than a particular historical person.\n\n4. 4.\n\nThroughout the paper, we assume denominators to be non-zero.\n\n5. 5.\n\nSome introductory texts, such as (Hogg (2012), p. 4) and Bertsekas and Tsitsiklis (2008) even argue that we are dealing with four different functions and suggest the use of subscripts to distinguish between them. But we follow a different approach.\n\n6. 6.\n\nIn the discrete case, we may think of the sample space as the set of infinitely long sequences (ranging over temporal instants or individuals) of the values of a property (from a discrete set $$S$$) or a vector of properties (each from a discrete set $$S_i$$): $$\\varOmega = S^I$$, with $$S$$ the possible values of a certain property or a Cartesian product set of such value sets $$S=\\prod _i S_i$$ and $$I$$ the infinite index set (e.g., $$\\mathbb {N}$$); see for instance (Romeijn (2011), Sect. 2). Considering the algebra spanned by the cylindrical subsets of this sample space allows us to represent measurements as initial segments of infinitely long streams of data.\n\n7. 7.\n\nUsing this terminology, this article deals with the problem of new hypotheses, rather than the problem of new theories.\n\n8. 
8.\n\nSee for instance (Romeijn (2011), Sect. 7). In such a case, it is more common to speak of a statistical model or a theory, but we stick to the term ‘hypothesis’, to avoid confusion with scientific theories.\n\n9. 9.\n\nIt would be more accurate to label the set as $$\\varTheta _{\\mathcal {A}(\\varOmega )}$$, but we omit the subscript to keep the notation light.\n\n10. 10.\n\nIt is clear that an indeterministic theory can generate statistical predictions about measurable quantities. In the case of deterministic theories, such as Newtonian mechanics, it may be less clear how they lead to hypotheses that are expressed in terms of a probability assignment. However, when we combine such a theory with measured values for masses, velocities, etc., the associated measurement uncertainty can be represented in terms of probability distributions, which in turn leads to statistical predictions concerning other measurable quantities.\n\n11. 11.\n\nTypically, this will happen because the evidence was surprising according to the hypotheses currently under consideration, as witnessed by a very low likelihood (i.e., $$P(E | H_i)$$ is very small for every $$i$$), while initially it did seem possible to obtain evidence with a higher likelihood. A principled decision to introduce a new theory may be based on the computation of a model score, or on the application of a model selection tool. But such scores and tools fall outside the scope of the present paper. The procedure for deciding to introduce a new theory is not intended to be a part of our model.\n\n12. 12.\n\nSee for instance (Duhem (1906), p. 311): “Entre deux théorèmes de Géométrie qui sont contradictoires entre eux, il n’y a pas place pour un troisième jugement; si l’un est faux, l’autre est nécessairement vrai. Deux hypothèses de Physique constituent-elles jamais un dilemme aussi rigoureux? Oserons-nous jamais affirmer qu’aucune autre hypothèse n’est imaginable?” (Translation: Between two contradictory theorems of geometry, there is no room for a third judgment; if one is false, the other is necessarily true. Do two hypotheses in physics ever constitute so rigorous a dilemma? Shall we ever dare to assert that no other hypothesis is imaginable?) As an example, he considers the hypotheses concerning the nature of light (particles versus wave) and asks if it is forbidden that light may have a different nature altogether.\n\n13. 13.\n\nSee Morey et al. (2013) for a less rigorous exposition of open-minded Bayesianism, which they term humble Bayesianism, in a statistics context. As said above, the idea of open-mindedness is already present in what (Lindley (1991), p. 104) called Cromwell’s Rule.\n\n14. 14.\n\nIn statistics this is known as hierarchical modeling (cf. Gelman et al. 2004). A useful philosophical angle on this is provided in Henderson et al. (2010).\n\n15. 15.\n\nIf the option of a catch-all simply hasn’t been considered, one might intuitively expect its probability to be undefined rather than zero. However, if we represent Bayesianism without a catch-all within an open-minded framework, a probability has to be assigned to the catch-all and its value has to be zero: see Sect. 3.3.2.\n\n16. 16.\n\nAlthough we do not advocate this here, the vocal formalism is compatible with assigning a definite prior to the catch-all. See Sect. 4.4 for some thoughts on the case in which the prior of the catch-all is either close to unity or close to zero.\n\n17. 17.\n\nSince the inspector assumes that the test is perfect, instead of representing the test results, she may just as well represent these data in terms of dishes being infected or not (such that 0 means that a dish is not infected and 1 that a dish is infected).
This illustrates how data and evidence may come apart: we regard evidence as interpreted data, where the interpretation depends on the sample space that is used in a hypothesis. For an example, see footnote 29.\n\n18. 18.\n\nThe assumption of equal priors is not essential for the framework. The agent may assign different priors, based on considerations that are external to the Bayesian framework, such as relevant base rates (where the usual reference class problem emerges; cf. Hájek 2007).\n\n19. 19.\n\nSince these factors are all known at $$t=2$$, it is not a problem if they do not cancel.\n\n20. 20.\n\nObserve that the catch-all $$\\overline{\\varTheta _2}$$ is strictly larger than the family of binomial distributions with $$p \\in [0,1] \\setminus \\{ 0.01, 0.2 \\}$$. The binomial distribution only applies to situations that can be thought of as having a fixed bias and producing independent outcomes. The catch-all should be large enough to allow the agent to reconsider even these assumptions at a later point in time.\n\n21. 21.\n\nThis may be a reason to consider a particular measure of confirmation, such as $$P(H \\mid E) - P(H \\mid \\overline{E})$$ (cf.  Christensen 1999; Joyce 1999), for which the factors do cancel out.\n\n22. 22.\n\nRecall from Sect. 2 that we interpret $$E$$ as shorthand for $$\\varTheta \\times E$$, so $$E \\cap \\varTheta _{N}$$ should be understood as $$(\\varTheta \\cap \\varTheta _N) \\times E = \\varTheta _N \\times E$$.\n\n23. 23.\n\nOr assuming it to be unity minus an infinitesimal: see Sect. 4.4.\n\n24. 24.\n\nMore accurately, the decomposition into definite and indefinite factors changes in a way that is reminiscent of a renormalization.\n\n25. 25.\n\nIn this regard, our approach resembles proposed solutions that employ counterfactual credences.\n\n26. 26.\n\nVocal open-minded Bayesianism can be compared with the analysis of the problem of old evidence given by both Garber (1983) and Jeffrey (1983), who concluded that what is discovered is the fact that the new theory entails the old evidence. To model agents who discover a statement of this kind, they proposed weakening the Bayesian background assumption of logical omniscience. The vocal approach paints a similar, reconstructive picture, though it is not logical omniscience that fails the agent: what is discovered upon the change in the algebra at $$t=2$$, is how to express the posterior (and hence the confirmation) of the new hypothesis given the old evidence, which was inexpressible at $$t=1$$.\n\n27. 27.\n\nWe might call the silently open-minded Bayesian a relativized standard Bayesian: the probabilities conditionalized on the theoretical context appearing in the humble approach equal the corresponding unconditional probabilities of the approach without a catch-all.\n\n28. 28.\n\nThe old evidence was simply ‘five out of five dishes are infected’, whereas in the new theoretical context, the old data (five positive test results) are reinterpreted as ‘five out of five dishes appear to be infected’. This illustrates how the evidence itself may change with the advent of a new hypothesis and that raw data should be sacrosanct; cf. footnote 18.\n\n29. 29.\n\n(Earman (1992), p. 
196) has introduced a distinction between two forms of theory change: “The mildest form occurs when the new theory articulates a possibility that lay within the boundaries of the space of theories to be taken seriously but that, because of the failure of logical omniscience [...], had previously been unrecognized as an explicit possibility. The more radical form occurs when the space of possibilities is itself significantly altered.” Although this is a helpful way of categorizing theory change, it is not an absolute one: the kind of theory change that we have discussed can be reconstructed as a radical one in the silent approach (in which $$\\varTheta _N$$ is extended to $$\\varTheta _{N+1}$$) or as a mild one in the vocal approach (in which the partition on $$\\varTheta$$ is refined). Presumably, radical changes that can be reconstructed as mild changes are best considered as intermediate cases, since both milder and more radical changes are conceivable.\n\n30. 30.\n\nRecall that we have defined $$\\varTheta$$ as the set of all probability functions on a common domain, $$\\mathcal {A}(\\varOmega )$$. Arguably, it may suffice to choose a smaller set $$\\varTheta$$, namely the set of all computable probability functions on the domain $$\\mathcal {A}(\\varOmega )$$. This is the idea behind the celebrated theory of universal prediction by Solomonoff (1964).\n\n31. 31.\n\nIn a probabilistic framework, very few theories (or better: the associated statistical hypotheses) can ever be refuted completely, yet some theories—say, phlogiston theory—may become so unlikely that no scientist ever considers them again once a better alternative has been found.\n\n32. 32.\n\nA formal model of the Lockean thesis in terms of context-dependent infinitesimals is given by Wenmackers (2013). Pacuit et al. (2013) provide a different example of the use of infinitesimal probabilities for modeling the revision of practical certainties. See also Schwitzgebel (2014) on what he calls “1%-skepticism” for a less formal treatment of related issues.\n\n33. 33.\n\nIn fact, the food inspection example may be interpreted in this way: the inspector may have been aware of precedents involving contaminated equipment and assumed this possibility to be irrelevant only until she faced some evidence suggesting otherwise.\n\n## References\n\n1. Bertsekas, D. P., & Tsitsiklis, J. N. (2008). Introduction to probability (2nd ed.). Belmont: Athena Scientific.\n\n2. Christensen, D. (1999). Measuring confirmation. Journal of Philosophy, 96, 437–461.\n\n3. Dawid, A. P. (1982). The well-calibrated Bayesian. Journal of the American Statistical Association, 77, 605–610.\n\n4. Dietrich, F., & List, C. (2013). A reason-based theory of rational choice. Noûs, 47, 104–134.\n\n5. Duhem, P. (1906). La théorie physique; Son object et sa structure. Bibliothèque de philosophie expérimentale (Vol. 2). Paris: Chevalier & Rivière.\n\n6. Earman, J. (1992). Bayes or bust? A critical examination of Bayesian confirmation theory. Cambridge, MA: MIT Press.\n\n7. Easwaran, K. (2011). Bayesianism II: Applications and criticisms. Philosophy Compass, 6, 321–332.\n\n8. Garber, D. (1983). Old evidence and logical omniscience in Bayesian confirmation theory. In J. Earman (Ed.), Testing scientific theories, Minnesota studies in the philosophy of science (Vol. 10, pp. 99–131). Minneapolis: University of Minnesota Press.\n\n9. Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2004). Bayesian data analysis (2nd ed.). Boca Raton: Chapman and Hall.\n\n10. Gillies, D. 
(2001). Bayesianism and the fixity of the theoretical framework. In D. Corfield & J. Williamson (Eds.), Foundations of Bayesianism (pp. 363–379). Dordrecht: Kluwer.\n\n11. Glymour, C. (1980). Why I am not a Bayesian. In C. Glymour (Ed.), Theory and Evidence (pp. 63–93). Princeton: Princeton University Press.\n\n12. Haenni, R., Romeijn, J. W., Wheeler, G., & Williamson, J. (2003). Probabilistic logics and probabilistic networks, synthese library: Studies in epistemology, logic, methodology, and philosophy of science (Vol. 350). Dordrecht: Springer.\n\n13. Hájek, A. (2003). What conditional probability could not be. Synthese, 137, 273–323.\n\n14. Hájek, A. (2007). The reference class problem is your problem too. Synthese, 156, 563–585.\n\n15. Hájek, A. (2012). Is strict coherence coherent? Dialectica, 66, 411–424.\n\n16. Halpern, J. Y. (2003). Reasoning about uncertainty. Cambridge, MA: MIT Press.\n\n17. Henderson, N. L., Goodman, J. D., Tenenbaum, J. B., & Woodward, F. (2010). The structure and dynamics of scientific theories: A hierarchical Bayesian perspective. Philosophy of Science, 77, 172–200.\n\n18. Hill, B. (2010). Awareness dynamics. Journal of Philosophical Logic, 39, 113–137.\n\n19. Hogg, DW. (2012). Data analysis recipes: Probability calculus for inference. http://arxiv.org/abs/1205.4446v1.\n\n20. Jeffrey, R. (1983). Bayesianism with a human face. In J. Earman (Ed.), Testing scientific theories, Minnesota studies in the philosophy of science (pp. 133–156). University of Minnesota Press: Minneapolis.\n\n21. Joyce, J. (1999). The foundations of causal decision theory. New York, NY: Cambridge University Press.\n\n22. Lindley, D. (1991). Making decisions (2nd ed.). London: Wiley.\n\n23. Morey, R., Romeijn, J. W., & Rouder, J. N. (2013). The humble Bayesian: Model checking from a fully Bayesian perspective. British Journal of Mathematical and Statistical Psychology, 66, 68–75.\n\n24. Pacuit, E., Pedersen, A. P., & Romeijn, J. W. (2013). When is an example a counterexample? In B. C. Schipper (Ed.), TARK XIV proceedings (pp. 156–165). New York: ACM Digital Library.\n\n25. Romeijn, J. W. (2004). Hypotheses and inductive predictions. Synthese, 143(3), 333–364.\n\n26. Romeijn, J. W. (2005). Theory change and Bayesian statistical inference. Philosophy of Science, 72, 1174–1186.\n\n27. Romeijn, J. W. (2011). Statistics as inductive logic. In P. Bandyopadhyay & M. Forster (Eds.), Philosophy of statistics (Vol. 7, pp. 751–774)., Handbook for the philosophy of science Oxford, North Holland: Elsevier.\n\n28. Salmon, W. C. (1990). Rationality and objectivity in science or Tom Kuhn meets Tom Bayes. In C. W. Savage (Ed.), Scientific theories, Minnesota studies in the philosophy of science (pp. 175–205). Minneapolis: University of Minnesota Press.\n\n29. Schwitzgebel, E. (2014). 1% Skepticism. Unpublished manuscript, http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/1%25Skepticism-140512.pdf.\n\n30. Shimony, A. (1970). Scientific inference. In R. G. Colony (Ed.), The Nature and function of scientific theories (pp. 79–172). Pittsburgh: The University of Pittsburgh Press.\n\n31. Sklar, L. (1981). Do unborn hypotheses have rights? Pacific Philosophical Quarterly, 62, 17–29.\n\n32. Solomonoff, R. J. (1964). A formal theory of inductive inference; Parts I and II. Information and Control, 7, 1–22 and 224–254.\n\n33. Sprenger, J. (2014). A novel solution to the problem of old evidence. Unpublished manuscript, http://philsci-archive.pitt.edu/10643/.\n\n34. Stanford, K. (2006). 
Exceeding our grasp: Science, history, and the problem of unconceived alternatives. Oxford: Oxford University Press.\n\n35. Talbott, W. (2008). Bayesian epistemology. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy. http://plato.stanford.edu/entries/epistemology-bayesian/.\n\n36. Walley, P. (2000). Towards a unified theory of imprecise probability. International Journal of Approximate Reasoning, 24, 125–148.\n\n37. Wenmackers, S. (2013). Ultralarge lotteries: Analyzing the lottery paradox using non-standard analysis. Journal of Applied Logic, 11, 452–467.\n\n## Acknowledgments\n\nWe are grateful to Clark Glymour and the other participants of the June 2013 symposium in Düsseldorf for helpful discussions as well as to Eric Schwitzgebel and two anonymous referees for constructive feedback on the previous version of this article. SW’s work was financially supported by a Veni-grant from the Dutch Research Organization (NWO project “Inexactness in the exact sciences” 639.031.244). JWR’s work was financially supported by a Vidi-grant from the Dutch Research Organization (NWO project “What are the chances” 276.20.015) and by the visiting fellowship programme of the University of Johannesburg.\n\n## Author information\n\n### Corresponding author\n\nCorrespondence to Sylvia Wenmackers.\n\n## Rights and permissions\n\nOpen Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited." ]
http://zwizwa.be/-/meta/20110831-203030
[ "`[<<][meta][>>][..]`\nWed Aug 31 20:30:30 CEST 2011\n\n## Ha!\n\n```The trick is the following: the only thing I don't know (and don't\ncare about) in practice is the form of the container (nested pairs)\nbecause it will be flattened into a list.\n\nIn practice, when compiling I do know what the base terms are, so\ncompiling will probably not effectively loose any information, just\nthe static types which are no longer necessary because the output data\n(language syntax) is already in the required form.\n\nSo let's start out with a class that expresses this: a state object is\ncomposite, and has an interface that allows enumeration and gathering\ninto lists of a certain type. Is this a standard interface?\n\nSo the idea is to convert:\n\nstateShow :: s -> [String]\n\nto something more like\n\nstateShow :: s -> [p]\n\nwhere p is another parameter that represents the primitive state type.\nThe default implementation could then be p = String.\n\nThe trouble I had before was to get the following to typecheck (with\nstateShow renamed to statePeek) :\n\ninstance (SigState a p, SigState b p) => SigState (StateProd a b) p where\nstateIndex n0 (StateProd (a, b)) = (n2, (StateProd (a', b'))) where\n(n1, b') = stateIndex n0 b -- Align numbering with causality through composition\n(n2, a') = stateIndex n1 a\nstatePeek (StateProd (a,b)) = statePeek a ++ statePeek b\n\nI got it to work, but I needed UndecidableInstances because of:\n\nthe Coverage Condition fails for one of the functional dependencies;\nUse -XUndecidableInstances to permit this\n\n```\n`[Reply][About]`\n`[<<][meta][>>][..]`" ]
https://www.andlearning.org/how-to-express-0-333-as-a-fraction/
[ "Connect with us\n\nHow to Express 0.333 as a fraction?\n\nAre you looking for answers to this question (How to Express 0.333 as a fraction?). Then you should follow steps that are mention below for the solution of this question (How to Express 0.333 as a fraction?).\n\nStep 1: Any number divided by 1 equals the original number. We can write $$\\frac{0.333}{1}$$\n\nStep 2: Multiply both numerator and denominator by 10 for every digit after the decimal point. $$\\frac{0.333 \\times 1000}{1 \\times 1000}$$\n\nStep 3: Simplify the multiplication $$\\frac{333}{1000}$$\n\nStep 4: Simplify the Fraction $$\\frac{333}{1000}$$ [No factor]\n\nResult $$0.333 \\; as \\; a \\; \\frac{333}{1000}$$\n\nHow to Express Decimal as a Fraction?\n\nIf you have any doubts regarding the conversion of 0.333 as a fraction, please let me know through social media and mail. If you would like to contribute any things regarding Math or education the visit write for us." ]